Is it true that the probability of a deadlock is much higher if one transaction is using pessimistic locking?

db042190 1,516 Reputation points
2023-01-17T17:36:03.9933333+00:00

Hi, I'm reading (somewhat quickly) the discussion at sites like https://stackoverflow.com/questions/129329/optimistic-vs-pessimistic-locking.

I did not know that deadlocks are more likely when at least one side uses pessimistic locking. I would have thought the opposite, or that they'd be equally probable, i.e. the classic example of two processes currently on different records but both about to retrieve the same (a third) record.

Why do I ask? We have an incremental process that, in a first step, records the high timestamp key of about 15 tables it wants to read (it already knows the low key from the last time it ran), and in a second step retrieves the range of records described by step 1 across the 15 tables.

About once a week we experience a deadlock in which the incremental process is the victim of our front end's screens and/or batch-like processes. By design, this does no harm to the incremental process. We know our front end does pessimistic locking in certain situations, which I'm trying to get a better idea of. If there is a 99% chance that our incremental falls victim to the pessimistic locking, I'd feel more comfortable, because I suspect the incremental will always lose to the pessimistic locker. But if it's just as likely that, sooner or later, the front end will fall victim to the incremental's optimistic locking, I'm worried.

SQL Server

5 answers

  1. Erland Sommarskog 116.9K Reputation points MVP
    2023-01-17T21:12:40.6466667+00:00

    SQL Server selects the deadlock victim based on the number of log records. The process with the fewest log records will be selected as the victim, presumably because it is the cheapest one to roll back.

    From what you describe, I would rather expect the front-ends to be the victims, since they typically have shorter transactions.

    If I understand you correctly, you want this incremental process to always be the victim. There is a simple way to achieve this. Have the incremental process to issue this command:

    SET DEADLOCK_PRIORITY LOW
    

    which basically means: "Hey, if there is a deadlock, I volunteer to be the victim."
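
    For example (a sketch only; the table, column, and variable names below are hypothetical, not from this thread), the incremental extract's batch could look like this:

    ```sql
    -- This session volunteers to be the deadlock victim.
    SET DEADLOCK_PRIORITY LOW;

    -- Hypothetical step-2 range read over one of the 15 tables,
    -- using the low key from the previous run and the high key
    -- recorded in step 1.
    DECLARE @LowKey  binary(8) = 0x0000000000001000,
            @HighKey binary(8) = 0x0000000000002000;

    SELECT *
    FROM dbo.SomeSourceTable
    WHERE RowStamp >  @LowKey
      AND RowStamp <= @HighKey;
    ```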


  2. Seeya Xi-MSFT 16,571 Reputation points
    2023-01-18T06:40:47.4866667+00:00

    Hi db042190,

    Pessimistic locking: records are locked when they are read within a transaction, preventing other users from making changes before the transaction completes. This ensures that updates succeed, unless a deadlock occurs. This is the default locking method used by SQL Server.

    Optimistic locking: records are NOT locked when they are read within a transaction, so other users can still make changes to the database; the records are locked only when they are updated. This introduces the possibility that two transactions modify the same data at the same time, so the update must verify that the row has not changed in the meantime. Although this is not SQL Server's default locking method, it can be implemented in the application, for example with cursors, since a cursor lets you control an individual record in a recordset.
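
    As a rough T-SQL sketch of the two styles (the dbo.Orders table and its columns are made up for illustration):

    ```sql
    -- Pessimistic: take an update lock at read time, so nobody else
    -- can change the row between the read and the write.
    BEGIN TRANSACTION;
        SELECT Qty
        FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
        WHERE OrderId = 42;

        UPDATE dbo.Orders SET Qty = Qty - 1 WHERE OrderId = 42;
    COMMIT;

    -- Optimistic: read without holding the lock, then make the update
    -- conditional on the row not having changed (here via a
    -- hypothetical rowversion column RV).
    DECLARE @rv binary(8);
    SELECT @rv = RV FROM dbo.Orders WHERE OrderId = 42;

    UPDATE dbo.Orders
    SET Qty = Qty - 1
    WHERE OrderId = 42
      AND RV = @rv;   -- 0 rows affected => someone else changed it; retry
    ```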

    Deadlock: a deadlock occurs when two (or more) transactions each want to access a resource that is locked by the other. Both processes wait on each other indefinitely, so SQL Server intervenes to end the contention: it chooses one of the transactions involved in the deadlock as the victim and forces it to roll back all of its actions.
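
    A minimal illustration (tables dbo.A and dbo.B are hypothetical): two sessions that lock the same two resources in opposite order will deadlock as soon as their timing overlaps:

    ```sql
    -- Session 1                           -- Session 2
    BEGIN TRAN;                            BEGIN TRAN;
    UPDATE dbo.A SET x = 1 WHERE id = 1;   UPDATE dbo.B SET x = 1 WHERE id = 1;
    -- now wants B, blocked by session 2   -- now wants A, blocked by session 1
    UPDATE dbo.B SET x = 2 WHERE id = 1;   UPDATE dbo.A SET x = 2 WHERE id = 1;
    -- SQL Server picks one session as the victim (error 1205)
    -- and rolls its transaction back; the other proceeds.
    ```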

    I agree with Erland: setting the deadlock priority is a good approach. Here is a related article for your reference.

    https://blog.quest.com/the-anatomy-of-sql-server-deadlocks-and-the-best-ways-to-avoid-them/

    Best regards,

    Seeya




  3. LiHongMSFT-4306 30,671 Reputation points
    2023-01-18T08:43:09.5166667+00:00

    Hi @db042190

    When you update a row, you have to lock it. The difference between optimistic and pessimistic locking is whether you pessimistically lock the row at read time, just in case you might update it, or whether you optimistically wait until you know you want to update it before obtaining the lock.

    If your application is not designed to lock resources in the same order everywhere, it's easy enough to deadlock regardless of your locking strategy.
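
    As a sketch of that rule (table names are hypothetical): if every transaction that touches both tables always locks them in the same order, the wait-for cycle cannot form; the second transaction simply waits its turn.

    ```sql
    -- Convention (an assumption for this example): every transaction
    -- that updates both tables touches dbo.A before dbo.B.
    BEGIN TRAN;
        UPDATE dbo.A SET x = x + 1 WHERE id = 1;  -- always first
        UPDATE dbo.B SET x = x + 1 WHERE id = 1;  -- always second
    COMMIT;
    ```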

    Best regards,

    Cosmog Hong


  4. db042190 1,516 Reputation points
    2023-01-26T11:05:08.7+00:00

    Thanks, everybody, and sorry for the delay; I didn't get the usual email notifications that there were responses waiting. We are already the victim each time, presumably because all we are doing is a select of usually no more than 8,000 records using the tail end (from and thru) of a timestamp-based index. I got talked into using the stamp rather than its integer equivalent in the index, but I suspect that's not the problem. One front-end developer suggested that my tail-end approach on a "sorted" list of constantly changing values is part of the problem. I agree that these values can change often, but I might question the "ordered" part, unless the old rule of everybody doing their thing in the same order is what he's really talking about. His order of operations is significantly different from mine, and as far as I know we are already not running when his larger, transaction-based mass-update job runs. I do believe he is doing the pessimistic thing, which means he locks what he will later update. Nonetheless, there is no damage when we deadlock. I can put that SQL deadlock command in, but if I also use a TRY/CATCH, might I be able to conceal the deadlock from SQL and Spotfire so I don't need to worry people any longer?
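
    The TRY/CATCH idea above might be sketched like this (the retry count, delay, and table name are assumptions, not from the thread): catch deadlock error 1205, back off, and retry, so the caller only sees an error if the retries are exhausted or the error is something else.

    ```sql
    DECLARE @retry int = 3;

    WHILE @retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
                -- the incremental's hypothetical range read
                SELECT * FROM dbo.SomeSourceTable
                WHERE RowStamp > 0x01 AND RowStamp <= 0xFF;
            COMMIT;
            SET @retry = 0;   -- success: leave the loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

            IF ERROR_NUMBER() = 1205 AND @retry > 1
            BEGIN
                SET @retry = @retry - 1;
                WAITFOR DELAY '00:00:05';  -- back off before retrying
            END
            ELSE
            BEGIN
                THROW;  -- not a deadlock, or retries exhausted: surface it
            END
        END CATCH
    END
    ```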

    I just saw a built-in delay SQL offers on deadlocks that are "caught", so I'm pushing my front-end guy a little harder on just how long he is holding locks and whether we've missed any of his en-masse processes. I'm also looking at how connection pools instantiate, because if they are only an issue in his en-masse updates, I'll push him with "OK, but what about your more piecemeal transactions?" He (and Erland) agreed that we don't want to change any of that just so our secondary process can check whether one of his transactions is in flight.

    It looks like connection pools are instantiated from .NET. So I can see the argument that says "don't ask me to set the app in those pools for some of my multi-threaded/mass-update processes", but unless I'm missing something, they theoretically could be set in more piecemeal update transactions that involve a different .NET address space.


  5. db042190 1,516 Reputation points
    2023-01-26T11:07:34.8+00:00

    I posted a response but didn't see it, so I'm trying this as a test. Now I see both.

