I've been commissioned to work on foreign keys. More specifically, on the problem that when foreign keys are verified, they sometimes obtain locks that are stronger than really needed. This causes some operations to block unnecessarily and, perhaps more aggravating, some other operations to deadlock. This problem has been known for a very long time, and it affects many users to varying degrees. The most recent detailed discussion about it took place in August 2010 on pgsql-hackers.

To recapitulate the problem a bit: in the original code, foreign key checks obtained FOR UPDATE locks on referenced tuples, meaning that those tuples were exclusively locked for the duration of the transaction doing the check. This was so strong a lock that it had a severe impact on the performance of applications that expected to concurrently access and modify tables with foreign key relationships. Consequently, many people used to drop their foreign keys just to get a reasonable concurrency level. We partly fixed this by introducing SELECT FOR SHARE in 8.1, which allowed checks to be run concurrently. This had an enormous positive impact on concurrency, so people then began to use foreign keys more extensively.
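To make the mechanism concrete, here is a minimal sketch of what such a check amounts to, assuming two hypothetical tables a and b (the table layout is mine, not from the original post): inserting a referencing row into b causes the foreign key machinery to lock the referenced row in a, roughly as if the transaction had run SELECT ... FOR SHARE on it (FOR UPDATE in releases before 8.1).

    -- PostgreSQL sketch; table names and layout are assumptions.
    CREATE TABLE a (id integer PRIMARY KEY);
    CREATE TABLE b (id integer PRIMARY KEY,
                    a_id integer REFERENCES a (id));

    INSERT INTO a VALUES (1);

    BEGIN;
    INSERT INTO b VALUES (100, 1);
    -- The foreign key check locks the referenced row until this
    -- transaction ends, roughly equivalent to:
    --   SELECT 1 FROM a WHERE id = 1 FOR SHARE;
    COMMIT;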
But when you start raising the load level, at some point another problem becomes apparent: the locks taken are stronger than strictly necessary, causing pauses and sometimes deadlocks. Put simply, certain operations are blocked when there is no need for it.

Joel Jacobson of Glue Finance illustrated it with an example in the post referenced above, which can be seen in action in this screencast. In Joel's example, he was getting an unexpected deadlock when session 2 updated the row the second time. Why, he asked, wasn't the process getting blocked the first time around? His initial explanation was incorrect, but the underlying reason for his problem was that the transaction was getting a shared lock on the referenced row (the one on table A), which it really didn't need except to ensure that the row didn't go away - that is, to make sure the foreign key constraint remained satisfied until it could commit.
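Joel's exact sequence isn't reproduced here, but the general shape of this kind of deadlock is easy to sketch. The following is a simplified illustration, not his original example, reusing the hypothetical a and b tables from above: once two transactions hold the shared lock taken by the foreign key check, updates of the referenced row from both sessions end up waiting on each other.

    -- Simplified illustration of the deadlock class (not Joel's example).
    -- Session 1:
    BEGIN;
    INSERT INTO b VALUES (101, 1);      -- FK check: shared lock on a.id = 1
    -- Session 2:
    BEGIN;
    INSERT INTO b VALUES (102, 1);      -- shared lock on the same row; no blocking
    -- Session 1:
    UPDATE a SET id = id WHERE id = 1;  -- needs an exclusive lock; waits for session 2
    -- Session 2:
    UPDATE a SET id = id WHERE id = 1;  -- waits for session 1; deadlock detected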
A similar question about stronger-than-necessary locks comes up with SQL Server indexes. A reader described the following experiment: "As a variation, I tried 2 separate transactions. The first updates the IncludedCol using the Clustered Index Seek; this puts an eXclusive lock on the Clustered Index row, but not on the IDX_NCI row, presumably because the IncludedCol value was already 10, so the transaction did not lock this row. Leaving this first transaction uncommitted, I then ran the select statement using the Non-Clustered Index Seek and Key Lookup. Since the 1st transaction did not lock the IDX_NCI row, it did not wait for a shared lock on this row. Since the 1st transaction still held the eXclusive lock on the IDX_CI row, I expected this 2nd transaction to block and wait for a Shared lock on this row. This is what happens at the repeatable read and/or serializable levels, but NOT at the read committed level. I am not sure why the 2nd transaction does not block on the Key Lookup at the read committed level. Perhaps there is a cached plan it is reading Col1 from?"
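The schema behind that experiment isn't shown in the text, so the following T-SQL sketch is a reconstruction under assumptions: a hypothetical table dbo.T carrying the index and column names mentioned above (IDX_CI, IDX_NCI, Col1, IncludedCol), plus an extra OtherCol added purely to force the Key Lookup. Per the discussion, the select reportedly does not block at READ COMMITTED but does at REPEATABLE READ or SERIALIZABLE.

    -- Hypothetical schema; only the index/column names come from the discussion.
    CREATE TABLE dbo.T
    (
        ID INT NOT NULL,
        Col1 INT NOT NULL,
        IncludedCol INT NOT NULL,
        OtherCol INT NOT NULL        -- not covered by IDX_NCI; forces the Key Lookup
    );
    CREATE UNIQUE CLUSTERED INDEX IDX_CI ON dbo.T (ID);
    CREATE NONCLUSTERED INDEX IDX_NCI ON dbo.T (Col1) INCLUDE (IncludedCol);
    INSERT INTO dbo.T (ID, Col1, IncludedCol, OtherCol) VALUES (1, 1, 10, 0);

    -- Session 1: update through the Clustered Index Seek, leave it open.
    BEGIN TRAN;
    UPDATE dbo.T SET IncludedCol = 10 WHERE ID = 1;       -- same value as before

    -- Session 2: Non-Clustered Index Seek plus Key Lookup into IDX_CI.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;       -- reportedly does not block
    -- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- reportedly blocks
    SELECT Col1, IncludedCol, OtherCol
    FROM dbo.T WITH (INDEX (IDX_NCI))
    WHERE Col1 = 1;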
Dmitri Korotkevitch, the post author, replied: the update statement does not acquire an (X) lock on the NCI because, technically, it did not modify the data. That will change if you use any other value for IncludedCol, in which case the indexes will actually be modified. On the SELECT side, it seems that there is some internal attribute in the lock structure that indicates such a condition, and there appears to be an optimization in READ COMMITTED which detects it and does not acquire the (S) lock in that case. This is somewhat similar to the condition where a session may skip the (S) lock if another (S) lock is held on the same row. It makes sense because the (S) lock will/should be released immediately, and the (X) lock guarantees that the row does not hold any other (X) locks. When you use REPEATABLE READ or SERIALIZABLE, the session needs to get the (S) lock because it may be held longer than the (X) lock, so you would have blocking if the SELECT uses those isolation levels.
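A generic way to check which locks each session actually holds while the first transaction is still open (not something from the discussion, just a standard technique) is to query the sys.dm_tran_locks DMV:

    -- Inspect granted and waiting locks for the current database.
    SELECT resource_type,
           resource_description,
           request_mode,          -- S, X, IX, U, ...
           request_status,        -- GRANT or WAIT
           request_session_id
    FROM sys.dm_tran_locks
    WHERE resource_database_id = DB_ID()
      AND resource_type IN ('KEY', 'PAGE', 'OBJECT');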