We experienced some data locks that caused performance issues for one of our customers. Going through the motions, we traced the problem to this statement (scrambled here for anonymity):
update_recordSet adjustment
    setting Voucher = voucher
    where !adjustment.Voucher
join trans
    where trans.Status == Status::Open
       && trans.DateTime < perDateTime
       && trans.RecId == adjustment.Trans;
The table Adjustment is connected to the table Trans through a relation like this: Adjustment.Trans = Trans.RecId. And Adjustment has, among others, a non-clustered index on Trans, Voucher and a couple of other fields, in that order.
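As a sketch, the existing index on Adjustment would look roughly like this on the SQL side (the index name is illustrative, and the real index contains a couple more fields; AX also typically prepends PARTITION and DATAAREAID to indexes on company-specific tables):

```sql
-- Hypothetical sketch of the existing non-clustered index on ADJUSTMENT.
-- TRANS is the leading key column; VOUCHER comes second.
CREATE NONCLUSTERED INDEX I_ADJUSTMENT_TRANSVOUCHERIDX
    ON ADJUSTMENT (TRANS, VOUCHER /*, a couple of other fields */);
```

The key point is the column order: TRANS leads, VOUCHER follows.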
So you might think that SQL Server would be capable of utilising this index, since both Trans and Voucher are in play in the attempt to limit the records involved.
Seen from the SQL Server side, the statement ends up like this:
(@P1 NVARCHAR(21), @P2 INT, @P3 BIGINT, @P4 NVARCHAR(5), @P5 NVARCHAR(21),
 @P6 BIGINT, @P7 NVARCHAR(5), @P8 INT, @P9 DATETIME2)
UPDATE T1 SET VOUCHER = @P1, RECVERSION = @P2
FROM ADJUSTMENT T1 CROSS JOIN TRANS T2
WHERE (((T1.PARTITION = @P3) AND (T1.DATAAREAID = @P4)) AND ((T1.VOUCHER = @P5)))
  AND (((T2.PARTITION = @P6) AND (T2.DATAAREAID = @P7))
   AND (((T2.STATUS = @P8) AND (T2.DATETIME < @P9)) AND (T2.RECID = T1.TRANS)))
Now, executing this resulted in an index scan, which in turn caused heavy locking of data. The reason for this, and the reason the index could not be used, is that SQL Server effectively evaluates this as two selections: Adjustment records filtered only on the Voucher field, and Trans records filtered on the specified ranges except the relation range, and then combines the two result sets. Since Voucher is not the leading field of the index on Adjustment, SQL Server cannot seek on it and falls back to scanning.
Adding an index with Voucher as the first field solves the problem, and the data locking stops.
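The fix above can be sketched as T-SQL like this (in AX you would of course add the index in the AOT and let the synchronisation create it; the index name and the inclusion of TRANS as a second field are my assumptions):

```sql
-- Hypothetical sketch of the new index: VOUCHER leads, so the predicate
-- T1.VOUCHER = @P5 can be resolved with an index seek instead of a scan.
CREATE NONCLUSTERED INDEX I_ADJUSTMENT_VOUCHERIDX
    ON ADJUSTMENT (VOUCHER, TRANS);
```

With Voucher as the leading key column, the range on Voucher alone is enough for a seek, so only the matching Adjustment rows are touched and locked.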