SQL Server trigger insert causes lock until commit


I'm having trouble with locks on a related table caused by a trigger.

I'm inserting into table tCatalog, which has a trigger that inserts a record into table tSearchQueue. The insert into tCatalog happens inside a transaction that also does a lot of other work and takes several seconds. However, the tSearchQueue table is locked until the transaction commits. Is there a way to avoid this?

```sql
-- Trigger on tCatalog:
INSERT [dbo].[tSearchQueue] (Processed, [SQL], SYS_CreateDate)
SELECT 0, 'test ' + CAST(CatalogID AS varchar(10)), GETDATE()
FROM inserted

-- Calling code:
BEGIN TRAN t1

DECLARE @CatalogID int

INSERT tCatalog (ProgramID, SYS_CreatedBy, ItemNumber, Description, UOMID)
VALUES (233, 1263, 'brian catalog4', 'brian catalog4', 416)

SELECT @CatalogID = SCOPE_IDENTITY()

INSERT tCustomAttributeCatalog (CatalogID, CustomAttributeID, DefaultValue, DefaultValueRead, SYS_CreatedBy)
VALUES (@CatalogID, 299, 'no', 'no', 1263)

INSERT tCustomAttributeCatalog (CatalogID, CustomAttributeID, DefaultValue, DefaultValueRead, SYS_CreatedBy)
VALUES (@CatalogID, 300, NULL, NULL, 1263)

COMMIT TRAN t1
```

It looks like you have a background process that wants to be notified of changes so it can do some sort of re-indexing. If that's the case, it's not wrong for it to be blocked: if the transaction doesn't commit, it shouldn't index the changes anyway.

So the sequence is:

  • Begin transaction
  • Insert into tCatalog
    • Trigger inserts into tSearchQueue
  • Insert into other tables
  • Perform long-running operation
  • Commit transaction

And the problem is that another process that wants to read tSearchQueue cannot, since it is locked.

Option 1: perform the background operation in batches.

If the process is getting behind because it has few opportunities to read the table, then reading multiple rows at a time may solve the issue. That is, each time it gets a chance to read the queue, it should read many rows, process them all at once, and mark them done (or delete them, as the case may be).
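As a minimal sketch of such a batch reader, assuming tSearchQueue has an identity column named ID (an assumption; the question only shows Processed, SQL, and SYS_CreateDate), the reader can claim and fetch a batch in a single statement:

```sql
-- Hypothetical batch reader; ID is an assumed identity column on tSearchQueue.
DECLARE @Batch TABLE (ID int, [SQL] nvarchar(max));

-- Claim up to 100 unprocessed rows in one pass, capturing them as we mark them.
UPDATE TOP (100) q
SET Processed = 1
OUTPUT inserted.ID, inserted.[SQL] INTO @Batch (ID, [SQL])
FROM [dbo].[tSearchQueue] q
WHERE Processed = 0;

-- Process everything in @Batch here, then optionally delete the claimed rows:
-- DELETE q FROM [dbo].[tSearchQueue] q JOIN @Batch b ON q.ID = b.ID;
```

Claiming the rows with a single UPDATE ... OUTPUT avoids a separate SELECT-then-UPDATE race if more than one reader ever runs.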

Option 2: perform the long-running operation first, if possible:

  • Begin transaction
  • Perform long-running operation
  • Insert into tCatalog
    • Trigger inserts into tSearchQueue
  • Insert into other tables
  • Commit transaction

The other process then finds tSearchQueue locked for only a short time. Note that if the long-running operation is a file copy, file copies can be included in the transaction using CopyFileTransacted, or the copy can be rolled back in a "catch" block if the operation fails.

Option 3: the background process avoids the lock

If the other process is only trying to read the table, snapshot isolation may solve the problem. It returns only committed rows, as they existed at a point in time. Combined with row-level locking, this may solve the problem.

Alternatively, the background process might read with the NOLOCK hint (dirty reads). This may result in reading data from transactions that are later rolled back. If the data is validated in a separate step (e.g. you are only writing the identifier of an object that needs reindexing), that is not a problem: if the indexing process can cope with entries that no longer exist, or that haven't changed, the occasional spurious read won't matter.
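As a sketch of both approaches (the database name MyDb and the ID column are placeholders, not from the question):

```sql
-- One-time setup: allow snapshot isolation on the database.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Background reader using snapshot isolation: sees only rows that were
-- committed when its snapshot began, and is not blocked by open transactions.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT ID, [SQL] FROM [dbo].[tSearchQueue] WHERE Processed = 0;

-- Or, accepting dirty reads instead of a snapshot:
SELECT ID, [SQL] FROM [dbo].[tSearchQueue] WITH (NOLOCK) WHERE Processed = 0;
```

Snapshot isolation is usually the safer choice here, since NOLOCK can also skip or double-read rows during page splits, not just read uncommitted data.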

