There are SEVEN FKs against that table…I would bet that's 50% of the duration. The lack of an index is perhaps an issue, but
with that many FK references plus that many rows, the transaction log could easily blow out and start paging to disk.
When deleting more than perhaps 20k rows, I will normally write a delete loop: grab roughly 20-50k rows at a time (server capacity
dependent), delete that set, grab another set, and so on. That lets each set commit, releasing pressure on the tran log.
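A minimal sketch of that kind of batched delete loop, assuming SQL Server; the table name, filter column, and cutoff value are placeholders you would swap for your own:

```sql
-- Delete in batches so each set commits and the tran log can clear between loops.
-- dbo.BigTable and CreatedDate are hypothetical names for illustration.
DECLARE @BatchSize   int = 20000;   -- tune between ~20k and 50k per server capacity
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE CreatedDate < '2020-01-01';

    SET @RowsDeleted = @@ROWCOUNT;   -- 0 when no matching rows remain
END;
```

In FULL recovery you would also want log backups running between batches (or a CHECKPOINT in SIMPLE recovery) so the freed log space can actually be reused rather than just growing the file.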
You can easily experiment and see how long 10k rows take to delete. If that is still slow, the FKs are the issue…you may need to script them out,
drop them, run the deletes, then rebuild them.
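One way to find (and generate DROP statements for) every FK referencing the table, using the `sys.foreign_keys` catalog view; `dbo.BigTable` is again a placeholder, and you would script out the matching CREATE statements (e.g. via SSMS) *before* running any of the generated drops:

```sql
-- Generate one DROP CONSTRAINT statement per FK that references the target table.
-- Review and save the CREATE scripts for these constraints before dropping anything.
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
     + '.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
     + ' DROP CONSTRAINT ' + QUOTENAME(fk.name) + ';'
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.BigTable');
```

Run the output, do the deletes, then run the saved CREATE scripts to rebuild the constraints (ideally WITH CHECK so they are trusted again).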
Mike S.