This question is for the hardcore SQL Server gurus out there.
In summary, the question pertains to setting up a custom trace using
extended stored procedures in production. The idea is to find the source of
a deadlock that occurs on an occasional basis. My question deals mainly with
the following extended stored procedures:
xp_trace_setqueryhistory
xp_trace_flushqueryhistory
xp_trace_addnewqueue
Here are the details: Every other day or so, an insert into one particular
table fails in the production environment. Unfortunately, the error handling
code doesn't trap the error number, so I am not sure which error is
occurring and causing the insert to fail. I have been unable to reproduce
the problem in the development environment.
I am guessing that it is due to a deadlock and would like to find the
source of the problem through SQL Profiler. Since I am not at the site, I
can't just fire up SQL Profiler and trace the source of the problem myself.
After some research, I found a way to set up a custom trace that logs the
profiler output to a log file (or, alternatively, to a table); please see
xp_trace_addnewqueue in BOL.
My concern is that the log file may grow too large and consume all the disk
space if the trace runs for too long. I also saw another extended stored
procedure that flushes the last 100 events from the trace to a log file,
but only for a particular trace (please see xp_trace_setqueryhistory and
xp_trace_flushqueryhistory).
My question, therefore, is: can I make the trace dump only the last, say,
5,000 events to the file when the deadlock occurs? I could set up an alert
for the deadlock error number. This alert could then execute a command
similar to xp_trace_flushqueryhistory, showing me only the last 5,000
events at the time the error occurs.
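What I have in mind is something like the following sketch (the alert and job names are placeholders; the job itself would run whatever flush command turns out to apply). I believe the deadlock-victim error number is 1205, and I assume the message must actually be written to the error log for the alert to fire:

```sql
-- Alert that fires on error 1205 (deadlock victim) and runs a job
-- whose single step would flush the trace buffer to a file
EXEC msdb..sp_add_alert
    @name       = N'Deadlock - flush trace',   -- placeholder name
    @message_id = 1205,
    @severity   = 0,
    @job_name   = N'FlushDeadlockTrace'        -- hypothetical job calling
                                               -- xp_trace_flushqueryhistory
```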
Any help would be appreciated.
Thank you.