Thread: Deadlock with pg_dump?

Deadlock with pg_dump?

From
Chris Campbell
Date:
We're getting deadlock error messages in the production database logs
during times of inactivity, where the only other thing using the
database (we think) is the every-15-minutes pg_dump process. There
are still database connections up-and-running from unused Hibernate
Java processes, but they Shouldn't Be doing anything (and shouldn't
be holding locks, etc).

The deadlock error message looks like this:

ERROR: deadlock detected
DETAIL: Process 1120 waits for ShareLock on transaction 5847116;
blocked by process 1171.
Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation
37637 of database 37574; blocked by process 1120.

Relation 37636 is the users table (schema attached).

Process 1120 was running an UPDATE query and changing a single row in
the users table. The users table does have foreign keys to 4 other
tables. Is it possible that those foreign key constraints acquire
locks in a different order than pg_dump (as it's SELECTing from the
tables), and it's hitting at *just* the right time to cause a deadlock?

I've tried to reproduce it on a test machine by running pgbench
(after adding foreign keys to the pgbench tables) and pg_dump in
tight loops in two concurrent shell scripts, but no deadlock.

Any ideas on how to track this down?

Under what conditions does a process acquire a ShareLock on another
transaction?

Thanks!

- Chris



Re: Deadlock with pg_dump?

From
Tom Lane
Date:
Chris Campbell <chris@bignerdranch.com> writes:
> ERROR: deadlock detected
> DETAIL: Process 1120 waits for ShareLock on transaction 5847116;  
> blocked by process 1171.
> Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation  
> 37637 of database 37574; blocked by process 1120.

> Relation 37636 is the users table (schema attached).

> Process 1120 was running an UPDATE query and changing a single row in  
> the users table.

And what was 1171 doing?  I really doubt that either of these could have
been pg_dump.

Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either.  I'm betting these
are just flat out conflicting updates of the same row(s).
        regards, tom lane
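[Editor's note: to illustrate Chris's ShareLock question above: a backend takes a ShareLock on another transaction's ID exactly when it must wait for that transaction to finish, and conflicting updates of the same row are the classic trigger. A minimal sketch, with a hypothetical table and values:]

```sql
-- Session 1
BEGIN;
UPDATE users SET email = 'a@example.com' WHERE id = 1;
-- the row's xmax now records session 1's transaction ID

-- Session 2
BEGIN;
UPDATE users SET email = 'b@example.com' WHERE id = 1;
-- blocks: session 2 first takes ExclusiveLock on the tuple, then waits
-- for ShareLock on session 1's transaction ID until COMMIT/ROLLBACK.
-- These are the same two lock types named in the deadlock DETAIL.
```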


Re: Deadlock with pg_dump?

From
Chris Campbell
Date:
On Oct 26, 2006, at 17:21, Tom Lane wrote:

> And what was 1171 doing?  I really doubt that either of these could  
> have
> been pg_dump.

I know that process 1120 is a Java client (Hibernate) running an  
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
throughout the day. That's not definitive evidence, of course, but  
it's certainly curious.

> Given that you appear to be running 8.1 (tut-tut for not saying), it
> really shouldn't be a foreign key problem either.  I'm betting these
> are just flat out conflicting updates of the same row(s).

Yeah, 8.1.3. Sorry about the omission.

Is there additional logging information I can turn on to get more  
details? I guess I need to see exactly what locks both processes  
hold, and what queries they were running when the deadlock occurred?  
Is that easily done, without turning on logging for *all* statements?

Thanks!

- Chris
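[Editor's note: short of full statement logging, the lock state can be inspected live while a backend sits blocked. A sketch against the system views, using 8.1-era column names (procpid and current_query are assumptions about that release's pg_stat_activity):]

```sql
-- Who is waiting, and on what:
SELECT l.pid, l.locktype, l.relation, l.mode, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;

-- Everything a suspect process (e.g. 1171) currently holds:
SELECT locktype, relation, transactionid, mode, granted
FROM pg_locks
WHERE pid = 1171;
```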



Re: Deadlock with pg_dump?

From
"Jim C. Nasby"
Date:
On Thu, Oct 26, 2006 at 06:11:59PM -0400, Chris Campbell wrote:
> On Oct 26, 2006, at 17:21, Tom Lane wrote:
> 
> >And what was 1171 doing?  I really doubt that either of these could  
> >have
> >been pg_dump.
> 
> I know that process 1120 is a Java client (Hibernate) running an  
> UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
> pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
> hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
> throughout the day. That's not definitive evidence, of course, but  
> it's certainly curious.

I seem to remember something funny about hibernate and locking, though I
can't recall any details right now... but searching the archives might
provide insight.
-- 
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)


Re: Deadlock with pg_dump?

From
Tom Lane
Date:
Chris Campbell <chris@bignerdranch.com> writes:
> Is there additional logging information I can turn on to get more  
> details? I guess I need to see exactly what locks both processes  
> hold, and what queries they were running when the deadlock occurred?  
> Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.
Still, if you're seeing multiple occurrences per day, that would
probably let you build up a good picture of all the involved operations
over a couple of days.
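[Editor's note: Tom's suggestion as a postgresql.conf fragment; per this thread, the shipped default at the time was panic:]

```
# postgresql.conf
# Log the text of any statement that raises ERROR or worse,
# e.g. the UPDATE that loses a deadlock.
log_min_error_statement = error
```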

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

Unless your applications are issuing actual LOCK TABLE commands, it's
really hard to see how pg_dump could be involved.  It doesn't take
anything stronger than AccessShareLock, and shouldn't be holding any
tuple-level locks at all.
        regards, tom lane


Re: Deadlock with pg_dump?

From
Alvaro Herrera
Date:
Tom Lane wrote:

> [ Memo to hackers: why is it that log_min_error_statement = error
> isn't the default? ]

I think it defaults to panic because that's the way to disable the feature,
which was the easiest sell when the feature was introduced.  I'm all for
lowering it to error.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: Deadlock with pg_dump?

From
Chris Campbell
Date:
On Oct 26, 2006, at 18:45, Tom Lane wrote:

> log_min_error_statement = error would at least get you the statements
> reporting the deadlocks, though not what they're conflicting against.

Would it be possible (in 8.3, say) to log the conflicting backend's  
current statement (from pg_stat_activity, perhaps)? I guess the  
conflicting backend would currently be waiting for a lock, so its  
current query (before releasing the lock) is the one we want.

Thanks!

- Chris



Re: Deadlock with pg_dump?

From
"Albe Laurenz"
Date:
> [ Memo to hackers: why is it that log_min_error_statement = error
> isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

Yours,
Laurenz Albe


Re: Deadlock with pg_dump?

From
Csaba Nagy
Date:
On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:
> > [ Memo to hackers: why is it that log_min_error_statement = error
> > isn't the default? ]
> 
> To avoid spamming the logs with every failed SQL statement?

And it would hurt applications where query failure is treated as a
valid path (such as inserting first and updating if the insert fails)...

Cheers,
Csaba.
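[Editor's note: the valid-failure pattern Csaba describes is the old error-driven upsert, in which a unique violation is routine rather than exceptional; with log_min_error_statement = error, every such collision would be logged. A sketch with a hypothetical table:]

```sql
INSERT INTO counters (key, n) VALUES ('hits', 1);
-- on a unique_violation error, the application falls back to:
UPDATE counters SET n = n + 1 WHERE key = 'hits';
```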



Re: Deadlock with pg_dump?

From
"Andrew Dunstan"
Date:
Csaba Nagy wrote:
> On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:
>> > [ Memo to hackers: why is it that log_min_error_statement = error
>> > isn't the default? ]
>>
>> To avoid spamming the logs with every failed SQL statement?
>
> And it would be hurting applications where query failure is taken as a
> valid path (as inserting first and update if failing)...
>

Both of these are arguments in favor of being able to alter the level,
which would still exist. They are not good arguments for not having error
as the default level.

cheers

andrew



Re: Deadlock with pg_dump?

From
Tom Lane
Date:
"Albe Laurenz" <all@adv.magwien.gv.at> writes:
>> [ Memo to hackers: why is it that log_min_error_statement = error
>> isn't the default? ]

> To avoid spamming the logs with every failed SQL statement?

Certainly there are people who will turn it off, but that's why it's
configurable.  I've had to answer "how do I find out what's causing
error message FOO" often enough that I'm starting to think logging error
statements is a more useful default than not logging 'em ...
        regards, tom lane


Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> Chris Campbell <chris@bignerdranch.com> writes:
> > Is there additional logging information I can turn on to get more  
> > details? I guess I need to see exactly what locks both processes  
> > hold, and what queries they were running when the deadlock occurred?  
> > Is that easily done, without turning on logging for *all* statements?
> 
> log_min_error_statement = error would at least get you the statements
> reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.
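[Editor's note: for reference, what 8.3 ultimately shipped is close in spirit to this proposal, though as a boolean tied to deadlock_timeout rather than a separate duration GUC:]

```
# postgresql.conf (8.3 and later)
deadlock_timeout = 1s   # doubles as the lock-wait logging threshold
log_lock_waits = on     # log when a lock wait exceeds deadlock_timeout
```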

> [ Memo to hackers: why is it that log_min_error_statement = error
> isn't the default? ]

For production systems where we expect fewer ERRORs, each one is
important, so this would be a good default.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com




Re: Deadlock with pg_dump?

From
Tom Lane
Date:
Chris Campbell <chris@bignerdranch.com> writes:
> On Oct 26, 2006, at 17:21, Tom Lane wrote:
>> And what was 1171 doing?  I really doubt that either of these could  
>> have been pg_dump.

> I know that process 1120 is a Java client (Hibernate) running an  
> UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
> pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
> hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
> throughout the day. That's not definitive evidence, of course, but  
> it's certainly curious.

FWIW, we recently found a deadlock risk that's likely to be triggered in
situations involving multiple updates and foreign keys.  I still don't
see any connection to pg_dump, but otherwise it seems like it might be
related to your problem.  Try this patch:
http://archives.postgresql.org/pgsql-committers/2006-11/msg00121.php
        regards, tom lane


Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
Simon Riggs wrote:
> On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > Chris Campbell <chris@bignerdranch.com> writes:
> > > Is there additional logging information I can turn on to get more  
> > > details? I guess I need to see exactly what locks both processes  
> > > hold, and what queries they were running when the deadlock occurred?  
> > > Is that easily done, without turning on logging for *all* statements?
> > 
> > log_min_error_statement = error would at least get you the statements
> > reporting the deadlocks, though not what they're conflicting against.
> 
> Yeh, we need a much better locking logger for performance analysis.
> 
> We really need to dump the whole wait-for graph for deadlocks, since
> this might be more complex than just two statements involved. Deadlocks
> ought to be so infrequent that we can afford the log space to do this -
> plus if we did this it would likely lead to fewer deadlocks.
> 
> For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> that would allow you to dump the wait-for graph for any data-level locks
> that wait too long, rather than just those that deadlock. Many
> applications experience heavy locking because of lack of holistic
> design. That will also show up the need for other utilities to act
> CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?
    test=> lock a;
    ERROR:  deadlock detected
    DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
    database 16384; blocked by process 6795.
    Process 6795 waits for AccessExclusiveLock on relation 16396 of database
    16384; blocked by process 6855.

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
Tom Lane wrote:
> "Albe Laurenz" <all@adv.magwien.gv.at> writes:
> >> [ Memo to hackers: why is it that log_min_error_statement = error
> >> isn't the default? ]
> 
> > To avoid spamming the logs with every failed SQL statement?
> 
> Certainly there are people who will turn it off, but that's why it's
> configurable.  I've had to answer "how do I find out what's causing
> error message FOO" often enough that I'm starting to think logging error
> statements is a more useful default than not logging 'em ...

Are we ready to set 'log_min_error_statement = error' by default for
8.3?

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: Deadlock with pg_dump?

From
Tom Lane
Date:
Bruce Momjian <bruce@momjian.us> writes:
> Are we ready to set 'log_min_error_statement = error' by default for
> 8.3?

We already did that in 8.2.
        regards, tom lane


Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
> > Are we ready to set 'log_min_error_statement = error' by default for
> > 8.3?
> 
> We already did that in 8.2.

Oh, interesting.  Oops again.

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
> Simon Riggs wrote:
> > On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > > Chris Campbell <chris@bignerdranch.com> writes:
> > > > Is there additional logging information I can turn on to get more  
> > > > details? I guess I need to see exactly what locks both processes  
> > > > hold, and what queries they were running when the deadlock occurred?  
> > > > Is that easily done, without turning on logging for *all* statements?
> > > 
> > > log_min_error_statement = error would at least get you the statements
> > > reporting the deadlocks, though not what they're conflicting against.
> > 
> > Yeh, we need a much better locking logger for performance analysis.
> > 
> > We really need to dump the whole wait-for graph for deadlocks, since
> > this might be more complex than just two statements involved. Deadlocks
> > ought to be so infrequent that we can afford the log space to do this -
> > plus if we did this it would likely lead to fewer deadlocks.
> > 
> > For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> > that would allow you to dump the wait-for graph for any data-level locks
> > that wait too long, rather than just those that deadlock. Many
> > applications experience heavy locking because of lack of holistic
> > design. That will also show up the need for other utilities to act
> > CONCURRENTLY, if possible.
> 
> Old email, but I don't see how our current output is not good enough?
> 
>     test=> lock a;
>     ERROR:  deadlock detected
>     DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
>     database 16384; blocked by process 6795.
>     Process 6795 waits for AccessExclusiveLock on relation 16396 of database
>     16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits. 

When I wrote that, it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com




Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:
> On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
> > Simon Riggs wrote:
> > > On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > > > Chris Campbell <chris@bignerdranch.com> writes:
> > > > > Is there additional logging information I can turn on to get more
> > > > > details? I guess I need to see exactly what locks both processes
> > > > > hold, and what queries they were running when the deadlock occurred?
> > > > > Is that easily done, without turning on logging for *all* statements?
> > > >
> > > > log_min_error_statement = error would at least get you the statements
> > > > reporting the deadlocks, though not what they're conflicting against.
> > >
> > > Yeh, we need a much better locking logger for performance analysis.
> > >
> > > We really need to dump the whole wait-for graph for deadlocks, since
> > > this might be more complex than just two statements involved. Deadlocks
> > > ought to be so infrequent that we can afford the log space to do this -
> > > plus if we did this it would likely lead to fewer deadlocks.
> > >
> > > For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> > > that would allow you to dump the wait-for graph for any data-level locks
> > > that wait too long, rather than just those that deadlock. Many
> > > applications experience heavy locking because of lack of holistic
> > > design. That will also show up the need for other utilities to act
> > > CONCURRENTLY, if possible.
> >
> > Old email, but I don't see how our current output is not good enough?
> >
> >     test=> lock a;
> >     ERROR:  deadlock detected
> >     DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
> >     database 16384; blocked by process 6795.
> >     Process 6795 waits for AccessExclusiveLock on relation 16396 of database
> >     16384; blocked by process 6855.
>
> This detects deadlocks, but it doesn't detect lock waits.
>
> When I wrote that it was previous experience driving me. Recent client
> experience has highlighted the clear need for this. We had a lock wait
> of 50 hours because of an RI check; thats the kind of thing I'd like to
> show up in the logs somewhere.
>
> Lock wait detection can be used to show up synchronisation points that
> have been inadvertently designed into an application, so its a useful
> tool in investigating performance issues.
>
> I have a patch implementing the logging as agreed with Tom, will post to
> patches later tonight.

Patch for discussion; it includes doc entries at the top of the patch,
so it's fairly clear how it works.

Output is an INFO message, so that it triggers log_min_error_statement
when the message is generated, letting us see the SQL statement that is
waiting. This emits the message before the statement completes, which
is important because in some cases it may never complete, so simply
logging a list of pids won't always tell you what SQL was waiting.

Other approaches are possible...

Comments?

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com


Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
I am a little concerned about a log_* setting that is INFO. I understand
why you used INFO (to trigger log_min_error_statement), but INFO is
inconsistent with the log_* prefix, and by default INFO doesn't appear in
the log file.

So, by default, the INFO is going to go to the user terminal, and not to
the logfile.

Ideas?

---------------------------------------------------------------------------

Simon Riggs wrote:
> On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:
> > On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
> > > Simon Riggs wrote:
> > > > On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > > > > Chris Campbell <chris@bignerdranch.com> writes:
> > > > > > Is there additional logging information I can turn on to get more
> > > > > > details? I guess I need to see exactly what locks both processes
> > > > > > hold, and what queries they were running when the deadlock occurred?
> > > > > > Is that easily done, without turning on logging for *all* statements?
> > > > >
> > > > > log_min_error_statement = error would at least get you the statements
> > > > > reporting the deadlocks, though not what they're conflicting against.
> > > >
> > > > Yeh, we need a much better locking logger for performance analysis.
> > > >
> > > > We really need to dump the whole wait-for graph for deadlocks, since
> > > > this might be more complex than just two statements involved. Deadlocks
> > > > ought to be so infrequent that we can afford the log space to do this -
> > > > plus if we did this it would likely lead to fewer deadlocks.
> > > >
> > > > For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> > > > that would allow you to dump the wait-for graph for any data-level locks
> > > > that wait too long, rather than just those that deadlock. Many
> > > > applications experience heavy locking because of lack of holistic
> > > > design. That will also show up the need for other utilities to act
> > > > CONCURRENTLY, if possible.
> > >
> > > Old email, but I don't see how our current output is not good enough?
> > >
> > >     test=> lock a;
> > >     ERROR:  deadlock detected
> > >     DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
> > >     database 16384; blocked by process 6795.
> > >     Process 6795 waits for AccessExclusiveLock on relation 16396 of database
> > >     16384; blocked by process 6855.
> >
> > This detects deadlocks, but it doesn't detect lock waits.
> >
> > When I wrote that it was previous experience driving me. Recent client
> > experience has highlighted the clear need for this. We had a lock wait
> > of 50 hours because of an RI check; thats the kind of thing I'd like to
> > show up in the logs somewhere.
> >
> > Lock wait detection can be used to show up synchronisation points that
> > have been inadvertently designed into an application, so its a useful
> > tool in investigating performance issues.
> >
> > I have a patch implementing the logging as agreed with Tom, will post to
> > patches later tonight.
>
> Patch for discussion, includes doc entries at top of patch, so its
> fairly clear how it works.
>
> Output is an INFO message, to allow this to trigger
> log_min_error_statement when it generates a message, to allow us to see
> the SQL statement that is waiting. This allows it to generate a message
> prior to the statement completing, which is important because it may not
> ever complete, in some cases, so simply logging a list of pids won't
> always tell you what the SQL was that was waiting.
>
> Other approaches are possible...
>
> Comments?
>
> --
>   Simon Riggs
>   EnterpriseDB   http://www.enterprisedb.com
>

[ Attachment, skipping... ]

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Mon, 2007-02-26 at 13:34 -0500, Bruce Momjian wrote:

> I am a little concerned about a log_* setting that is INFO. I understand
> why you used INFO (for log_min_error_messages), but INFO is inconsistent
> with the log* prefix, and by default INFO doesn't appear in the log
> file.

Yeh, LOG would be most appropriate, but that's not possible.

log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
INFO, NOTICE and WARNING for non-error states.

Possibly DEBUG1?

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: Deadlock with pg_dump?

From
Tom Lane
Date:
"Simon Riggs" <simon@2ndquadrant.com> writes:
> Yeh, LOG would be most appropriate, but thats not possible.

You have not given any good reason for that.

> log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
> INFO, NOTICE and WARNING for non-error states.

I don't think you understand quite how the log message priority works...

            regards, tom lane

Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Mon, 2007-02-26 at 14:11 -0500, Tom Lane wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > Yeh, LOG would be most appropriate, but thats not possible.
>
> You have not given any good reason for that.

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

The reason for behaving like this is so that a message is generated
while the statement is still waiting, rather than at the end. As I
mentioned in the submission, you may not like that behaviour; I'm in two
minds myself, but I'm trying to get to the stage of having useful
information come out of the server when we have long lock waits.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: Deadlock with pg_dump?

From
Tom Lane
Date:
"Simon Riggs" <simon@2ndquadrant.com> writes:
> The idea of the patch is that it generates a log message which then
> invokes log_min_error_statement so that the SQL statement is displayed.
> LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

            regards, tom lane

Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > The idea of the patch is that it generates a log message which then
> > invokes log_min_error_statement so that the SQL statement is displayed.
> > LOG is not on the list of options there, otherwise I would use it.
>
> As I said, you don't understand how the logging priority control works.
> LOG *is* the appropriate level for stuff intended to go to the server log.

Please look at the definition of log_min_error_statement, so you
understand where I'm coming from.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: Deadlock with pg_dump?

From
Tom Lane
Date:
"Simon Riggs" <simon@2ndquadrant.com> writes:
> On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
>> "Simon Riggs" <simon@2ndquadrant.com> writes:
>>> The idea of the patch is that it generates a log message which then
>>> invokes log_min_error_statement so that the SQL statement is displayed.
>>> LOG is not on the list of options there, otherwise I would use it.
>>
>> As I said, you don't understand how the logging priority control works.
>> LOG *is* the appropriate level for stuff intended to go to the server log.

> Please look at the definition of log_min_error_statement, so you
> understand where I'm coming from.

I *have* read the definition of log_min_error_statement.  (The SGML docs
are wrong btw, as a quick look at the code shows that LOG is an accepted
value.)

The real issue here is that send_message_to_server_log just does

    if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

to determine whether to log the statement, whereas arguably it should be
using a test like is_log_level_output --- that is, the priority ordering
for log_min_error_statement should be like log_min_messages not like
client_min_messages.  We've discussed that before in another thread, but
it looks like nothing's been done yet.  In any case, if you're unhappy
with the code's choice of whether to emit the STATEMENT part of a log
message, some changes here are what's indicated, not bizarre choices of
elevel for individual messages.

            regards, tom lane

Re: Deadlock with pg_dump?

From
"Simon Riggs"
Date:
On Mon, 2007-02-26 at 14:52 -0500, Tom Lane wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> > On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
> >> "Simon Riggs" <simon@2ndquadrant.com> writes:
> >>> The idea of the patch is that it generates a log message which then
> >>> invokes log_min_error_statement so that the SQL statement is displayed.
> >>> LOG is not on the list of options there, otherwise I would use it.
> >>
> >> As I said, you don't understand how the logging priority control works.
> >> LOG *is* the appropriate level for stuff intended to go to the server log.
>
> > Please look at the definition of log_min_error_statement, so you
> > understand where I'm coming from.
>
> I *have* read the definition of log_min_error_statement.  (The SGML docs
> are wrong btw, as a quick look at the code shows that LOG is an accepted
> value.)

OK, I should have looked past the manual.

> The real issue here is that send_message_to_server_log just does
>
>     if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)
>
> to determine whether to log the statement, whereas arguably it should be
> using a test like is_log_level_output --- that is, the priority ordering
> for log_min_error_statement should be like log_min_messages not like
> client_min_messages.  We've discussed that before in another thread, but
> it looks like nothing's been done yet.

Hopefully not with me? Don't remember that.

> In any case, if you're unhappy
> with the code's choice of whether to emit the STATEMENT part of a log
> message, some changes here are what's indicated, not bizarre choices of
> elevel for individual messages.

Well, I would have chosen LOG if I thought it was available.

I'll do some more to the patch.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] Deadlock with pg_dump?

From
Bruce Momjian
Date:
Simon Riggs wrote:
> On Mon, 2007-02-26 at 13:34 -0500, Bruce Momjian wrote:
>
> > I am a little concerned about a log_* setting that is INFO. I understand
> > why you used INFO (for log_min_error_messages), but INFO is inconsistent
> > with the log* prefix, and by default INFO doesn't appear in the log
> > file.
>
> Yeh, LOG would be most appropriate, but thats not possible.
>
> log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
> INFO, NOTICE and WARNING for non-error states.
>
> Possibly DEBUG1?

This highlights a problem we have often had with LOG output where we
also want the query.

I think there are two possible approaches.  First, we could add a new
bitmap value like LOG_STATEMENT to ereport when we want the statement
with the log line:

    ereport (LOG | LOG_STATEMENT, ...)

(or a new LOG_WITH_STATEMENT log level) and a new GUC like
"log_include_statement" that would control the output of statements for
certain GUC parameters, and we document which GUC values it controls.

A simpler idea would be to unconditionally include the query in the
errdetail() of the actual LOG ereport.

This is not the first GUC that has needed this.  We had this issue with
"log_temp_files", which we just added, and the only suggested solution
was to use log_statement = 'all'.  Either of these ideas above would be
useful for this as well.

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Re: [PATCHES] Deadlock with pg_dump?

From
Tom Lane
Date:
Bruce Momjian <bruce@momjian.us> writes:
> This is not the first GUC that has needed this.

Exactly.  I think that we simply made a mistake in the initial
implementation of log_min_error_statement: we failed to think about
whether it should use client or server priority ordering, and the
easy-to-code behavior was the wrong one.

            regards, tom lane

Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
I will rework this to use the LOG level before applying it.

Your patch has been added to the PostgreSQL unapplied patches list at:

    http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------


Simon Riggs wrote:
> On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:
> > On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
> > > Simon Riggs wrote:
> > > > On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > > > > Chris Campbell <chris@bignerdranch.com> writes:
> > > > > > Is there additional logging information I can turn on to get more
> > > > > > details? I guess I need to see exactly what locks both processes
> > > > > > hold, and what queries they were running when the deadlock occurred?
> > > > > > Is that easily done, without turning on logging for *all* statements?
> > > > >
> > > > > log_min_error_statement = error would at least get you the statements
> > > > > reporting the deadlocks, though not what they're conflicting against.
> > > >
> > > > Yeh, we need a much better locking logger for performance analysis.
> > > >
> > > > We really need to dump the whole wait-for graph for deadlocks, since
> > > > this might be more complex than just two statements involved. Deadlocks
> > > > ought to be so infrequent that we can afford the log space to do this -
> > > > plus if we did this it would likely lead to fewer deadlocks.
> > > >
> > > > For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> > > > that would allow you to dump the wait-for graph for any data-level locks
> > > > that wait too long, rather than just those that deadlock. Many
> > > > applications experience heavy locking because of lack of holistic
> > > > design. That will also show up the need for other utilities to act
> > > > CONCURRENTLY, if possible.
> > >
> > > Old email, but I don't see how our current output is not good enough?
> > >
> > >     test=> lock a;
> > >     ERROR:  deadlock detected
> > >     DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
> > >     database 16384; blocked by process 6795.
> > >     Process 6795 waits for AccessExclusiveLock on relation 16396 of database
> > >     16384; blocked by process 6855.
> >
> > This detects deadlocks, but it doesn't detect lock waits.
> >
> > When I wrote that, it was previous experience driving me. Recent client
> > experience has highlighted the clear need for this. We had a lock wait
> > of 50 hours because of an RI check; that's the kind of thing I'd like to
> > show up in the logs somewhere.
> >
> > Lock wait detection can be used to show up synchronisation points that
> > have been inadvertently designed into an application, so it's a useful
> > tool in investigating performance issues.
> >
> > I have a patch implementing the logging as agreed with Tom, will post to
> > patches later tonight.
>
> Patch for discussion; it includes doc entries at the top of the patch, so
> it's fairly clear how it works.
>
> Output is an INFO message, so that it triggers log_min_error_statement
> when it fires, which lets us see the SQL statement that is waiting. The
> message is generated before the statement completes, which is important
> because in some cases the statement may never complete, so simply
> logging a list of pids won't always tell you what the waiting SQL was.
>
> Other approaches are possible...
>
> Comments?
>
> --
>   Simon Riggs
>   EnterpriseDB   http://www.enterprisedb.com
>

[ Attachment, skipping... ]

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Re: Deadlock with pg_dump?

From
Bruce Momjian
Date:
I have applied a modified version of this patch, attached.   I trimmed
down the description of log_lock_waits to be more concise, and moved the
idea of using this to tune deadlock_timeout to the deadlock_timeout
section of the manual.

---------------------------------------------------------------------------

Simon Riggs wrote:
> On Mon, 2007-02-19 at 19:38 +0000, Simon Riggs wrote:
> > On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
> > > Simon Riggs wrote:
> > > > On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
> > > > > Chris Campbell <chris@bignerdranch.com> writes:
> > > > > > Is there additional logging information I can turn on to get more
> > > > > > details? I guess I need to see exactly what locks both processes
> > > > > > hold, and what queries they were running when the deadlock occurred?
> > > > > > Is that easily done, without turning on logging for *all* statements?
> > > > >
> > > > > log_min_error_statement = error would at least get you the statements
> > > > > reporting the deadlocks, though not what they're conflicting against.
> > > >
> > > > Yeh, we need a much better locking logger for performance analysis.
> > > >
> > > > We really need to dump the whole wait-for graph for deadlocks, since
> > > > this might be more complex than just two statements involved. Deadlocks
> > > > ought to be so infrequent that we can afford the log space to do this -
> > > > plus if we did this it would likely lead to fewer deadlocks.
> > > >
> > > > For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
> > > > that would allow you to dump the wait-for graph for any data-level locks
> > > > that wait too long, rather than just those that deadlock. Many
> > > > applications experience heavy locking because of lack of holistic
> > > > design. That will also show up the need for other utilities to act
> > > > CONCURRENTLY, if possible.
> > >
> > > Old email, but I don't see how our current output is not good enough?
> > >
> > >     test=> lock a;
> > >     ERROR:  deadlock detected
> > >     DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
> > >     database 16384; blocked by process 6795.
> > >     Process 6795 waits for AccessExclusiveLock on relation 16396 of database
> > >     16384; blocked by process 6855.
> >
> > This detects deadlocks, but it doesn't detect lock waits.
> >
> > When I wrote that, it was previous experience driving me. Recent client
> > experience has highlighted the clear need for this. We had a lock wait
> > of 50 hours because of an RI check; that's the kind of thing I'd like to
> > show up in the logs somewhere.
> >
> > Lock wait detection can be used to show up synchronisation points that
> > have been inadvertently designed into an application, so it's a useful
> > tool in investigating performance issues.
> >
> > I have a patch implementing the logging as agreed with Tom, will post to
> > patches later tonight.
>
> Patch for discussion; it includes doc entries at the top of the patch, so
> it's fairly clear how it works.
>
> Output is an INFO message, so that it triggers log_min_error_statement
> when it fires, which lets us see the SQL statement that is waiting. The
> message is generated before the statement completes, which is important
> because in some cases the statement may never complete, so simply
> logging a list of pids won't always tell you what the waiting SQL was.
>
> Other approaches are possible...
>
> Comments?
>
> --
>   Simon Riggs
>   EnterpriseDB   http://www.enterprisedb.com
>

[ Attachment, skipping... ]

--
  Bruce Momjian  <bruce@momjian.us>          http://momjian.us
  EnterpriseDB                               http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/config.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.113
diff -c -c -r1.113 config.sgml
*** doc/src/sgml/config.sgml    2 Mar 2007 23:37:22 -0000    1.113
--- doc/src/sgml/config.sgml    3 Mar 2007 18:41:13 -0000
***************
*** 2946,2951 ****
--- 2946,2966 ----
        </listitem>
       </varlistentry>

+      <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits">
+       <term><varname>log_lock_waits</varname> (<type>boolean</type>)</term>
+       <indexterm>
+        <primary><varname>log_lock_waits</> configuration parameter</primary>
+       </indexterm>
+       <listitem>
+        <para>
+         Controls whether a log message is produced when a statement waits
+         longer than <xref linkend="guc-deadlock-timeout"> to acquire a
+         lock.  This is useful in determining if lock waits are causing
+         poor performance.  The default is <literal>off</>.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="guc-log-temp-files" xreflabel="log_temp_files">
        <term><varname>log_temp_files</varname> (<type>integer</type>)</term>
        <indexterm>
***************
*** 3980,3996 ****
          This is the amount of time, in milliseconds, to wait on a lock
          before checking to see if there is a deadlock condition. The
          check for deadlock is relatively slow, so the server doesn't run
!         it every time it waits for a lock. We (optimistically?) assume
          that deadlocks are not common in production applications and
          just wait on the lock for a while before starting the check for a
          deadlock. Increasing this value reduces the amount of time
          wasted in needless deadlock checks, but slows down reporting of
          real deadlock errors. The default is one second (<literal>1s</>),
          which is probably about the smallest value you would want in
!         practice. On a heavily loaded server you might want to raise it.
          Ideally the setting should exceed your typical transaction time,
!         so as to improve the odds that a lock will be released before
!         the waiter decides to check for deadlock.
         </para>
        </listitem>
       </varlistentry>
--- 3995,4012 ----
          This is the amount of time, in milliseconds, to wait on a lock
          before checking to see if there is a deadlock condition. The
          check for deadlock is relatively slow, so the server doesn't run
!         it every time it waits for a lock. We optimistically assume
          that deadlocks are not common in production applications and
          just wait on the lock for a while before starting the check for a
          deadlock. Increasing this value reduces the amount of time
          wasted in needless deadlock checks, but slows down reporting of
          real deadlock errors. The default is one second (<literal>1s</>),
          which is probably about the smallest value you would want in
!         practice.  Set <xref linkend="guc-log-lock-waits"> to log deadlock
!         checks.  On a heavily loaded server you might want to raise it.
          Ideally the setting should exceed your typical transaction time,
!         so as to improve the odds that a lock will be released before the
!         waiter decides to check for deadlock.
         </para>
        </listitem>
       </varlistentry>
Index: src/backend/storage/lmgr/deadlock.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v
retrieving revision 1.44
diff -c -c -r1.44 deadlock.c
*** src/backend/storage/lmgr/deadlock.c    5 Jan 2007 22:19:38 -0000    1.44
--- src/backend/storage/lmgr/deadlock.c    3 Mar 2007 18:41:13 -0000
***************
*** 187,199 ****
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! bool
  DeadLockCheck(PGPROC *proc)
  {
      int            i,
--- 187,200 ----
   * deadlock.  If resolution is impossible, return TRUE --- the caller
   * is then expected to abort the given proc's transaction.
   *
!  * Caller must already have locked all partitions of the lock tables,
!  * so standard error logging/reporting code is handled by caller.
   *
   * On failure, deadlock details are recorded in deadlockDetails[] for
   * subsequent printing by DeadLockReport().  That activity is separate
   * because we don't want to do it while holding all those LWLocks.
   */
! DeadlockState
  DeadLockCheck(PGPROC *proc)
  {
      int            i,
***************
*** 204,209 ****
--- 205,215 ----
      nPossibleConstraints = 0;
      nWaitOrders = 0;

+ #ifdef LOCK_DEBUG
+     if (Debug_deadlocks)
+         DumpAllLocks();
+ #endif
+
      /* Search for deadlocks and possible fixes */
      if (DeadLockCheckRecurse(proc))
      {
***************
*** 217,223 ****
          if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
              elog(FATAL, "deadlock seems to have disappeared");

!         return true;            /* cannot find a non-deadlocked state */
      }

      /* Apply any needed rearrangements of wait queues */
--- 223,229 ----
          if (!FindLockCycle(proc, possibleConstraints, &nSoftEdges))
              elog(FATAL, "deadlock seems to have disappeared");

!         return DS_HARD_DEADLOCK;    /* cannot find a non-deadlocked state */
      }

      /* Apply any needed rearrangements of wait queues */
***************
*** 249,255 ****
          /* See if any waiters for the lock can be woken up now */
          ProcLockWakeup(GetLocksMethodTable(lock), lock);
      }
!     return false;
  }

  /*
--- 255,265 ----
          /* See if any waiters for the lock can be woken up now */
          ProcLockWakeup(GetLocksMethodTable(lock), lock);
      }
!
!     if (nWaitOrders > 0)
!         return DS_SOFT_DEADLOCK;
!     else
!         return DS_DEADLOCK_NOT_FOUND;
  }

  /*
***************
*** 896,902 ****
  }

  /*
!  * Report a detected deadlock, with available details.
   */
  void
  DeadLockReport(void)
--- 906,912 ----
  }

  /*
!  * Report a detected DS_HARD_DEADLOCK, with available details.
   */
  void
  DeadLockReport(void)
Index: src/backend/storage/lmgr/proc.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v
retrieving revision 1.184
diff -c -c -r1.184 proc.c
*** src/backend/storage/lmgr/proc.c    15 Feb 2007 23:23:23 -0000    1.184
--- src/backend/storage/lmgr/proc.c    3 Mar 2007 18:41:14 -0000
***************
*** 48,53 ****
--- 48,54 ----
  /* GUC variables */
  int            DeadlockTimeout = 1000;
  int            StatementTimeout = 0;
+ bool        log_lock_waits = false;

  /* Pointer to this process's PGPROC struct, if any */
  PGPROC       *MyProc = NULL;
***************
*** 979,984 ****
--- 980,986 ----
  CheckDeadLock(void)
  {
      int            i;
+     DeadlockState deadlock_state = DS_DEADLOCK_NOT_FOUND;

      /*
       * Acquire exclusive lock on the entire shared lock data structures. Must
***************
*** 1004,1063 ****
       * This is quicker than checking our semaphore's state, since no kernel
       * call is needed, and it is safe because we hold the lock partition lock.
       */
!     if (MyProc->links.prev == INVALID_OFFSET ||
!         MyProc->links.next == INVALID_OFFSET)
!         goto check_done;
!
! #ifdef LOCK_DEBUG
!     if (Debug_deadlocks)
!         DumpAllLocks();
! #endif
!
!     if (!DeadLockCheck(MyProc))
      {
!         /* No deadlock, so keep waiting */
!         goto check_done;
!     }
!
!     /*
!      * Oops.  We have a deadlock.
!      *
!      * Get this process out of wait state.    (Note: we could do this more
!      * efficiently by relying on lockAwaited, but use this coding to preserve
!      * the flexibility to kill some other transaction than the one detecting
!      * the deadlock.)
!      *
!      * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
!      * ProcSleep will report an error after we return from the signal handler.
!      */
!     Assert(MyProc->waitLock != NULL);
!     RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));

!     /*
!      * Unlock my semaphore so that the interrupted ProcSleep() call can
!      * finish.
!      */
!     PGSemaphoreUnlock(&MyProc->sem);

!     /*
!      * We're done here.  Transaction abort caused by the error that ProcSleep
!      * will raise will cause any other locks we hold to be released, thus
!      * allowing other processes to wake up; we don't need to do that here.
!      * NOTE: an exception is that releasing locks we hold doesn't consider the
!      * possibility of waiters that were blocked behind us on the lock we just
!      * failed to get, and might now be wakable because we're not in front of
!      * them anymore.  However, RemoveFromWaitQueue took care of waking up any
!      * such processes.
!      */

      /*
       * Release locks acquired at head of routine.  Order is not critical, so
       * do it back-to-front to avoid waking another CheckDeadLock instance
       * before it can get all the locks.
       */
- check_done:
      for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
          LWLockRelease(FirstLockMgrLock + i);
  }


--- 1006,1082 ----
       * This is quicker than checking our semaphore's state, since no kernel
       * call is needed, and it is safe because we hold the lock partition lock.
       */
!     if (MyProc->links.prev != INVALID_OFFSET &&
!         MyProc->links.next != INVALID_OFFSET)
!         deadlock_state = DeadLockCheck(MyProc);
!
!     if (deadlock_state == DS_HARD_DEADLOCK)
      {
!         /*
!          * Oops.  We have a deadlock.
!          *
!          * Get this process out of wait state.    (Note: we could do this more
!          * efficiently by relying on lockAwaited, but use this coding to preserve
!          * the flexibility to kill some other transaction than the one detecting
!          * the deadlock.)
!          *
!          * RemoveFromWaitQueue sets MyProc->waitStatus to STATUS_ERROR, so
!          * ProcSleep will report an error after we return from the signal handler.
!          */
!         Assert(MyProc->waitLock != NULL);
!         RemoveFromWaitQueue(MyProc, LockTagHashCode(&(MyProc->waitLock->tag)));

!         /*
!          * Unlock my semaphore so that the interrupted ProcSleep() call can
!          * finish.
!          */
!         PGSemaphoreUnlock(&MyProc->sem);

!         /*
!          * We're done here.  Transaction abort caused by the error that ProcSleep
!          * will raise will cause any other locks we hold to be released, thus
!          * allowing other processes to wake up; we don't need to do that here.
!          * NOTE: an exception is that releasing locks we hold doesn't consider the
!          * possibility of waiters that were blocked behind us on the lock we just
!          * failed to get, and might now be wakable because we're not in front of
!          * them anymore.  However, RemoveFromWaitQueue took care of waking up any
!          * such processes.
!          */
!     }

      /*
       * Release locks acquired at head of routine.  Order is not critical, so
       * do it back-to-front to avoid waking another CheckDeadLock instance
       * before it can get all the locks.
       */
      for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
          LWLockRelease(FirstLockMgrLock + i);
+
+     /*
+      * Issue any log messages requested.
+      *
+      * Deadlock ERROR messages are issued as part of transaction abort, so
+      * these messages should not raise error states intentionally.
+      */
+     if (log_lock_waits)
+     {
+         switch (deadlock_state)
+         {
+             case DS_SOFT_DEADLOCK:
+                 ereport(LOG,
+                     (errmsg("deadlock avoided by rearranging lock order")));
+                 break;
+             case DS_DEADLOCK_NOT_FOUND:
+                 ereport(LOG,
+                     (errmsg("statement waiting for lock for at least %d ms",
+                                 DeadlockTimeout)));
+                 break;
+             case DS_HARD_DEADLOCK:
+                 break;    /* ERROR message handled during abort */
+             default:
+                 break;
+         }
+     }
  }


Index: src/backend/utils/misc/guc.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/misc/guc.c,v
retrieving revision 1.377
diff -c -c -r1.377 guc.c
*** src/backend/utils/misc/guc.c    23 Feb 2007 21:36:18 -0000    1.377
--- src/backend/utils/misc/guc.c    3 Mar 2007 18:41:16 -0000
***************
*** 812,817 ****
--- 812,825 ----
  #endif

      {
+         {"log_lock_waits", PGC_SIGHUP, LOGGING_WHAT,
+             gettext_noop("Logs long lock wait events."),
+             NULL
+         },
+         &log_lock_waits,
+         false, NULL, NULL
+     },
+     {
          {"log_hostname", PGC_SIGHUP, LOGGING_WHAT,
              gettext_noop("Logs the host name in the connection logs."),
              gettext_noop("By default, connection logs only show the IP address "
Index: src/backend/utils/misc/postgresql.conf.sample
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v
retrieving revision 1.210
diff -c -c -r1.210 postgresql.conf.sample
*** src/backend/utils/misc/postgresql.conf.sample    2 Mar 2007 23:37:23 -0000    1.210
--- src/backend/utils/misc/postgresql.conf.sample    3 Mar 2007 18:41:16 -0000
***************
*** 340,346 ****
                      # e.g. '<%u%%%d> '
  #log_statement = 'none'            # none, ddl, mod, all
  #log_hostname = off
!
  #log_temp_files = -1            # Log temporary files equal or larger
                      # than the specified number of kilobytes.
                      # -1 disables;  0 logs all temp files
--- 340,346 ----
                      # e.g. '<%u%%%d> '
  #log_statement = 'none'            # none, ddl, mod, all
  #log_hostname = off
! #log_lock_waits = off            # Log lock waits longer than deadlock_timeout
  #log_temp_files = -1            # Log temporary files equal or larger
                      # than the specified number of kilobytes.
                      # -1 disables;  0 logs all temp files
Index: src/include/storage/lock.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/lock.h,v
retrieving revision 1.103
diff -c -c -r1.103 lock.h
*** src/include/storage/lock.h    5 Jan 2007 22:19:58 -0000    1.103
--- src/include/storage/lock.h    3 Mar 2007 18:41:17 -0000
***************
*** 394,399 ****
--- 394,406 ----
      LOCKACQUIRE_ALREADY_HELD    /* incremented count for lock already held */
  } LockAcquireResult;

+ /* Deadlock states identified by DeadlockCheck() */
+ typedef enum
+ {
+     DS_DEADLOCK_NOT_FOUND,        /* no deadlock found within database server */
+     DS_SOFT_DEADLOCK,            /* deadlock, but lock queues rearrangeable */
+     DS_HARD_DEADLOCK            /* deadlock, no way out but ERROR */
+ } DeadlockState;

  /*
   * The lockmgr's shared hash tables are partitioned to reduce contention.
***************
*** 442,448 ****
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
                          void *recdata, uint32 len);

! extern bool DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
                         LOCKMODE lockmode,
--- 449,455 ----
  extern void lock_twophase_postabort(TransactionId xid, uint16 info,
                          void *recdata, uint32 len);

! extern DeadlockState DeadLockCheck(PGPROC *proc);
  extern void DeadLockReport(void);
  extern void RememberSimpleDeadLock(PGPROC *proc1,
                         LOCKMODE lockmode,
Index: src/include/storage/proc.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/proc.h,v
retrieving revision 1.94
diff -c -c -r1.94 proc.h
*** src/include/storage/proc.h    15 Feb 2007 23:23:23 -0000    1.94
--- src/include/storage/proc.h    3 Mar 2007 18:41:17 -0000
***************
*** 127,132 ****
--- 127,133 ----
  /* configurable options */
  extern int    DeadlockTimeout;
  extern int    StatementTimeout;
+ extern bool    log_lock_waits;

  extern volatile bool cancel_from_timeout;