Re: BUG #5566: High levels of savepoint nesting trigger stack overflow in AssignTransactionId

From Andres Freund
Subject Re: BUG #5566: High levels of savepoint nesting trigger stack overflow in AssignTransactionId
Date 19 July 2010 21:14
Msg-id 201007192114.28426.andres@anarazel.de
In response to Re: BUG #5566: High levels of savepoint nesting trigger stack overflow in AssignTransactionId  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
List pgsql-bugs
On Monday 19 July 2010 21:03:25 Heikki Linnakangas wrote:
> On 19/07/10 21:32, Andres Freund wrote:
> > On Monday 19 July 2010 20:19:35 Heikki Linnakangas wrote:
> >> On 19/07/10 20:58, Andres Freund wrote:
> >>> On Monday 19 July 2010 19:57:13 Alvaro Herrera wrote:
> >>>> Excerpts from Andres Freund's message of Mon Jul 19 11:58:06 -0400 2010:
> >>>>> On Monday 19 July 2010 17:26:25 Hans van Kranenburg wrote:
> >>>>>> When issuing an update statement in a transaction with ~30800 levels
> >>>>>> of savepoint nesting (which is insane, but possible), PostgreSQL
> >>>>>> segfaults due to a stack overflow in the AssignTransactionId
> >>>>>> function, which recursively assigns transaction ids to parent
> >>>>>> transactions.
> >>>>>
> >>>>> It seems easy enough to throw a check_stack_depth() in there -
> >>>>> survives make check here.
> >>>>
> >>>> I wonder if it would work to deal with the problem non-recursively
> >>>> instead.  We don't impose subxact depth restrictions elsewhere, why
> >>>> start now?
> >>>
> >>> It looks trivial enough, but what's the point?
> >>
> >> To support more than <insert arbitrary limit here> subtransactions,
> >> obviously.
> >
> > Well. I got that far. But why is that something worthy of support?
>
> Because it's not really much harder than putting in the limit.
The difference is that you then get errors like:

WARNING:  53200: out of shared memory
LOCATION:  ShmemAlloc, shmem.c:190
ERROR:  53200: out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
LOCATION:  LockAcquireExtended, lock.c:680
STATEMENT:  INSERT INTO tstack VALUES(1)

After which pg takes longer to clean up the transaction than I am willing to
wait (OK, OK, that's at an obscene 100k nesting level).

At a 50k nesting level a single commit takes some minutes as well (no cassert, -O0).

All that seems pretty annoying to debug...
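
For anyone wanting to reproduce this, something along the following lines does
it. This is a quick libpq sketch for illustration, not Hans's original script;
the tstack table matches the STATEMENT above, everything else is made up:

/*
 * Reproducer sketch: nest a large number of savepoints, then issue a
 * write so AssignTransactionId has to walk the whole parent chain.
 * Build with: cc repro.c -lpq -o repro
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");   /* connection settings from environment */
    PGresult   *res;
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "CREATE TABLE tstack(i int)"));

    /* reusing the name is fine -- each SAVEPOINT still opens a new level */
    for (i = 0; i < 100000; i++)
        PQclear(PQexec(conn, "SAVEPOINT s"));

    /* the first write forces xid assignment for the entire parent chain */
    res = PQexec(conn, "INSERT INTO tstack VALUES(1)");
    fprintf(stderr, "%s", PQresultErrorMessage(res));
    PQclear(res);

    PQfinish(conn);
    return 0;
}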


> Besides, if you put in a limit of 3000, someone with a smaller stack might
> still run out of stack space.
I had left that check_stack_depth() check in there.

Will send a patch; I have it locally and just need to verify it.
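
The approach is simply to flatten the recursion: collect the ancestors that
still have no xid and assign them outermost-first in a loop. Roughly like this
(a simplified sketch with stand-in types and a stubbed xid counter, not the
actual patch; the real function lives in xact.c):

#include <stdlib.h>

typedef struct TransactionStateData
{
    unsigned int    transactionId;      /* 0 = no xid assigned yet */
    int             nestingLevel;       /* depth, bounds the ancestor count */
    struct TransactionStateData *parent;
} TransactionStateData, *TransactionState;

static unsigned int next_xid = 100;     /* stand-in for GetNewTransactionId() */

void
assign_one_xid(TransactionState s)
{
    /* stand-in for the real per-subtransaction assignment work */
    s->transactionId = next_xid++;
}

void
AssignTransactionId(TransactionState s)
{
    /* the check_stack_depth() call would stay here, as noted above */

    if (s->parent != NULL && s->parent->transactionId == 0)
    {
        TransactionState *parents;
        int               nparents = 0;
        TransactionState  p = s->parent;

        parents = malloc(sizeof(TransactionState) * s->nestingLevel);
        while (p != NULL && p->transactionId == 0)
        {
            parents[nparents++] = p;
            p = p->parent;
        }

        /* assign the outermost ancestors first, as the recursion did */
        while (nparents > 0)
            assign_one_xid(parents[--nparents]);

        free(parents);
    }

    assign_one_xid(s);
}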

Andres
