Re: can we optimize STACK_DEPTH_SLOP - Mailing list pgsql-hackers

From Tom Lane
Subject Re: can we optimize STACK_DEPTH_SLOP
Msg-id 32729.1467734050@sss.pgh.pa.us
In response to can we optimize STACK_DEPTH_SLOP  (Greg Stark <stark@mit.edu>)
Responses Re: can we optimize STACK_DEPTH_SLOP  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Greg Stark <stark@mit.edu> writes:
> Poking at NetBSD kernel source it looks like the default ulimit -s
> depends on the architecture and ranges from 512k to 16M. Postgres
> insists on max_stack_depth being STACK_DEPTH_SLOP -- ie 512kB -- less
> than the ulimit setting making it impossible to start up on
> architectures with a default of 512kB without raising the ulimit.

> If we could just lower it to 384kB then Postgres would start up but I
> wonder if we should just use MIN(stack_rlimit/2, STACK_DEPTH_SLOP)
> so that there's always a setting of max_stack_depth that would allow
> Postgres to start.

I'm pretty nervous about reducing that materially without any
investigation into how much of the slop we actually use.  Our assumption
so far has generally been that only recursive routines need to have any
stack depth check; but there are plenty of very deep non-recursive call
paths.  I do not think we're doing people any favors by letting them skip
fooling with "ulimit -s" if the result is that their database crashes
under stress.  For that matter, even if we were sure we'd produce a
"stack too deep" error rather than crashing, that's still not very nice
if it happens on run-of-the-mill queries.
        regards, tom lane


