Hi,

On 2023-04-08 21:29:54 -0700, Noah Misch wrote:
> On Sat, Apr 08, 2023 at 11:08:16AM -0700, Andres Freund wrote:
> > On 2023-04-07 23:04:08 -0700, Andres Freund wrote:
> > > There were some failures in CI (e.g. [1], and perhaps also on the
> > > buildfarm, I didn't check yet) about "no unpinned buffers available". I
> > > was worried for a moment that this could actually be related to the bulk
> > > extension patch.
> > >
> > > But it looks like it's older - and not caused by direct_io support
> > > (except by way of the test existing). I reproduced the issue locally by
> > > setting s_b even lower, to 16, and making the ERROR a PANIC.
> > >
> > > [backtrace]
>
> I get an ERROR, not a PANIC:

What I meant is that I changed the code to use PANIC, to make it easier to
get a backtrace.

> > > If you look at log_newpage_range(), it's not surprising that we get this error
> > > - it pins up to 32 buffers at once.
> > >
> > > Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from
> > > c6b92041d385.
>
> > > Do we care about fixing this in the backbranches? Probably not, given there
> > > haven't been user complaints?
>
> I would not. This is only going to come up where the user goes out of the way
> to use near-minimum shared_buffers.

It's not *just* that scenario. With a few concurrent connections you can get
into problematic territory even with halfway reasonable shared buffers.

> > Here's a quick prototype of this approach.
>
> This looks fine. I'm not enthusiastic about incurring post-startup cycles to
> cater to allocating less than 512k*max_connections of shared buffers, but I
> expect the cycles in question are negligible here.

Yea, I can't imagine it'd matter compared to the other costs. Arguably it'd
even allow us to crank up the maximum batch size further.

Greetings,

Andres Freund