Re: Changing shared_buffers without restart - Mailing list pgsql-hackers

From Dmitry Dolgov
Subject Re: Changing shared_buffers without restart
Msg-id nu3pggzvuqomroda5cliicehtuam4kd4sxfm27d5rqpejnqutf@micq2wwi66jc
In response to Re: Changing shared_buffers without restart  (Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>)
List pgsql-hackers
> On Mon, Jul 14, 2025 at 01:55:39PM +0530, Ashutosh Bapat wrote:
> > You're right of course, a buffer id could be returned from the
> > ClockSweep and from the custom strategy buffer ring. But from what I see
> > those are picking a buffer from the set of already utilized buffers,
> > meaning that for a buffer to land there it first has to go through
> > StrategyControl->firstFreeBuffer, and hence the idea above will be a
> > requirement for those as well.
>
> That isn't true. A buffer which was never in the free list can still
> be picked up by clock sweep.

How's that?

> > Yep, making buffers available would be equivalent to declaring the new
> > NBuffers. What I think is important here is to note, that we use two
> > mechanisms for coordination: the shared structure ShmemControl that
> > shares the state of operation, and ProcSignal that tells backends to do
> > something (change the memory mapping). Declaring the new NBuffers could
> > be done via ShmemControl, atomically applying the new value, instead of
> > sending a ProcSignal -- this way there is no need for backends to wait,
> > but StrategyControl would need to use the ShmemControl instead of local
> > copy of NBuffers. Does it make sense to you?
>
> When expanding buffers, letting StrategyControl continue with the old
> NBuffers may work. When propagating the new buffer value we have to
> reinitialize StrategyControl to use new NBuffers. But when shrinking,
> the StrategyControl needs to be initialized with the new NBuffers,
> lest it picks a victim from buffers being shrunk. And then if the
> operation fails, we have to reinitialize the StrategyControl again
> with the old NBuffers.

Right, those two cases will become more asymmetrical: when expanding, the
new number of available buffers would have to be propagated to the
backends at the end, once the buffers are ready; when shrinking, it would
have to be propagated at the start, so that backends stop allocating
buffers from the soon-to-be-unavailable range.

> > > What about when shrinking the buffers? Do you plan to make all the
> > > backends wait while the coordinator is evicting buffers?
> >
> > No, it was never planned like that, since it could easily end up with
> > coordinator waiting for the backend to unpin a buffer, and the backend
> > to wait for a signal from the coordinator.
>
> I agree with the deadlock situation. How do we prevent the backends
> from picking or continuing to work with a buffer from buffers being
> shrunk then? Each backend then has to do something about their
> respective pinned buffers.

The idea I've got so far is to stop allocating buffers from the
unavailable range and wait until backends have unpinned all unavailable
buffers. We either wait unconditionally until that happens, or bail out
after a certain timeout.

It's probably possible to force backends to unpin the buffers they're
working with, but that sounds much more problematic to me. What do you
think?


