Re: Changing shared_buffers without restart - Mailing list pgsql-hackers
| From | Dmitry Dolgov |
| --- | --- |
| Subject | Re: Changing shared_buffers without restart |
| Date | |
| Msg-id | qitidzroynvd3qj5d2wuiuxopplljsiyz3amrwul7mfukdoqu5@5ygghzkahtk2 |
| In response to | Re: Changing shared_buffers without restart (Tomas Vondra <tomas@vondra.me>) |
| Responses | Re: Changing shared_buffers without restart |
| List | pgsql-hackers |
> On Mon, Jul 07, 2025 at 01:57:42PM +0200, Tomas Vondra wrote:
> > It could be potentially useful for any GUC that controls a resource
> > shared between backends and requires a restart. To make such a GUC
> > changeable online, every backend has to perform some action, and they
> > have to coordinate to make sure things are consistent -- exactly the
> > use case we're trying to address; shared_buffers just happens to be
> > one of such resources. While I agree that the currently implemented
> > interface is wrong (e.g. it doesn't prevent pending GUCs from being
> > stored in PG_AUTOCONF_FILENAME, which has to happen only when the new
> > value is actually applied), it still makes sense to me to allow a
> > more flexible lifecycle for certain GUCs.
> >
> > An example I could think of is shared_preload_libraries. If we ever
> > want to do a hot reload of libraries, this will follow the procedure
> > above: every backend has to do something like dlclose / dlopen and
> > make sure that other backends have the same version of the library.
> > Another maybe less far-fetched example is max_worker_processes, which
> > AFAICT is mostly used to control the number of slots in shared memory
> > (although it's also stored in the control file, which makes things
> > more complicated).
> >
>
> Not sure. My concern is that the config reload / GUC assign hook was
> not designed with this use case in mind, and we'll run into issues. I
> also dislike the "async" nature of this, which makes it harder to e.g.
> abort the change, etc.

Yes, the GUC assign hook was not designed for that. That's why the idea
is to extend the design and see if it will be good enough.

> > I'm somewhat torn between those two options myself. The more I think
> > about this topic, the more I'm convinced that pending GUCs make
> > sense, but the more work I see needed to implement that. Maybe a good
> > middle ground is to go with a simple utility command, as Ashutosh was
> > suggesting, and keep the pending GUC infrastructure on top of that as
> > an optional patch.
> >
>
> What about a simple function? Probably not as clean as a proper utility
> command, and it implies a transaction - not sure if that could be a
> problem for some part of this.

I'm currently inclined towards this, plus a new worker to coordinate the
process, with everything else provided as an optional follow-up step.
Will try this out unless there are any objections.

> >> Stuff like PGPROC, fast-path locks etc. are allocated as part of
> >> MAIN_SHMEM_SEGMENT, right? Yet the ratio assigns 10% of the maximum
> >> space for that. If I significantly increase GUCs like
> >> max_connections or max_locks_per_transaction, how do you know it
> >> didn't exceed the 10%?
> >
> > Still don't see the problem. The 10% we're talking about is the
> > reserved space, thus it affects only the shared memory resizing
> > operation and nothing else. The real memory allocated is less than or
> > equal to the reserved size, but is allocated and managed completely
> > in the same way as without the patch, including size calculations. If
> > some GUCs are increased and drive real memory usage high, it will be
> > handled as before. Are we on the same page about this?
> >
>
> How do you know reserving 10% is sufficient? Imagine I set

I see, I was convinced you were talking about changing something at
runtime, which would hit the reservation boundary. But you mean all of
that simply at the start, and yes, of course it will fail -- see the
point about SHMEM_RATIO being just a temporary hack.
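For readers following the reserved-vs-committed distinction above: a minimal standalone sketch of the general technique can make it concrete. This is not code from the patch; it only illustrates, with plain mmap/mprotect on Linux and made-up sizes, why a reservation costs nothing until the resize operation actually needs it. A large range of address space is reserved with PROT_NONE, a smaller part is committed for real use, and a later "resize" just commits more of the reservation in place, so the base address never changes:

```c
/*
 * Hedged sketch, not taken from the patch: reserved vs. committed memory.
 * Reserve a large PROT_NONE range up front, commit only part of it, then
 * grow the usable region in place with mprotect(); existing pointers into
 * the region stay valid. Linux-specific (MAP_ANONYMOUS, MAP_NORESERVE).
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int
main(void)
{
	size_t		reserved = (size_t) 1 << 30;	/* 1 GB of address space only */
	size_t		committed = (size_t) 1 << 28;	/* 256 MB usable at start */
	size_t		grown = (size_t) 1 << 29;		/* 512 MB after the "resize" */

	/* Reserve: no access and no backing memory committed yet. */
	char	   *base = mmap(NULL, reserved, PROT_NONE,
							MAP_SHARED | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	if (base == MAP_FAILED)
	{
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Commit the initial portion; only this part is real, usable memory. */
	if (mprotect(base, committed, PROT_READ | PROT_WRITE) != 0)
	{
		perror("mprotect");
		return EXIT_FAILURE;
	}
	base[committed - 1] = 1;	/* fine: within the committed range */

	/* "Resize" within the reservation: commit more, same base address. */
	if (mprotect(base, grown, PROT_READ | PROT_WRITE) != 0)
	{
		perror("mprotect");
		return EXIT_FAILURE;
	}
	base[grown - 1] = 1;		/* fine now: newly committed memory */

	printf("reserved %zu bytes, committed %zu, base %p\n",
		   reserved, grown, (void *) base);
	munmap(base, reserved);
	return EXIT_SUCCESS;
}
```

Asking for more than `reserved` would fail, which mirrors the point made above: a fixed reservation ratio only ever bites at resize time (or at startup if the initial sizes already exceed it), hence SHMEM_RATIO being described as a temporary hack.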