Re: listen/notify argument (old topic revisited) - Mailing list pgsql-hackers
From | Jeff Davis |
---|---|
Subject | Re: listen/notify argument (old topic revisited) |
Date | 2002-07-02 19:02 |
Msg-id | 200207021902.58958.list-pgsql-hackers@empires.org |
In response to | Re: listen/notify argument (old topic revisited) (Bruce Momjian <pgman@candle.pha.pa.us>) |
List | pgsql-hackers |
On Tuesday 02 July 2002 06:03 pm, Bruce Momjian wrote:
> Let me tell you what would be really interesting. If we didn't report
> the pid of the notifying process and we didn't allow arbitrary strings
> for notify (just pg_class relation names), we could just add a counter
> to pg_class that is updated for every notify. If a backend is
> listening, it remembers the counter at listen time, and on every commit
> checks the pg_class counter to see if it has incremented. That way,
> there is no queue, no shared memory, and there is no scanning. You just
> pull up the cache entry for pg_class and look at the counter.
>
> One problem is that pg_class would be updated more frequently. Anyway,
> just an idea.

I think that currently a lot of people use select() (after all, it's
mentioned in the docs) in the frontend to determine when a notify comes
into a listening backend. If the backend only checks on commit, and the
backend is largely idle except for notify processing, might it be a
while before the frontend realizes that a notify was sent?

Regards,
	Jeff

> ---------------------------------------------------------------------------
>
> Tom Lane wrote:
> > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > Is disk i/o a real performance
> > > penalty for notify, and is performance a huge issue for notify anyway,
> >
> > Yes, and yes. I have used NOTIFY in production applications, and I know
> > that performance is an issue.
> >
> > >> The queue limit problem is a valid argument, but it's the only valid
> > >> complaint IMHO; and it seems a reasonable tradeoff to make for the
> > >> other advantages.
> >
> > BTW, it occurs to me that as long as we make this an independent message
> > buffer used only for NOTIFY (and *not* try to merge it with SI), we
> > don't have to put up with overrun-reset behavior. The overrun reset
> > approach is useful for SI because there are only limited times when
> > we are prepared to handle SI notification in the backend work cycle.
> > However, I think a self-contained NOTIFY mechanism could be much more
> > flexible about when it will remove messages from the shared buffer.
> > Consider this:
> >
> > 1. To send NOTIFY: grab write lock on shared-memory circular buffer.
> > If enough space, insert message, release lock, send signal, done.
> > If not enough space, release lock, send signal, sleep some small
> > amount of time, and then try again. (Hard failure would occur only
> > if the proposed message size exceeds the buffer size; as long as we
> > make the buffer size a parameter, this is the DBA's fault not ours.)
> >
> > 2. On receipt of signal: grab read lock on shared-memory circular
> > buffer, copy all data up to write pointer into private memory,
> > advance my (per-process) read pointer, release lock. This would be
> > safe to do pretty much anywhere we're allowed to malloc more space,
> > so it could be done say at the same points where we check for cancel
> > interrupts. Therefore, the expected time before the shared buffer
> > is emptied after a signal is pretty small.
> >
> > In this design, if someone sits in a transaction for a long time,
> > there is no risk of shared memory overflow; that backend's private
> > memory for not-yet-reported NOTIFYs could grow large, but that's
> > his problem. (We could avoid unnecessary growth by not storing
> > messages that don't correspond to active LISTENs for that backend.
> > Indeed, a backend with no active LISTENs could be left out of the
> > circular buffer participation list altogether.)
> >
> > We'd need to separate this processing from the processing that's used to
> > force SI queue reading (dz's old patch), so we'd need one more signal
> > code than we use now. But we do have SIGUSR1 available.
> >
> > 			regards, tom lane
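For concreteness, here is a toy, standalone model of the bookkeeping Bruce describes above: a per-relation counter that every NOTIFY bumps, that LISTEN remembers, and that the backend compares at commit. Every name in it (RelCacheEntry, ListenState, and so on) is invented for illustration; the real mechanism would live in the relcache and the commit path.

```c
/*
 * Toy model of Bruce's pg_class-counter idea (invented names; the real
 * counter would be a pg_class column visible through the relcache).
 */
#include <stdio.h>
#include <stdint.h>

typedef struct RelCacheEntry
{
    const char *relname;
    uint64_t    notify_counter;     /* bumped by every NOTIFY on this name */
} RelCacheEntry;

typedef struct ListenState
{
    RelCacheEntry *rel;
    uint64_t       seen;            /* counter value at LISTEN time */
} ListenState;

static void
do_notify(RelCacheEntry *rel)
{
    rel->notify_counter++;          /* in reality, an update to pg_class */
}

/* At commit: no queue, no shared memory, no scanning -- just compare. */
static int
check_notify_at_commit(ListenState *ls)
{
    if (ls->rel->notify_counter != ls->seen)
    {
        ls->seen = ls->rel->notify_counter;
        return 1;                   /* tell the client a NOTIFY arrived */
    }
    return 0;
}

int
main(void)
{
    RelCacheEntry rel = { "my_table", 0 };
    ListenState   ls  = { &rel, rel.notify_counter };   /* LISTEN my_table */

    do_notify(&rel);                /* another backend: NOTIFY my_table */

    if (check_notify_at_commit(&ls))
        printf("NOTIFY on %s\n", rel.relname);
    return 0;
}
```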
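The frontend pattern Jeff is talking about is the one the libpq documentation shows: block in select() on the connection's socket, then let libpq absorb the input and hand out any accumulated notifications. Roughly (error handling trimmed, listening on a made-up my_table):

```c
/*
 * The select()-based LISTEN loop from the libpq documentation, slightly
 * condensed.  Connection parameters come from the environment.
 */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("");
    PGnotify *notify;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(PQexec(conn, "LISTEN my_table"));

    for (;;)
    {
        int    sock = PQsocket(conn);
        fd_set input_mask;

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);

        /* Block until the backend sends something on this connection. */
        if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
            break;

        /* Absorb whatever arrived and hand out any NOTIFY messages. */
        PQconsumeInput(conn);
        while ((notify = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY of '%s' from backend pid %d\n",
                   notify->relname, notify->be_pid);
            PQfreemem(notify);
        }
    }
    PQfinish(conn);
    return 0;
}
```

If the backend deferred all notify checks to commit time, an idle listening backend would send nothing down this socket, so the select() here would block indefinitely -- which is exactly Jeff's concern.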
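And here is a rough in-process model of Tom's steps 1 and 2, with a pthread mutex standing in for the shared-memory lock and the SIGUSR1 plumbing elided. All of the names are made up; a real implementation would put the buffer in shared memory, track one read pointer per listening backend, and have the writer compute the minimum read position across them.

```c
/*
 * Toy model of the proposed NOTIFY circular buffer (invented names; the
 * real version would live in shared memory behind a lock, with SIGUSR1
 * waking the readers).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define BUF_SIZE 64                     /* "a parameter the DBA sets" */

static struct
{
    pthread_mutex_t lock;
    char            data[BUF_SIZE];
    size_t          write_pos;          /* total bytes ever written */
} notify_buf = { PTHREAD_MUTEX_INITIALIZER };

/*
 * Step 1: sender.  If the message fits, insert it and signal; otherwise
 * release the lock, sleep briefly, and retry.  Hard failure happens only
 * when the message itself is bigger than the whole buffer.
 */
static int
notify_send(const char *msg, size_t len, size_t min_read_pos)
{
    if (len > BUF_SIZE)
        return -1;
    for (;;)
    {
        pthread_mutex_lock(&notify_buf.lock);
        if (notify_buf.write_pos + len - min_read_pos <= BUF_SIZE)
        {
            for (size_t i = 0; i < len; i++)
                notify_buf.data[(notify_buf.write_pos + i) % BUF_SIZE] = msg[i];
            notify_buf.write_pos += len;
            pthread_mutex_unlock(&notify_buf.lock);
            /* kill(listener_pid, SIGUSR1) would go here */
            return 0;
        }
        pthread_mutex_unlock(&notify_buf.lock);
        usleep(1000);                   /* full: give readers time to drain */
    }
}

/*
 * Step 2: reader, callable anywhere malloc is safe.  Copy everything
 * between this backend's read pointer and the write pointer into private
 * memory and advance the read pointer.
 */
static size_t
notify_drain(size_t *read_pos, char *private_buf)
{
    pthread_mutex_lock(&notify_buf.lock);
    size_t n = notify_buf.write_pos - *read_pos;
    for (size_t i = 0; i < n; i++)
        private_buf[i] = notify_buf.data[(*read_pos + i) % BUF_SIZE];
    *read_pos = notify_buf.write_pos;
    pthread_mutex_unlock(&notify_buf.lock);
    return n;
}

int
main(void)
{
    size_t read_pos = 0;
    char   private_buf[BUF_SIZE];

    notify_send("my_table", sizeof "my_table", read_pos);
    size_t n = notify_drain(&read_pos, private_buf);
    printf("drained %zu bytes: %s\n", n, private_buf);
    return 0;
}
```

The property Tom is after falls out of the read side: a reader can drain its share of the buffer anywhere malloc is safe, so the shared buffer empties quickly and a long-running transaction only bloats that backend's private memory.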