Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Uh, why would you need more than maxbackends? Can't it be indexed by
> slot number --- each backend has a slot? Maybe I am missing something.
The postmaster has no way to know what slot number each backend will
get. For that matter, a sub-postmaster doesn't know yet either.
I think the simplest way to make this work is to use an array that's
2*MaxBackends items long (corresponding to the max number of children the
postmaster will fork). Establish the convention that unused entries are
zero. Then:
1. On forking a child, the postmaster scans the array for a free
(zero) slot, and stashes the cancel key and PID there (in that
order).
2. On receiving a child-termination report, the postmaster scans
the array for the corresponding entry, and zeroes it out (PID
first).
(Obviously these algorithms could be improved if they turn out to be
bottlenecks, but for the first cut KISS is applicable.)
3. To find or check a cancel key, a sub-postmaster scans the
array looking for the desired PID (either its own, or the one
it got from an incoming cancel request message).
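
Concretely, the scheme could look something along these lines in C (a
minimal sketch only; the names ChildSlot, child_slots, MAX_CHILDREN,
record_child, forget_child, lookup_cancel_key and the fixed array size
are illustrative, not proposed code; the array is assumed to be inherited
by the children across fork()):

#include <sys/types.h>

#define MAX_CHILDREN 64			/* stands in for the 2 * max-backends limit */

typedef struct ChildSlot
{
	pid_t		pid;			/* 0 means the slot is free */
	long		cancel_key;
} ChildSlot;

static ChildSlot child_slots[MAX_CHILDREN];	/* zero-initialized: all free */

/* Step 1: postmaster records a newly forked child (cancel key before PID). */
static void
record_child(pid_t pid, long cancel_key)
{
	int			i;

	for (i = 0; i < MAX_CHILDREN; i++)
	{
		if (child_slots[i].pid == 0)
		{
			child_slots[i].cancel_key = cancel_key;
			child_slots[i].pid = pid;	/* PID written last */
			return;
		}
	}
}

/* Step 2: postmaster zeroes the entry on child termination (PID first). */
static void
forget_child(pid_t pid)
{
	int			i;

	for (i = 0; i < MAX_CHILDREN; i++)
	{
		if (child_slots[i].pid == pid)
		{
			child_slots[i].pid = 0;		/* PID cleared first */
			child_slots[i].cancel_key = 0;
			return;
		}
	}
}

/* Step 3: find the cancel key for a given PID; 0 means "not found". */
static long
lookup_cancel_key(pid_t pid)
{
	int			i;

	for (i = 0; i < MAX_CHILDREN; i++)
	{
		if (child_slots[i].pid == pid)
			return child_slots[i].cancel_key;
	}
	return 0;
}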
There is a potential race condition if a sub-postmaster scans the array
before the postmaster has been able to store its PID there. I think it
is sufficient for the sub-postmaster to sleep a few milliseconds and try
again if it can't find its own PID in the array. There is no race
condition possible for the ProcessCancelRequest case --- the
sub-postmaster that spawned an active backend must have found its entry
before it could have sent the cancel key to the client, so any valid
cancel request from a client must reference an already-existing entry
in the array.
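
The sub-postmaster's side of that workaround might look roughly like this
(again just a sketch; lookup_cancel_key is the illustrative helper above,
and the delay and retry count are arbitrary, not part of the proposal):

#include <unistd.h>

static long
get_my_cancel_key(void)
{
	pid_t		mypid = getpid();
	int			tries;

	for (tries = 0; tries < 100; tries++)
	{
		long		key = lookup_cancel_key(mypid);

		if (key != 0)
			return key;
		usleep(5000);			/* ~5 ms, then scan the array again */
	}
	return 0;					/* give up; caller treats 0 as failure */
}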
regards, tom lane