Kevin Brown <kevin@sysexperts.com> writes:
> This could be cleaned up rather dramatically if we were to use one of
> the file locking primitives supplied by the OS to grab an exclusive
> lock on the file, and the upside is that, when the locking code is
> used, the postmaster would *know* whether or not there's another
> postmaster running, but the price for that is that we'd have to eat a
> file descriptor (closing the file means losing the lock),
Yeah, I was just thinking about that this morning. Eating one file
descriptor in the postmaster is absolutely no problem --- the postmaster
doesn't have all that many files open anyhow. What I was wondering was
whether it was worth eating an FD for every backend process, by holding
open the file inherited from the postmaster. If we did that, we would
have a reliable way of detecting that the old postmaster died but left
surviving child backends. (As I mentioned in a nearby flamefest, the
existing interlock for this situation strikes me as mighty fragile.)
But this only wins if a child process inheriting an open file also
inherits copies of any locks held by the parent. If not, then the
issue is moot. Anybody have any idea if file locks work that way?
Is it portable??
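
Just to sketch what I'm thinking of (untested, and the file name and
error handling are purely illustrative, not proposed code): with
BSD-style flock() the lock belongs to the open file description, so a
backend forked after the postmaster takes the lock keeps that lock
alive through the inherited descriptor.  POSIX fcntl() record locks,
by contrast, are per-process and are not inherited across fork(),
which is exactly the portability worry.

    #include <sys/file.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        int     fd = open("postmaster.pid", O_RDWR | O_CREAT, 0600);

        if (fd < 0)
        {
            perror("open");
            exit(1);
        }

        /* Fail at once if some other process already holds the lock. */
        if (flock(fd, LOCK_EX | LOCK_NB) < 0)
        {
            fprintf(stderr, "lock file is already locked\n");
            exit(1);
        }

        if (fork() == 0)
        {
            /*
             * Child ("backend"): shares the parent's open file
             * description, and hence the flock() lock.  Even if the
             * parent exits now, a second copy of this program still
             * fails to get the lock until the child closes fd or dies.
             */
            pause();
        }

        /* Parent ("postmaster") goes about its business... */
        return 0;
    }

If that's right, then holding the FD in every backend only buys us the
orphaned-backend interlock on platforms with flock()-style semantics
(or something that emulates them).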
> The second question for the group is: if we do indeed decide to do
> file locking in that manner, what *other* applications of the OS-level
> file locking mechanism will we have?
I can't see any use in partial-file locks for us, and would not want
to design an internal API that expects them to work.
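
(For reference, and only as a sketch rather than proposed code: with
POSIX fcntl() a "whole file" lock is just the degenerate byte-range
lock with l_start = 0 and l_len = 0, so an internal API that exposes
only whole-file locks loses nothing.)

    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Sketch: request an exclusive lock covering the entire file.
     * l_len = 0 means "to end of file, however large it grows", which
     * is the only case we'd actually rely on.  Returns -1 (with errno
     * EACCES or EAGAIN) if another process holds a conflicting lock.
     */
    int
    lock_whole_file(int fd)
    {
        struct flock    fl;

        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;
        fl.l_pid = 0;

        return fcntl(fd, F_SETLK, &fl);
    }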
regards, tom lane