I've been looking at the PID file creation mechanism we currently use.
It loops in an attempt to create the PID file; if one already exists,
it attempts to remove it, provided the PID it contains no longer
belongs to a live process (there are checks for shared memory usage
as well).
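For reference, the stale-PID test boils down to something like the
following rough sketch (the helper name is mine; kill() with signal 0
does the existence check without actually delivering a signal):

    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    /* Does the process named in the old PID file still exist? */
    static bool
    pid_is_alive(pid_t pid)
    {
        if (kill(pid, 0) == 0)
            return true;        /* process exists */
        if (errno == EPERM)
            return true;        /* exists, but belongs to someone else */
        return false;           /* ESRCH: gone, the PID file is stale */
    }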
This could be cleaned up rather dramatically if we were to use one of
the file locking primitives supplied by the OS to grab an exclusive
lock on the file. The upside is that, with the locking code in place,
the postmaster would *know* whether or not another postmaster is
running. The price is that we'd have to eat a file descriptor
(closing the file means losing the lock), and we'd still have to
retain the old code anyway for platforms that have no suitable file
locking mechanism.
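To illustrate, here's a minimal sketch of what I mean, using BSD
flock() (fcntl() locking would work similarly; the function name is
made up for the example):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Try to become the one true postmaster.  Returns the (open!)
     * file descriptor on success, -1 if another postmaster already
     * holds the lock. */
    static int
    acquire_pidfile_lock(const char *path)
    {
        int     fd = open(path, O_RDWR | O_CREAT, 0600);

        if (fd < 0)
            return -1;

        /* LOCK_NB: fail immediately rather than wait for the holder. */
        if (flock(fd, LOCK_EX | LOCK_NB) < 0)
        {
            close(fd);
            return -1;          /* someone else has it */
        }

        /* The descriptor must stay open for the life of the process:
         * closing it releases the lock.  That's the eaten descriptor. */
        return fd;
    }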
The first question for the group is: is it worth doing that?
The second question for the group is: if we do indeed decide to do
file locking in that manner, what *other* applications of the OS-level
file locking mechanism will we have? Some of them allow you to lock
sections of a file, for instance, while others apply a lock on the
entire file. It's not clear to me that the former will be available
on all the platforms we're interested in, so locking the entire file
is probably the only thing we can really count on (and keep in mind
that even if an API to lock sections of a file is available, it may
well be that it's implemented by locking the entire file anyway).
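For concreteness, POSIX fcntl() range locking looks roughly like
this (a sketch, not proposed code); note that l_len == 0 means
"through end of file", so a whole-file lock is just the degenerate
range:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Exclusively lock bytes [start, start+len) of fd; len == 0
     * means "through end of file", so lock_range(fd, 0, 0) locks
     * the entire file. */
    static int
    lock_range(int fd, off_t start, off_t len)
    {
        struct flock    fl;

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;        /* exclusive */
        fl.l_whence = SEEK_SET;
        fl.l_start = start;
        fl.l_len = len;

        return fcntl(fd, F_SETLK, &fl); /* -1 + EACCES/EAGAIN on conflict */
    }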
What I had in mind was a file locking function that would take a file
descriptor and a file range. If the underlying OS mechanism supported
it, it would lock that range. The interesting case is when the
underlying OS mechanism does *not* support it. Would it be more
useful in that case to return an error indication? Would it be more
useful to simply lock the entire file? If no underlying file locking
mechanism is available at all, it seems obvious to me that the
function would always have to return an error.
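Concretely, the interface might look like the sketch below. The
function name, the whole_file_ok flag, and the HAVE_* macros are all
invented for illustration; the fallback branches show the two
policies under discussion:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <string.h>
    #include <sys/file.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical wrapper.  Returns 0 on success, -1 with errno set
     * otherwise.  whole_file_ok says whether the caller can tolerate
     * a coarser lock than it asked for. */
    int
    pg_lock_range(int fd, off_t start, off_t len, bool whole_file_ok)
    {
    #if defined(HAVE_FCNTL_RANGE_LOCKS)
        struct flock    fl;

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = start;
        fl.l_len = len;
        return fcntl(fd, F_SETLK, &fl);
    #elif defined(HAVE_FLOCK)
        /* Range locks unsupported: error out, or silently widen the
         * lock to the whole file?  This version lets the caller choose. */
        (void) start;               /* can't honor the range here */
        (void) len;
        if (!whole_file_ok)
        {
            errno = ENOTSUP;
            return -1;
        }
        return flock(fd, LOCK_EX | LOCK_NB);
    #else
        /* No locking primitive at all: an error is the only honest
         * answer. */
        (void) fd; (void) start; (void) len; (void) whole_file_ok;
        errno = ENOTSUP;
        return -1;
    #endif
    }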
Thoughts?
--
Kevin Brown kevin@sysexperts.com