Tom Lane <tgl@sss.pgh.pa.us> writes:
> This way, if someone moves a data directory with a running postmaster
> in it, nothing breaks at all. It would probably run a bit faster too,
> since file open calls would have fewer directories to traverse through.
On reasonable platforms the time spent traversing the extra directories
shouldn't be a problem. However, if a lot of metadata operations are happening
concurrently, absolute paths can cause lock contention on the root and the
first few path components, since every lookup has to walk through them.
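
Just to illustrate what the relative-path scheme amounts to -- a minimal
standalone sketch, not actual backend code, and "base/1/1259" is only an
example path -- chdir() into $PGDATA once at startup, then open everything
relative to it:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        /* chdir into the data directory once, at startup */
        if (argc < 2 || chdir(argv[1]) < 0)
        {
            perror("chdir");
            exit(1);
        }

        /*
         * Relative open: the kernel resolves the name starting at the
         * per-process cwd, so /, /usr, /usr/local, ... are never touched.
         */
        int fd = open("base/1/1259", O_RDONLY);

        if (fd >= 0)
            close(fd);
        return 0;
    }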
> The only downside I can see to it is that backend and postmaster crashes
> would all consistently dump core into $PGDATA (on platforms where cores
> dump into the working directory, which is many but not all). The
> current arrangement makes backends dump core into the subdirectory for
> the database they are in, which sometimes makes it a bit easier to
> identify what's what. But I can't see that that's a valuable enough
> property to override the advantages of using relative paths.
Having dumps occur in per-database directories vs per-cluster directories
isn't really that big a deal.
However, it might be nice to have dumps go to a configurable place, perhaps
even one that can be set by a session-settable GUC. That would make debugging
feasible for non-root users. (You might need a second GUC to enable the
feature, for security reasons.)
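
Something along these lines, say -- purely a sketch, and the GUC name
core_dump_directory is made up. The fatal-signal handler chdir()s to the
configured directory before re-raising the signal, so the core lands there:

    #include <signal.h>
    #include <unistd.h>

    /* Hypothetical GUC; would really be read from the config machinery. */
    static const char *core_dump_directory = "/tmp/pg_cores";

    static void
    crash_handler(int signo)
    {
        /* Best effort; chdir() is async-signal-safe per POSIX. */
        if (core_dump_directory != NULL)
            (void) chdir(core_dump_directory);

        /* Restore the default action and re-raise so we still dump core. */
        signal(signo, SIG_DFL);
        raise(signo);
    }

    int
    main(void)
    {
        signal(SIGSEGV, crash_handler);
        *(volatile int *) 0 = 0;    /* simulate a backend crash */
        return 0;
    }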
There's another approach that seems more robust. When initdb is run, randomly
generate a unique id. Then, whenever creating a file, include that unique id
in its first block. Whenever you open a file, sanity-check the first block;
if the id doesn't match, PANIC immediately. (Hm, actually you don't even need
to PANIC; just shutting down the one backend should be enough.)
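
Roughly like this -- just a sketch of the idea, where ClusterId, stamp_file,
open_checked and the 8-byte id size are all made up for illustration:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define CLUSTER_ID_LEN 8

    /* In reality this would live in a control file written by initdb. */
    static uint8_t ClusterId[CLUSTER_ID_LEN];

    /* Creation time: stamp the cluster id into the file's first block. */
    static void
    stamp_file(int fd)
    {
        if (write(fd, ClusterId, CLUSTER_ID_LEN) != CLUSTER_ID_LEN)
        {
            perror("write");
            exit(1);
        }
    }

    /* Open time: sanity-check the stamp before handing back the fd. */
    static int
    open_checked(const char *path)
    {
        uint8_t buf[CLUSTER_ID_LEN];
        int     fd = open(path, O_RDWR);

        if (fd < 0)
            return -1;
        if (read(fd, buf, CLUSTER_ID_LEN) != CLUSTER_ID_LEN ||
            memcmp(buf, ClusterId, CLUSTER_ID_LEN) != 0)
        {
            close(fd);
            fprintf(stderr, "%s does not belong to this cluster\n", path);
            exit(1);        /* just kill this one backend */
        }
        return fd;
    }

    int
    main(void)
    {
        int fd;

        /* "initdb": generate the id once, randomly. */
        srandom((unsigned) (getpid() ^ time(NULL)));
        for (int i = 0; i < CLUSTER_ID_LEN; i++)
            ClusterId[i] = random() & 0xff;

        fd = open("demo_rel", O_RDWR | O_CREAT | O_TRUNC, 0600);
        stamp_file(fd);
        close(fd);

        fd = open_checked("demo_rel");  /* ids match, so this succeeds */
        close(fd);
        return 0;
    }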
A check like that would also ensure you don't accidentally restore the wrong
files from your cold backup, or get bitten by anything else anyone might try
involving swapping files around.
--
greg