[ pg-general removed from cc: list, as this is off topic for it ]
Marc Munro <marc@bloodnok.com> writes:
> Add hooks for begin_backup and end_backup at a data file level. Between
> the calls begin_backup(myfile) and end_backup(myfile), writes to myfile
> will be disabled allowing the file to be safely copied.
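If I'm reading that right, you're proposing something on the order of
this (a hypothetical sketch of the interface; no such hooks exist in
the backend today):

    #include <stdio.h>

    /* Hypothetical stubs only.  The idea, as I read it, is to stall
     * writes to one data file while an external tool copies it. */

    void begin_backup(const char *myfile)
    {
        /* stub: would mark the file so backends hold off writing it */
        printf("writes to %s disabled\n", myfile);
    }

    void end_backup(const char *myfile)
    {
        /* stub: would release the stall and let pending writes proceed */
        printf("writes to %s re-enabled\n", myfile);
    }

    int main(void)
    {
        begin_backup("base/16384/18392");
        /* ... external tool copies the file here ... */
        end_backup("base/16384/18392");
        return 0;
    }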
And where are the writes going to go instead? If you intend to just
hold the dirty pages in memory, the system will grind to a halt in no
time, i.e., as soon as it runs out of spare buffers. This strikes me as
only marginally better than "shut down the database while you copy the
files".
Perhaps more to the point, I'm not following how this helps achieve
point-in-time recovery. I suppose what you are after is to get an
instantaneous snapshot of the data files that could be used as a
starting point for replaying the WAL --- but AFAICS you'd need a
snapshot that's instantaneous across the *whole* database, i.e.,
all the data files are in the state corresponding to the chosen
starting point for the WAL. Locking and copying one file at a time
doesn't get you there.
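To see why not, here's a toy illustration (plain C, nothing to do with
the backend): a transaction that commits between the copy of one file
and the copy of the next shows up in the second copy but not the first,
even though each file was perfectly stable while it was being copied.

    #include <stdio.h>

    /* Toy model: two "data files" that a transaction always updates
     * together, and a file-at-a-time backup that copies them at
     * different moments. */

    static int file_a = 0, file_b = 0;

    static void commit_txn(int id)
    {
        /* each transaction touches both files */
        file_a = id;
        file_b = id;
    }

    int main(void)
    {
        commit_txn(1);
        int copy_a = file_a;   /* "lock and copy" file A: sees txn 1 */
        commit_txn(2);         /* txn 2 commits while the backup runs */
        int copy_b = file_b;   /* "lock and copy" file B: sees txn 2 */

        printf("backup has A=%d, B=%d: %s\n", copy_a, copy_b,
               copy_a == copy_b ? "consistent" : "inconsistent");
        return 0;
    }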
It seems to me that you can get the desired results without any
locking. Assume that you start archiving the WAL just after a
checkpoint record. Also, start copying data files to your backup
medium. Some not inconsiderable time later, you are done copying
data files. You continue copying off and archiving WAL entries.
You cannot say that the copied data files correspond to any particular
point in the WAL, or that they form a consistent set of data at all
--- but if you were to reload them and replay the WAL into them
starting from the checkpoint, then you *would* have a consistent set
of files once you reached the point in the WAL corresponding to the
end-time of the data file backup. You could stop there, or continue
WAL replay to any later point in time.
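In outline, a backup driver following that procedure might look like
this (hypothetical helper functions throughout; a sketch of the
procedure, not of any existing code):

    #include <stdio.h>

    typedef long WalPos;    /* a position in the WAL stream */

    /* Stand-ins for the real operations a backup driver would need. */
    static WalPos force_checkpoint(void)
    {
        /* write a checkpoint record; return its WAL position */
        return 100;
    }

    static void archive_wal_from(WalPos start)
    {
        printf("archiving all WAL records from position %ld on\n", start);
    }

    static WalPos copy_data_files(void)
    {
        /* fuzzy copy: no locking, writes continue underneath us;
         * return the WAL position current when the copy finished */
        printf("copying data files while the system runs\n");
        return 250;
    }

    int main(void)
    {
        WalPos checkpoint = force_checkpoint();
        archive_wal_from(checkpoint);
        WalPos backup_end = copy_data_files();

        /* To restore: reload the copied files and replay the archived
         * WAL from the checkpoint.  The files only become consistent
         * once replay passes backup_end; after that you may stop at
         * any later point you like. */
        printf("restore = copied files + WAL replay from %ld through "
               "at least %ld\n", checkpoint, backup_end);
        return 0;
    }

The key property is that nothing ever blocks writers: whatever
inconsistency the fuzzy copy picks up, replaying the WAL past the end
of the copy repairs it.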
regards, tom lane