Scott Whitney wrote:
> I'll be moving to PG9 (hopefully soon...probably 6 weeks).
>
> At that time, I'll be setting up hot standby with streaming replication to 2 sites. Off-siting my
> pg_dump files nightly will no longer be possible in the very near future, due to the size of the
> dumps.
>
> So...what I had planned to do was set up my production 9.x, set up my streaming standby (same
> network) 9.x, and set up my disaster off-site (here at the office), also 9.x. Each one will do
> pg_dump at some point (nightly, probably) to ensure that I've got actual backup files available at
> each location. Yes, they'll be possibly inconsistent, but only with one another, and that's a very
> minor issue for the dump files.
>
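
For the streaming part, the master needs a few settings in place before the base backup is
taken. A minimal sketch for 9.0 follows; the values, network and paths below are examples,
not your actual ones:

    # postgresql.conf on the master
    wal_level = hot_standby                 # 'archive' is enough if the standbys need not serve queries
    max_wal_senders = 3                     # one per streaming standby, plus a spare
    archive_mode = on
    archive_command = 'cp %p /archive/%f'   # example only; in practice, ship the file to the standby sites

    # pg_hba.conf on the master (example network and user)
    host  replication  repluser  192.168.0.0/24  md5
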
> Now, when I do the directory rsync/tar (in this case tar), I can bring it up pretty quickly on the
> standby at the data center. However, of course, I also need to set it up here at my office, which
> amounts to me driving back to the office, copying it over, and starting up PG (assuming I don't
> get interrupted 20 times walking in the door).
>
> So...something like this:
>
> SELECT pg_start_backup('some_label');
> tar off my pg directory
> SELECT pg_stop_backup();
>
> My question is this:
>
> Can I run pg_stop_backup() as soon as I've tgzed to an external hard drive, or do I have to wait
> until both slaves are actually online?
>
> I _think_ that I'm merely telling the source db server "I have my possibly inconsistent file
> system backup, you can go back to what you were doing," and that when the slave(s) come up, they
> replay the WAL files until they catch up, then use network communication to stay in sync.
>
> Is that a correct understanding of the process?
Roughly, yes.
You can run pg_stop_backup() as soon as your "tar" command is done.
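
Spelled out with working syntax, the sequence could look like this; the label and
paths are only examples:

    psql -c "SELECT pg_start_backup('nightly_base', true);"  # label is mandatory; 'true' asks for an immediate checkpoint
    tar -czf /backup/pgdata.tgz /var/lib/pgsql/data          # example data directory; tar warnings about
                                                             # files changing under it are expected and harmless
    psql -c "SELECT pg_stop_backup();"

With archiving enabled, pg_stop_backup() does not return until the last WAL segment
written during the backup has been archived, so if it seems to hang, check that your
archive_command is succeeding.
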
You will need all archived WAL files to be copied over to the standby
machines as soon as they are written; the standbys need them to
catch up with the master.
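
On each standby, a recovery.conf along these lines ties it together; the standby
restores archived segments first, then connects to the master and streams (host,
user and paths are placeholders):

    # recovery.conf in the standby's data directory
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=repluser'
    restore_command = 'cp /archive/%f %p'   # wherever the archived WAL lands on this machine
    trigger_file = '/tmp/promote_standby'   # create this file to promote the standby to master

Set hot_standby = on in each standby's postgresql.conf if you want to run queries there.
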
Yours,
Laurenz Albe