"Enio Schutt Junior" <enio@pmpf.rs.gov.br> writes:
> Here, where I work, the backups of the PostgreSQL databases are being done
> the following way: there is a daily copy of nearly all of the hard disk
> (excluding /tmp, /proc, /dev, and so on) on which the databases reside,
> and besides this there is also a script which makes a pg_dump of each of
> the databases on the server.
> This daily copy of the disk is made while the postmaster is active
> (without stopping the daemon), so the data from /usr/local/pgsql/data
> would not be 100% consistent, I guess.
> Supposing there was a failure and the whole thing needed to be restored,
> I think the recovery procedure would be the following:
> 1) Copy the data from the backup disk to a new disk
> 2) Once this is done, delete the postmaster.pid file and start the
> postmaster service
> 3) Drop all databases and recreate them from the pg_dump files
I would just initdb and then load the pg_dump files. An unsynchronized
copy of /usr/local/pgsql/data is just about completely untrustworthy.
You should use pg_dumpall to make a dump of user and group status;
pg_dump will not do that.
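The advice above might look something like this as a nightly script; the user name and file paths here are hypothetical, and the commands assume a local server you can connect to as a superuser:

```shell
# Nightly backup: pg_dumpall captures users and groups as well as
# every database, all in one plain-SQL file.
pg_dumpall -U postgres > /var/backups/pgsql/all.sql

# Recovery on a fresh machine: create a brand-new cluster with initdb,
# start the postmaster, then replay the dump -- rather than restoring
# an unsynchronized copy of /usr/local/pgsql/data.
initdb -D /usr/local/pgsql/data
pg_ctl -D /usr/local/pgsql/data start
psql -U postgres -d template1 -f /var/backups/pgsql/all.sql
```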
> I was also thinking about excluding /usr/local/pgsql/data from the
> backup routine, as the data is also in the files generated by
> pg_dump. The problem is that this directory contains not only the
> database data but also some config files, like postgresql.conf.
Yeah. Instead, exclude the directories below it ($PGDATA/base, etc).
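For a tar-based backup script, the exclusion could be sketched like this; the archive path is hypothetical, and the exact subdirectory names under $PGDATA vary by PostgreSQL version:

```shell
# Keep the config files (postgresql.conf, pg_hba.conf) that live
# directly in /usr/local/pgsql/data, but skip the raw table and
# WAL data, which the pg_dump files already cover.
tar czf /var/backups/system.tar.gz \
    --exclude=/usr/local/pgsql/data/base \
    --exclude=/usr/local/pgsql/data/pg_xlog \
    --exclude=/tmp --exclude=/proc --exclude=/dev \
    /
```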
regards, tom lane