Forgive me if I send this to the wrong address; this is my first post to this list.
I have looked at previous posts regarding backups. All that I have seen say to use pg_dump during a time of inactivity. However, on a business web server that is active 24/7, will pg_dump still work reliably? I have tested it on an isolated system, and the dump appears to reflect changes made to records while pg_dump is running. However, when running 'pg_dump -a -v testdb -f testdb.dump' against a database with three tables of about 2 MB each, I noticed that if I inserted data into the first table while the second was being dumped (I checked the order in the dump file), the dump of the first table did not reflect the change I had just made, although the database itself did. This suggests to me that pg_dump works on one table at a time and, once it is finished with a table, leaves it alone.
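
Roughly, the test looked like this (the table name and values are only placeholders from my scratch database, nothing meaningful):

    # terminal 1: start the data-only dump with verbose output,
    # so I can see which table it is working on
    pg_dump -a -v testdb -f testdb.dump

    # terminal 2: while pg_dump reports it is on the second table,
    # add a row to the first table
    psql -c "INSERT INTO table_one VALUES (1001, 'added mid-dump')" testdb

The new row is in the database afterwards, but not in testdb.dump.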
My goal with the backup is, of course, to get the most recent data with as little downtime as possible. Am I right in understanding that pg_dump is the best option for this? How well would an additional tar or file copy of the entire 'data' directory tree work? Would I be able to restore that easily enough, or are there other files the data depends on?
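
To be concrete, what I have in mind for the file copy is something along these lines (the paths are just examples of where things might live on our machine):

    # crude file-level copy of the whole data directory tree
    tar cf /backups/pgsql-data.tar /usr/local/pgsql/data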
Another concern of mine is that pg_dump may lose relational integrity if, in the previous example, someone adds a record to all three tables with such timing that the record in the first table does not make it into the dump file. Is this just a risk that has to be taken, or is there a solution I should pursue?
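
For example, if someone's request inserts related rows into the first and third tables while the dump is partway through the second (table names are again just placeholders):

    # table_one has already been written out, so this row is missed
    psql -c "INSERT INTO table_one VALUES (1002, 'parent row')" testdb
    # table_three has not been dumped yet, so this row is captured
    psql -c "INSERT INTO table_three VALUES (1002, 'child row')" testdb

then the dump file would hold the row from the third table but not the one it depends on in the first.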
For reference's sake, we are running PostgreSQL 6.4.1 with PHP 3.0.4 as a web interface.
--
Mark Joseph Holmes JoeH@LinuxMall.com
Linux Mall --> is The Linux Superstore
http://www.LinuxMall.com mailto:info@LinuxMall.com
Phone: 303-693-3321 Fax: 303-699-2793