Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
> afaik this should all work. You can run pg_dump and pipe the output to a
> tape drive or to gzip. You *know* that a real backup will take something
> like the size of the database (maybe a factor of two or so less) since
> the data has to go somewhere.
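
A concrete sketch of the piping Thomas describes; the database name
`mydb` and the tape device path are illustrative stand-ins, and the
commands assume pg_dump and psql are on your PATH:

```shell
# Stream the dump through gzip to a compressed file; the data is
# piped as it is produced, so nothing large accumulates in memory.
pg_dump mydb | gzip > mydb.sql.gz

# Or write the dump directly to a tape device (device name varies
# by platform; this one is just an example):
pg_dump mydb > /dev/nst0

# Restoring streams the other way:
gunzip -c mydb.sql.gz | psql mydb
```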
pg_dump in its default mode (i.e., dumping data as COPY commands) has no
problem with huge tables, because the COPY data is just written out in a
streaming fashion.
If you insist on using the "dump data as INSERT commands" option, then
huge tables cause a memory problem in pg_dump; but on the other hand you
are going to get pretty tired of waiting for such a script to reload,
too. I recommend just using the default behavior ...
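
For comparison, the two modes look like this on the command line. The
`--inserts` spelling is from later pg_dump releases (older versions used
`-d` for this); check `pg_dump --help` for the flag your version accepts:

```shell
# Default: table data is emitted as COPY blocks and streamed row by
# row, so dump-side memory use stays flat even for huge tables.
pg_dump mydb > mydb.sql

# INSERT mode: one INSERT statement per row. Much slower to restore,
# and (per the discussion above) can exhaust memory in pg_dump on
# very large tables.
pg_dump --inserts mydb > mydb_inserts.sql
```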
regards, tom lane