Martijn van Oosterhout <kleptog@cupid.suninternet.com> writes:
> Is there a better way? Here pg_dumping the DB takes over half an hour
> (mainly because pg_dump chews all available memory).
pg_dump shouldn't be a performance hog if you are using the default
COPY-based style of data export. I'd only expect memory problems
if you are using INSERT-based export (the -d or -D switch to pg_dump):
in that mode pg_dump grabs each table's contents with a single
all-at-once SELECT, so libpq has to buffer the entire result in memory.
For now, the answer is "don't do that" ... at least not on big tables.
This could be fixed in either of two ways:
1. recode pg_dump to use DECLARE CURSOR and FETCH to grab table contents
in reasonable-size chunks, instead of with an all-at-once SELECT (a rough
sketch of this appears below);
2. add an API to libpq that allows a SELECT result to be retrieved
on-the-fly rather than accumulated in libpq's memory (also sketched
below).
The second is more work but would be more widely useful.
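
For the curious, here's a rough sketch of what option 1 could look like
as standalone libpq code. The connection string, table name, and the
FETCH count of 1000 are placeholders for illustration, not what pg_dump
would actually use:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }
    /* cursors only work inside a transaction block */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE dump_cur CURSOR FOR SELECT * FROM bigtable"));

    for (;;)
    {
        int i, n;

        res = PQexec(conn, "FETCH 1000 FROM dump_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        n = PQntuples(res);
        if (n == 0)                 /* cursor exhausted */
        {
            PQclear(res);
            break;
        }
        for (i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));  /* dump first column */
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE dump_cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

The point is that memory consumption is bounded by the FETCH count
rather than by the table size, since each chunk's result is cleared
before the next one is requested.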
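
Option 2 is harder to show since no such API exists yet, but usage might
look something like this. PQsendQuery and PQgetResult are real; the
single-row-mode switch, the PGRES_SINGLE_TUPLE status, and emit_row are
inventions here, just to illustrate the shape of the idea:

/* hypothetical: ask libpq to hand rows back one at a time
 * instead of accumulating the whole result */
PQsendQuery(conn, "SELECT * FROM bigtable");
PQsetSingleRowMode(conn);           /* invented switch */
while ((res = PQgetResult(conn)) != NULL)
{
    if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)  /* invented status */
        emit_row(res);              /* caller-supplied row handler */
    PQclear(res);
}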
However, it's not been much of a priority, since INSERT-based data
export is so slow to reload that no sensible person uses it for
big tables anyway ;-)
regards, tom lane