I am dumping some larger (less-used) tables from 7.4.6 to facilitate an
upgrade to 8.0.
A pg_dump of one table ran for 28:53:29.50 and produced a 30 GB dump
before it aborted with:
pg_dump: dumpClasses(): SQL command failed
pg_dump: Error message from server: out of memory for query result
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
What causes this? This is on a SPARC box with 8 GB of real memory and
104 GB of virtual, so I am fairly confident that it did not run out of
memory. What would cause pg_dump to run for almost 29 hours and then
die? The table has not been accessed during that period.
To be fair, the table is not tiny: it consists of 114 segments in the
base directory, files 17935163 through 17935163.113.
The table contains a text field that could contain several hundred MB of
data, although always less than 2GB.
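For scale, here is my back-of-the-envelope arithmetic (the 1 GB
segment size is the standard build default, but the 300 MB example
row is just an assumption on my part):

```python
# Rough numbers for the failing dump.
# Assumption: default PostgreSQL build with 1 GB relation segments
# (RELSEG_SIZE), which matches the 17935163, 17935163.1, ... files.

GB = 1024 ** 3

# 114 segment files, ~1 GB each
segments = 114
table_size = segments * GB
print(f"on-disk table size: ~{table_size / GB:.0f} GB")

# pg_dump 7.4 reads the table through a cursor in batches of 100
# rows, and libpq buffers the entire batch in client memory.  With
# text fields of several hundred MB per row, one batch can be huge:
row_size = 300 * 1024 ** 2      # hypothetical 300 MB row
batch = 100 * row_size          # one FETCH 100 result
print(f"worst-case FETCH 100 buffer: ~{batch / GB:.0f} GB")

# A 32-bit pg_dump binary can address at most 4 GB no matter how
# much RAM or swap the box has:
print("exceeds 32-bit address space:", batch > 4 * GB)
```

If that reasoning is right, the "out of memory for query result" is
libpq failing to allocate the batch on the client side, not the
server running out of memory.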
How can I dump this table?
Thanks,
Marty