Tom Lane wrote:
> Ryan Bradetich <ryan_bradetich@hp.com> writes:
> > Tom Lane wrote:
> >> Ryan Bradetich <ryan_bradetich@hp.com> writes:
> >>>> -- dumping out the contents of Table 'medusa'
> >>>> FATAL 1: Memory exhausted in AllocSetAlloc()
> >>>> PQendcopy: resetting connection
> >>>> SQL query to dump the contents of Table 'medusa' did not execute
> >>>> correctly. After we read all the table contents from the backend,
> >>>> PQendcopy() failed. Explanation from backend: 'FATAL 1: Memory
> >>>> exhausted in AllocSetAlloc()
> >>>> '.
> >>>> The query was: 'COPY "medusa" WITH OIDS TO stdout;
>
> Now that I look at it, it appears that COPY WITH OIDS leaks the memory
> used for the string representation of the OIDs. That'd probably cost
> you 32 bytes or so of backend memory per row --- which you'd get back
> at the end of the COPY, but small comfort if you ran out before that.
>
> Is the table large enough to make that a plausible explanation?
>
> regards, tom lane
Tom,
This table is very large, so that could be the problem.
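To picture the pattern Tom is describing, here is a minimal C sketch -- not
the actual backend code -- of a per-row allocation that is never freed until
the operation finishes. The oid_to_string() helper and the 32-bytes-per-row
figure are stand-ins for the backend's OID output routine and Tom's estimate:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the backend's OID-to-string conversion;
     * in the real code the allocation comes from AllocSetAlloc(). */
    static char *oid_to_string(unsigned int oid)
    {
        char *buf = malloc(32);           /* ~32 bytes per row, per Tom's estimate */
        snprintf(buf, 32, "%u", oid);
        return buf;
    }

    int main(void)
    {
        size_t leaked = 0;

        for (unsigned int row = 0; row < 6986499; row++)
        {
            char *s = oid_to_string(row); /* string is sent to the client ...  */
            leaked += 32;                 /* ... but never freed per row       */
            (void) s;                     /* the missing free(s) is the leak   */
        }
        printf("leaked ~%zu MB before end of COPY\n", leaked / (1024 * 1024));
        return 0;
    }

The memory only comes back when the whole allocation context is reset at the
end of the COPY, which matches Tom's "you'd get it back at the end" remark.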
Here are the startup parameters I am using (in case it matters):

nohup su - postgres -c "/opt/pgsql/bin/postmaster -B 1024 -S -o \"-F\" -o
\"-o /home/postgres/nohup.out\" -i -p 5432 -D/data08"
procman=# select count(*) from medusa;
  count
---------
 6986499
(1 row)
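If the ~32 bytes/row estimate is right, that works out to roughly:

    6,986,499 rows x 32 bytes/row ≈ 224 MB

of backend memory accumulated before the end of the COPY, which would explain
the AllocSetAlloc() failure.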
FYI: that was the problem. Good job spotting it, Tom. I just successfully
completed a backup without using the -o option to pg_dumpall.
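For the archives, the only difference is the -o (dump OIDs) flag; the output
file name here is just an example:

    pg_dumpall -o > db.out    # dumps OIDs via COPY ... WITH OIDS (hits the leak)
    pg_dumpall > db.out       # plain COPY, completes without the leak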
Thanks again for the help!
- Ryan
--
Ryan Bradetich
AIT Operations
Unix Platform Team