Thread: pg_dump dying (and VACUUM ANALYZE woes)...

pg_dump dying (and VACUUM ANALYZE woes)...

From
Steve Wampler
Date:
System: postgresql-7.0.3-1 on Linux (RH6.2 with updated libs and 2.4.2 kernel).
        dual-P3 with 1GB RAM (128MB shared memory area).  postgres configured
        with "-o -F -B 2048 -N 64 -S -i" via pg_ctl.

I'm getting the following error from pg_dump when trying to
dump a particular database:
==============================================================
-> pg_dump logdb >logdb.dump
pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'messages' did not execute correctly.  After we read all the table contents
from the backend, PQendcopy() failed.  Explanation from backend: 'The Data Base System is starting up
'.
The query was: 'COPY "messages" TO stdout;
'.
==============================================================
About 25MB has been dumped by the time this error occurs.  (There's 15GB
free on the disk partition.)
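
One quick check (just a sketch) is whether the same failure shows up when the
table is read directly with psql, independent of pg_dump:
======================================
-> psql -c 'COPY "messages" TO stdout;' logdb >/dev/null
======================================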

I had been able to successfully dump 3 other databases (one of which
produced a 57MB dump file), but this particular database always produces
this message (and apparently kills all the backends? - none are left
running afterwards).

This particular database has 5 tables in it averaging about 200,000
rows each.  There's not much complexity - the table "messages" is
just a time-stamped set of logging messages (4 columns).  The other
tables are archives of previous logging information.

Hmmm, I just discovered that I cannot VACUUM ANALYZE this database either:
======================================================================
logdb=# vacuum analyze;
FATAL 1:  Memory exhausted in AllocSetAlloc()
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
logdb=# \q
======================================================================
The "Memory exhausted" must be a clue, but given the system I'm having
trouble seeing how that could be happening (especially since a larger
DB has already been dumped...)
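
One way to narrow it down (a sketch) is to vacuum the tables one at a time and
see which of the five triggers the memory failure:
======================================
logdb=# VACUUM VERBOSE ANALYZE messages;
======================================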

Here are the current limits:
======================================
->ulimit -a
cpu time (seconds)         unlimited
file size (blocks)         unlimited
data seg size (kbytes)     unlimited
stack size (kbytes)        8192
core file size (blocks)    1000000
resident set size (kbytes) unlimited
processes                  32766
file descriptors           1024
locked-in-memory size (kb) unlimited
virtual memory size (kb)   unlimited
====================================

Does anyone (a) know what's wrong and (b) have any suggestions on how to
fix it?

Thanks!
--
Steve Wampler-  SOLIS Project, National Solar Observatory
swampler@noao.edu

LOs and pg_dump, restore, vacuum for 7.1

From
"David Wall"
Date:
Does 7.1 "natively" handle large objects in dumps, restores and vaccums?  In
7.0.3 there was a contrib for LOs.

Thanks,
David


Re: LOs and pg_dump, restore, vacuum for 7.1

From
Peter Eisentraut
Date:
David Wall writes:

> Does 7.1 "natively" handle large objects in dumps, restores and vacuums?  In
> 7.0.3 there was a contrib for LOs.

Yes.

--
Peter Eisentraut      peter_e@gmx.net       http://yi.org/peter-e/
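
For what it's worth, 7.1 dumps blobs through pg_dump's custom archive format;
a minimal sketch (the database name is a placeholder, and the target database
must already exist before the restore):
======================================
-> pg_dump -Fc -b mydb > mydb.dump
-> pg_restore -d mydb mydb.dump
======================================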


Re: pg_dump dying (and VACUUM ANALYZE woes)...

From
Tom Lane
Date:
Steve Wampler <swampler@noao.edu> writes:
> I'm getting the following error from pg_dump when trying to
> dump a particular database:
> ==============================================================
> -> pg_dump logdb >logdb.dump
> pqWait() -- connection not open
> PQendcopy: resetting connection
> SQL query to dump the contents of Table 'messages' did not execute correctly.  After we read all the table contents
> from the backend, PQendcopy() failed.  Explanation from backend: 'The Data Base System is starting up
> '.
> The query was: 'COPY "messages" TO stdout;
> '.
> ==============================================================
> About 25MB has been dumped when this error occurs.  (There's 15GB
> free on the disk partition.)

Looks like you've got corrupted data in that table (clobbered length
word in some variable-length field, most likely).

            regards, tom lane
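
Following up on that diagnosis, one rough way to hunt for a damaged tuple (a
sketch; the slice size is arbitrary) is to read the table in slices, note which
slice crashes the backend, then shrink the LIMIT within that slice:
======================================
logdb=# SELECT count(*) FROM messages;
logdb=# SELECT * FROM messages LIMIT 50000 OFFSET 0;
logdb=# SELECT * FROM messages LIMIT 50000 OFFSET 50000;
logdb=# SELECT * FROM messages LIMIT 50000 OFFSET 100000;
======================================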