Thread: Out of memory on pg_dump

From: "Chris Hopkins"
Date:

Hi all –

We are using Postgres 8.2.3 as our Confluence backing store, and when trying to back up the database at night we see this in the logs:

<snip>

pg_amop_opc_strat_index: 1024 total in 1 blocks; 216 free (0 chunks); 808 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
MdSmgr: 8192 total in 1 blocks; 6376 free (0 chunks); 1816 used
LOCALLOCK hash: 24576 total in 2 blocks; 14112 free (4 chunks); 10464 used
Timezones: 49432 total in 2 blocks; 5968 free (0 chunks); 43464 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
2009-08-19 22:35:42 ERROR:  out of memory
2009-08-19 22:35:42 DETAIL:  Failed on request of size 536870912.
2009-08-19 22:35:42 STATEMENT:  COPY public.attachmentdata (attachmentdataid, attversion, data, attachmentid) TO stdout;

 

Is there an easy way to give pg_dump more memory? I don’t see a command-line option for it, and I’m not a Postgres expert by any means. This is the script we use to back up our DB (backup.cmd):

 

@ECHO OFF

SET BACKUPS_DIR=C:\backups
SET PGPASSWORD=*******

REM Set the backup file name
SET prefix=confluence_dbbackup_
SET basename=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.%time:~-11,2%.%time:~-8,2%.%time:~-5,2%
SET confluence_backup_path=%BACKUPS_DIR%\%basename%.dump

pg_dump --username=confluence --file="%confluence_backup_path%" --blobs --format c confluence

 

Thanks,
Chris

THIS MESSAGE IS INTENDED FOR THE USE OF THE PERSON TO WHOM IT IS ADDRESSED. IT MAY CONTAIN INFORMATION THAT IS PRIVILEGED, CONFIDENTIAL AND EXEMPT FROM DISCLOSURE UNDER APPLICABLE LAW. If you are not the intended recipient, your use of this message for any purpose is strictly prohibited. If you have received this communication in error, please delete the message and notify the sender so that we may correct our records.
 
 
 

Re: Out of memory on pg_dump

From: Tom Lane
Date:

"Chris Hopkins" <chopkins@cra.com> writes:
> 2009-08-19 22:35:42 ERROR:  out of memory
> 2009-08-19 22:35:42 DETAIL:  Failed on request of size 536870912.

> Is there an easy way to give pg_dump more memory?

That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fields in this table, I would bet on
this really being a corrupted-data situation --- that is, there's some
datum in the table whose length word has been corrupted into a very
large value.  You can try to isolate and delete the corrupted row(s).

            regards, tom lane
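Tom's advice (isolate the corrupted rows) can be mechanized by reading the table in bounded key ranges until one of them fails. The sketch below is Python with a simulated probe; against the real database the probe would instead run a query such as `SELECT length(data) FROM public.attachmentdata WHERE attachmentdataid BETWEEN lo AND hi`, and a range is bad if that query raises the out-of-memory error. The id values and chunk size here are invented for illustration.

```python
def probe(lo, hi, corrupt_ids):
    """Stand-in for a range query; 'fails' (returns False) if any
    corrupt id falls inside [lo, hi]."""
    return not any(lo <= c <= hi for c in corrupt_ids)

def find_bad_ranges(min_id, max_id, chunk, corrupt_ids):
    """Scan [min_id, max_id] in fixed-size chunks; return the ranges
    whose probe failed, i.e. the ones that contain corrupted rows."""
    bad = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + chunk - 1, max_id)
        if not probe(lo, hi, corrupt_ids):
            bad.append((lo, hi))
        lo = hi + 1
    return bad

# With a (hypothetical) corrupt row at id 437, only one chunk fails:
print(find_bad_ranges(1, 1000, 100, corrupt_ids={437}))  # → [(401, 500)]
```

Each failing chunk can then be split further until single rows are identified.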

Re: Out of memory on pg_dump

From: "Chris Hopkins"
Date:

Thanks Tom. Next question (and sorry if this is an ignorant one)...how
would I go about doing that?

  - Chris

Re: Out of memory on pg_dump

From: Tom Lane
Date:

"Chris Hopkins" <chopkins@cra.com> writes:
> Thanks Tom. Next question (and sorry if this is an ignorant one)...how
> would I go about doing that?

See the archives for previous discussions of corrupt-data recovery.
Basically it's divide-and-conquer to find the corrupt rows.

            regards, tom lane
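The divide-and-conquer Tom refers to can be sketched as a bisection: once a failing key range is known, halve it repeatedly until a single row remains. The probe below only simulates the failing range query; the table and column names in the comments come from the log earlier in the thread, and the specific ids are invented.

```python
def probe(lo, hi, corrupt_id):
    """Stand-in for running a range SELECT (or COPY);
    False means the query errored on a corrupted datum."""
    return not (lo <= corrupt_id <= hi)

def bisect_bad_row(lo, hi, corrupt_id):
    """Narrow a known-failing range [lo, hi] down to the single failing id."""
    while lo < hi:
        mid = (lo + hi) // 2
        if not probe(lo, mid, corrupt_id):  # failure is in the lower half
            hi = mid
        else:                               # otherwise it is in the upper half
            lo = mid + 1
    return lo

print(bisect_bad_row(401, 500, corrupt_id=437))  # → 437
# The real row would then be removed with something like:
#   DELETE FROM public.attachmentdata WHERE attachmentdataid = 437;
```

Note there may be more than one corrupted row, so after deleting one it is worth re-running the probe (and ultimately pg_dump) to confirm the whole table reads cleanly.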

Re: Out of memory on pg_dump

From: Martin Gainty
Date:

Chris -

Did you look at Zdenek Kotala's pgcheck?
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgcheck/pgcheck/src/

Download the three source files and run the makefile.

Does anyone know of a PG integrity checker?

Martin Gainty


