Thread: pg_restore: out of memory

pg_restore: out of memory

From: Franck Routier
Date:
Hi,

I am trying to restore a table out of a dump, and I get an 'out of
memory' error.

The table I want to restore is 5 GB in size.

Here is the exact message:

admaxg@goules:/home/backup-sas$ pg_restore -F c -a -d axabas -t cabmnt
axabas.dmp
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 5492; 0 43701 TABLE
DATA cabmnt axabas
pg_restore: [archiver (db)] COPY failed: ERROR:  out of memory
DETAIL:  Failed on request of size 40.
CONTEXT:  COPY cabmnt, line 9038995: "FHSJ    CPTGEN    RE    200806_004    6.842725E7    6.842725E7    \N    7321100    1101    \N    00016    \N    \N    \N    \N    \N    \N    -1278.620..."
WARNING: errors ignored on restore: 1

Looking at the OS level, the process is effectively eating all memory
(including swap), that is, around 24 GB...

So, here is my question: is pg_restore supposed to eat all memory, and
is there something I can do to prevent it?

Thanks,

Franck



Re: pg_restore: out of memory

From: Craig Ringer
Date:
Franck Routier wrote:
> Hi,
>
> I am trying to restore a table out of a dump, and I get an 'out of
> memory' error.

- Operating system?
- PostgreSQL version?
- PostgreSQL configuration - work_mem, shared_buffers, etc?

> So, here is my question: is pg_restore supposed to eat all memory,

No, but PostgreSQL's backends will if you tell them there's more memory
available than there really is.

> and is there something I can do to prevent it?

Adjust your PostgreSQL configuration to ensure that shared_buffers,
work_mem, etc. are appropriate for the system, and don't tell Pg to use
more memory than is actually available.

pg_restore isn't using up your memory. The PostgreSQL backend is.

--
Craig Ringer

Re: pg_restore: out of memory

From: "sathiya psql"
Date:


On Thu, Dec 4, 2008 at 7:38 PM, Franck Routier <franck.routier@axege.com> wrote:
> Hi,
>
> I am trying to restore a table out of a dump, and I get an 'out of
> memory' error.
>
> The table I want to restore is 5 GB in size.
>
> Here is the exact message:
>
> admaxg@goules:/home/backup-sas$ pg_restore -F c -a -d axabas -t cabmnt
> axabas.dmp
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 5492; 0 43701 TABLE
> DATA cabmnt axabas
> pg_restore: [archiver (db)] COPY failed: ERROR:  out of memory
> DETAIL:  Failed on request of size 40.
> CONTEXT:  COPY cabmnt, line 9038995: "FHSJ    CPTGEN    RE    200806_004    6.842725E7    6.842725E7    \N    7321100    1101    \N    00016    \N    \N    \N    \N    \N    \N    -1278.620..."
> WARNING: errors ignored on restore: 1
>
> Looking at the OS level, the process is effectively eating all memory
> (including swap), that is, around 24 GB...
How are you confirming that it is eating up all memory?

Please post those outputs.

> So, here is my question: is pg_restore supposed to eat all memory, and
> is there something I can do to prevent it?
>
> Thanks,
>
> Franck


