Re: Restoring large tables with COPY - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Restoring large tables with COPY
Msg-id: 17277.1008086130@sss.pgh.pa.us
In response to: Restoring large tables with COPY (Marko Kreen <marko@l-t.ee>)
Responses: Re: Restoring large tables with COPY (Marko Kreen <marko@l-t.ee>)
           Re: Restoring large tables with COPY (Marko Kreen <marko@l-t.ee>)
List: pgsql-hackers
Marko Kreen <marko@l-t.ee> writes:
> Maybe I am missing something obvious, but I am unable to load
> larger tables (~300k rows) with the COPY command that pg_dump
> produces by default.

I'd like to find out what the problem is, rather than work around it
with such an ugly hack.
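
For context, pg_dump's default output loads each table through one large
COPY block of tab-separated rows terminated by "\."; a minimal sketch,
with a made-up table name and data:

    COPY bigtable FROM stdin;
    1	first row
    2	second row
    \.

Since each SQL command runs as a single transaction, all ~300k rows are
loaded in one transaction.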

> 1) Too few WAL files.
>    - well, increase the wal_files (e.g. to 32),

What PG version are you running?  7.1.3 or later should not have a
problem with WAL file growth.
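
For reference, on older 7.1-era servers the workaround Marko describes
is a postgresql.conf setting; a sketch, where 32 is just the value from
his report:

    # postgresql.conf (7.1-era setting, removed later):
    # number of additional WAL segments created in advance
    wal_files = 32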

> 2) Machine runs out of swap; PostgreSQL seems to keep the whole TX
>    in memory.

That should not happen either.  Could we see the full schema of the
table you are having trouble with?
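
One way to capture that is a schema-only dump of just the one table;
the table and database names below are placeholders:

    pg_dump -s -t trouble_table mydb > trouble_table.sql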

> In short: during pg_restore the resource requirements are an
> order of magnitude higher than during pg_dump,

We found some client-side memory leaks in pg_restore recently; is that
what you're talking about?
        regards, tom lane

