Thread: Out of memory error on pg_restore
I am running PostgreSQL 8.1.3 on Windows 2003 Server.

I am trying to transfer the data from a table in db1 on one machine to a
table in db2 on a different machine. The table size is about 22Gb
(about 280 million rows).

I was trying to do it by generating a backup file of the table in db1
and restoring it to db2. First, I created a plain 22Gb SQL dump and
tried inserting it via psql. After some time it failed with the
following error:

psql: ERROR: out of memory
DETAIL: Failed on request of size 32.

I then tried creating a compressed backup file (about 800Mb) and
restoring it using pg_restore, but I got the following error:

pg_restore: ERROR: out of memory
DETAIL: Failed on request of size 32.
CONTEXT: COPY lane_data, line 17345022: "<line of data goes here>"
pg_restore: [archiver (db)] error returned by PQendcopy
pg_restore: *** aborted because of error

So it seems like I am getting the same error in both cases. What is the
cause? How could I adjust the restore process or database parameters to
handle this transfer? Is there a better way to transfer this data?

Thanks.
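(For reference, the two approaches described above correspond roughly to
the following commands -- a sketch only, since the original post doesn't
show them; the table and file names are assumptions:)

    # plain SQL dump of the table, restored via psql
    pg_dump -t lane_data -f lane_data.sql db1
    psql -d db2 -f lane_data.sql

    # compressed (custom-format) dump, restored via pg_restore
    pg_dump -Fc -t lane_data -f lane_data.dump db1
    pg_restore -d db2 lane_data.dump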
Nik wrote:
> I am running PostgreSQL 8.1.3 on Windows 2003 Server.
>
> I am trying to transfer the data from a table in db1 on one machine to a
> table in db2 on a different machine. The table size is about 22Gb
> (about 280 million rows).
>
> I was trying to do it by generating a backup file of the table in db1
> and restoring it to db2. First, I created a plain 22Gb SQL dump and
> tried inserting it via psql. After some time it failed with the
> following error:
>
> psql: ERROR: out of memory
> DETAIL: Failed on request of size 32.

And were you out of memory or not? If nothing else, task-manager should
show how much memory everything is using.

--
  Richard Huxton
  Archonet Ltd
"Nik" <XLPizza@gmail.com> writes: > pg_restore: ERROR: out of memory > DETAIL: Failed on request of size 32. > CONTEXT: COPY lane_data, line 17345022: "<line of data goes here>" A COPY command by itself shouldn't eat memory. I'm wondering if the table being copied into has any AFTER triggers on it (eg for foreign key checks), as each pending trigger event uses memory and so a copy of a lot of rows could run out. pg_dump scripts ordinarily load data before creating triggers or foreign keys in order to avoid this problem. Perhaps you were trying a data-only restore? If so, best answer is "don't do that". A plain combined schema+data dump should work. regards, tom lane
Another option is to set larger values under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management,
but restoring a large amount of data on Windows takes a very long time
in any case.

2006/3/8, Tom Lane <tgl@sss.pgh.pa.us>:
> "Nik" <XLPizza@gmail.com> writes:
> > pg_restore: ERROR: out of memory
> > DETAIL: Failed on request of size 32.
> > CONTEXT: COPY lane_data, line 17345022: "<line of data goes here>"
>
> A COPY command by itself shouldn't eat memory. I'm wondering if the
> table being copied into has any AFTER triggers on it (eg for foreign key
> checks), as each pending trigger event uses memory and so a copy of a
> lot of rows could run out.
>
> pg_dump scripts ordinarily load data before creating triggers or foreign
> keys in order to avoid this problem. Perhaps you were trying a
> data-only restore? If so, best answer is "don't do that". A plain
> combined schema+data dump should work.
>
>			regards, tom lane
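(The post doesn't say which value under that key to raise; PagedPoolSize
is one commonly tuned entry there, so treating that as an assumption, a
sketch would be the following -- change registry settings at your own
risk and reboot afterwards:)

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
        /v PagedPoolSize /t REG_DWORD /d 0xFFFFFFFF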
Yes, I was indeed out of memory. That is the problem: the postgres.exe
process corresponding to the pg_restore continuously consumes more and
more memory until it runs out and fails with the mentioned error. Since
I already have 4Gb of RAM, throwing more hardware at it is not a
feasible solution, so I was wondering what else I can do to complete
this process without running out of memory.
Tom was exactly right. I was trying to restore the dump file into an
already created table structure that had three foreign key constraints.
I had removed the primary key constraint to speed up the load, but was
not aware of the memory usage of the foreign keys.

I dropped the table and ran the pg_restore again; it is running now and
behaving nicely at a constant 13Mb of memory. It will probably take some
time, but I think it is working fine.

Thanks for the help.
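(For anyone who does need to load into a pre-existing schema rather than
dropping the table, the equivalent manual fix is to drop the foreign
keys before the load and recreate them afterwards. A sketch with
hypothetical constraint, column, and table names:)

    -- before the load: drop the FK constraints (names are assumptions)
    ALTER TABLE lane_data DROP CONSTRAINT lane_data_lane_id_fkey;

    -- load the data here, e.g. pg_restore -a or COPY ... FROM ...

    -- after the load: recreate the constraint; the FK check runs as
    -- one pass over the table instead of queuing an AFTER-trigger
    -- event per row in memory
    ALTER TABLE lane_data ADD CONSTRAINT lane_data_lane_id_fkey
        FOREIGN KEY (lane_id) REFERENCES lanes(id);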
8 Mar 2006 07:31:19 -0800, Nik <XLPizza@gmail.com>:
> [...]
> psql: ERROR: out of memory
> DETAIL: Failed on request of size 32.

I also get this kind of error (out of memory) during the restoration of
objects on my database. I use an 8.1.2 pg_dump on a 7.1.1 PostgreSQL
server. The size of the dump is approx 20GB. I restore it using the
8.1.2 pg_restore on an 8.1.2 PostgreSQL server, and I finally receive an
"out of memory" error. I don't think there is a trigger on
pg_largeobject. What could be the cause of this one?

I will try with the 8.1.3 release, but I don't see a related fix in the
release notes.

Thanks.

--
Guillaume.