Thread: COPY of a large table raises out of memory exception
We have a large table (about 9,000,000 rows, total size about 2.8 GB) which is exported to a binary file. PostgreSQL 8.2 is running on a Windows 2003 Small Business Server with 2 GB of RAM. When we run the "COPY tablename FROM filepath" command, memory usage climbs to 1.8 GB and PostgreSQL raises an "out of memory" exception. If we copy a small part of the table (e.g. 1,000,000 rows), everything works fine.

As far as I understand, PostgreSQL is trying to load all the rows into RAM before writing them to the database. I tried running PostgreSQL with several different configuration parameters, but the result is the same.

Did anybody face a similar problem?

Kind regards,
A. Ozen Akyurek
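A minimal sketch of the commands involved, assuming the export can be redone in the default text format rather than binary; the table name and Windows path here are placeholders, not from the original post:

    -- export in the default text format (one row per line)
    COPY tablename TO 'C:/export/tablename.txt';

    -- load it back into the target table
    COPY tablename FROM 'C:/export/tablename.txt';

A text-format file can also be split on line boundaries first (for example with a file-splitting tool) and each piece loaded separately, which is not possible with a binary-format file because of its header and per-field length structure.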
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:
> We have a large table (about 9,000,000 rows, total size about 2.8 GB)
> which is exported to a binary file.

How was it exported? With "COPY tablename TO 'filename' WITH BINARY"?

"The BINARY key word causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the normal text mode, but a binary-format file is less portable across machine architectures and PostgreSQL versions."
http://www.postgresql.org/docs/8.2/static/sql-copy.html

Maybe you are bitten by this "less portable".

> When we run the "COPY tablename FROM filepath" command, (...) and
> PostgreSQL raises an "out of memory" exception.

I'd try to use pg_dump/pg_restore in custom format, like this:

    pg_dump -a -Fc -Z1 -f [filename] -t [tablename] [olddatabasename]
    pg_restore -1 -a -d [newdatabasename] [filename]

Regards,
Tometzky
--
...although Eating Honey was a very good thing to do, there was a moment just before you began to eat it which was better than when you were...
Winnie the Pooh
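Filled in with placeholder names (mydb, newdb, bigtable, and bigtable.dump are assumptions, not from the thread), the suggested round trip would look roughly like:

    pg_dump -a -Fc -Z1 -f bigtable.dump -t bigtable mydb
    pg_restore -1 -a -d newdb bigtable.dump

Here -a transfers data only, -Fc selects the custom archive format, -Z1 applies light compression, and -1 runs the restore in a single transaction.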