Thread: Restoring a pg_dump fails with "Memory exhausted in AllocSetAlloc"
Hello!

While upgrading from PostgreSQL 6.5.3 to 7.1.0, I restored a dump file by

    psql dbname <dbname.dump

After some time I get the following message:

    ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(56)
    PQendcopy: resetting connection

The total number of lines is about 10 million!

Can anybody help me or explain what to do?
Maybe I have to convert to INSERT statements?

I also had a look at the dump with the editor "vi": I do not see
anything strange on that line. Is there a limitation on COPY statements?

-- 
Mit freundlichen Gruessen / With best regards
Reiner Dassing
On Thu, Apr 26, 2001 at 05:35:02PM +0200, Reiner Dassing wrote:
> While upgrading from PostgreSQL 6.5.3 to 7.1.0, I restored a dump file by
> psql dbname <dbname.dump
>
> After some time I get the following message:
> ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(56)
> PQendcopy: resetting connection
>
> The total number of lines is about 10 million!
>
> Can anybody help me or explain what to do?
> Maybe I have to convert to INSERT statements?

It's probably easier to cut the command into several smaller COPY
commands. All it takes is a small
awk/perl/python/your_favourite_scripting_language_here thing.

-- 
Ragnar Kjørstad
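[A minimal sketch of Ragnar's suggestion, assuming the dump layout shown later in this thread (tab-separated rows, each COPY block terminated by "\."). The chunk size and function name are illustrative, not anything from the original posts:]

```python
# Hypothetical sketch: split one huge COPY block into several smaller
# ones, so no single COPY statement carries millions of rows.
# Assumes lines are already stripped of trailing newlines, and that no
# data row itself starts with "COPY ".

def split_copy(lines, chunk_size):
    """Re-emit a dump, restarting each COPY block every chunk_size rows."""
    out = []
    copy_header = None   # the "COPY ... FROM stdin;" line currently open
    row_count = 0
    for line in lines:
        if line.startswith("COPY "):
            copy_header = line
            row_count = 0
            out.append(line)
        elif line == "\\.":              # end-of-data marker closes the block
            out.append(line)
            copy_header = None
        elif copy_header is not None:
            if row_count == chunk_size:  # close this block, open a fresh one
                out.append("\\.")
                out.append(copy_header)
                row_count = 0
            out.append(line)
            row_count += 1
        else:
            out.append(line)             # schema lines pass through unchanged
    return out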
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> While upgrading from PostgreSQL 6.5.3 to 7.1.0, I restored a dump file by
> psql dbname <dbname.dump
> After some time I get the following message:
> ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(56)
> PQendcopy: resetting connection

That shouldn't happen. Could we see the dump file? (Just the schema
commands, not the data itself...)

			regards, tom lane
Tom Lane wrote:
> That shouldn't happen. Could we see the dump file? (Just the schema
> commands, not the data itself...)
>
> 			regards, tom lane

Dear Tom!

Thank you for your help. Certainly, I will give this short schema:

\connect - postgres
...
CREATE TABLE "wetter" (
        "sensor_id" int4 NOT NULL,
        "epoche" datetime NOT NULL,
        "wert" float4 NOT NULL);
REVOKE ALL on "wetter" from PUBLIC;
GRANT ALL on "wetter" to "postgres";
COPY "wetter" FROM stdin;
8	Thu Mar 02 02:34:02 2000 GMT	-0.4
10	Thu Mar 02 02:34:02 2000 GMT	95
3	Thu Mar 02 02:34:02 2000 GMT	193.6
11	Thu Mar 02 02:34:02 2000 GMT	97.7
2	Thu Mar 02 02:34:02 2000 GMT	251.5
13	Thu Mar 02 02:34:02 2000 GMT	940.9
1	Thu Mar 02 02:34:02 2000 GMT	283.2
6	Thu Mar 02 02:34:02 2000 GMT	0.8
....
\.

I have written a script to convert these COPY statements to INSERTs,
and this worked! But it took a long time, running overnight.

-- 
Mit freundlichen Gruessen / With best regards
Reiner Dassing
---------------------------------------------------
| Bundesamt fuer Kartographie und Geodaesie
| Fundamentalstation Wettzell
| Sackenrieder Str. 25
| D-93444 Koetzting
|
| Tel:      (+49) (0)9941/603-112
| Fax:      (+49) (0)9941/603-222
| Zentrale:  (+49) (0)9941/603-0
| E-mail:   dassing@wettzell.ifag.de
| URL:      http://www.ifag.de
---------------------------------------------------
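[A hedged sketch of the kind of converter Reiner describes, based only on the dump format shown above (tab-separated fields, "\." terminator). This is not his actual script; NULL markers ("\N") and COPY escape sequences are deliberately ignored, and quoting every field as a string relies on the server coercing the literals:]

```python
# Hypothetical COPY-to-INSERT converter for a dump like the one above.
# Each data row of a COPY block becomes one INSERT statement; schema
# lines would be passed through unchanged by the caller.

def copy_to_inserts(lines):
    """Translate the data rows of COPY blocks into INSERT statements."""
    inserts = []
    in_copy = False
    table = None
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("COPY "):
            in_copy = True
            table = line.split()[1]      # e.g. '"wetter"' (still quoted)
            continue
        if in_copy:
            if line == "\\.":            # end-of-data marker
                in_copy = False
                continue
            fields = line.split("\t")
            quoted = ", ".join("'%s'" % f.replace("'", "''") for f in fields)
            inserts.append("INSERT INTO %s VALUES (%s);" % (table, quoted))
    return inserts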
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> Tom Lane wrote:
>> That shouldn't happen. Could we see the dump file? (Just the schema
>> commands, not the data itself...)

> Thank you for your help. Certainly, I will give this short schema:

Hm. I replicated your data out to a million lines and then loaded
the script. No memory leak visible at all.

Perhaps there is some configuration or platform issue here? What were
your configure options? Can anyone else reproduce a memory leak during
COPY?

			regards, tom lane
Will vacuumdb help?

On Fri, 27 Apr 2001, Tom Lane wrote:
> Reiner Dassing <dassing@wettzell.ifag.de> writes:
> > Tom Lane wrote:
> >> That shouldn't happen. Could we see the dump file? (Just the schema
> >> commands, not the data itself...)
>
> > Thank you for your help. Certainly, I will give this short schema:
>
> Hm. I replicated your data out to a million lines and then loaded
> the script. No memory leak visible at all.
>
> Perhaps there is some configuration or platform issue here? What were
> your configure options? Can anyone else reproduce a memory leak during
> COPY?
>
> 			regards, tom lane
Hello Tom!

> Perhaps there is some configuration or platform issue here? What were
> your configure options?

The options were as follows:

./configure --with-pgport=5432 \
            --with-x \
            --with-perl \
            --with-python \
            --with-tcl \
            --enable-odbc \
            --enable-syslog \
            --with-CC=cc --without-CXX \
            --prefix=/Postgres/pgsql \
            --with-tclconfig=/usr/local/lib \
            --with-include=/usr/local/include

> Can anyone else reproduce a memory leak during COPY?
>
> 			regards, tom lane

I would like to raise the following question:

There is the table pga_layout. What happens if this table is missing,
empty, or its contents are not correct with respect to COPY ...?

-- 
Mit freundlichen Gruessen / With best regards
Reiner Dassing
Reiner Dassing <dassing@wettzell.ifag.de> writes:
>> Perhaps there is some configuration or platform issue here? What were
>> your configure options?

> The options were as follows:
> ./configure --with-pgport=5432 \
>             --with-x \
>             --with-perl \
>             --with-python \
>             --with-tcl \
>             --enable-odbc \
>             --enable-syslog \
>             --with-CC=cc --without-CXX \
>             --prefix=/Postgres/pgsql \
>             --with-tclconfig=/usr/local/lib \
>             --with-include=/usr/local/include

Drat. I was thinking maybe you were using multibyte --- there's some
extra string-slinging for multibyte conversion that might have been a
good place to look for a memory leak. But there's nothing above that
looks significantly different from my setup.

Again, can anyone else reproduce a memory leak during COPY IN?

> There is the table pga_layout.
> What happens if this table is missing, empty, or its contents are not
> correct with respect to COPY ...?

Nothing. The backend does not know or care anything about pgaccess'
tables...

			regards, tom lane
Hello Tom!

I would like to contact you again with two phenomena which are
important in my opinion:

1. The error "Memory exhausted in AllocSetAlloc(32)" appears on the
   "delete from table" command, also.
2. There are some warnings during compilation which seem to be strange.

To 1.:
The following protocol will show the effect of "Memory exhausted":

sh-2.04$ psql wetter
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

wetter=# vacuum analyze wetter;
VACUUM
wetter=# delete from wetter;
ERROR:  Memory exhausted in AllocSetAlloc(32)    <----------- !!!!!!!!!
wetter=# select count(*) from wetter;
  count
---------
 4457236
(1 row)

wetter=# \quit

To 2.:
During compilation there are the following warnings:

cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o regproc.o regproc.c
cc: Info: regproc.c, line 140: In this statement, an array subscript
expression is either less than zero or greater than the largest value
that can be represented by the size_t type. (badsubscript)
    result = (RegProcedure) heap_getattr(proctup,

cc -std -O4 -Olimit 2000 -I../../../../src/include -c -o inval.o inval.c
cc: Info: inval.c, line 180: In this statement, an array subscript
expression is either less than zero or greater than the largest value
that can be represented by the size_t type. (badsubscript)
    ((InvalidationUserData *) entry)->dataP[-1] =
------------------------------------------------^
cc: Info: inval.c, line 202: In this statement, an array subscript
expression is either less than zero or greater than the largest value
that can be represented by the size_t type. (badsubscript)
    &((InvalidationUserData *) invalid)->dataP[-1];
-------------------------------------------------------------------^

There are some more warnings regarding the incompatibility between
'char' and 'unsigned char' which I do not regard as serious.

Hoping to help in a way!

-- 
Mit freundlichen Gruessen / With best regards
Reiner Dassing
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> 1. The error "Memory exhausted in AllocSetAlloc(32)" appears on the
>    "delete from table" command, also.

I am still unable to reproduce this problem. Are you sure there is not
something you've failed to mention about the table --- rules, triggers,
weird default values, anything?

> 2. There are some warnings during compilation which seem to be strange.

> cc: Info: regproc.c, line 140: In this statement, an array subscript
> expression is either less than zero or greater than the largest value
> that can be represented by the size_t type. (badsubscript)

Your compiler vendors appear not to have read the C specification very
carefully --- array subscripts have integer type, not unsigned type, and
there is nothing invalid about the code being complained of. (The
warning in regproc.c is particularly idiotic, because the compiler
should also have been able to deduce that the code being complained of
can never be reached --- it's inside an "if" with a constant condition.)

			regards, tom lane
Hello Tom!

Sorry for coming up again with the same problem:
COPY FROM fails with "Memory exhausted in AllocSetAlloc(40)"

After inserting all the data by INSERTs, I ran the following script
with 'psql -e wetter < wetter.sql':

wetter.sql contains the following:

COPY wetter to '/Postgres/backup/h.h';
ALTER TABLE wetter RENAME TO o_wetter;
DROP INDEX wetter_pkey;
DROP INDEX idx_wetter_epoche;
DROP INDEX wetter_wetter_sensor_id_idx;
--
CREATE TABLE "wetter" (
        "sensor_id" int4 NOT NULL REFERENCES sensoren_an_orten,
        "epoche" datetime NOT NULL,
        "wert" float4 NOT NULL,
        PRIMARY KEY (sensor_id,epoche));
REVOKE ALL on "wetter" from PUBLIC;
GRANT ALL on "wetter" to "postgres";
CREATE INDEX wetter_epoche_idx ON wetter(epoche);
CREATE INDEX wetter_sensor_id_idx ON wetter(sensor_id);
--
--
COPY wetter FROM '/Postgres/backup/h.h';
SELECT count(*) FROM o_wetter;
SELECT count(*) FROM wetter;

Now: the statement "COPY wetter FROM '/Postgres/backup/h.h';" ends with

    ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(40)

The Postgres log says:

May  9 09:21:45 sweaty postgres[584514]: [10] DEBUG:  copy: line 1315779, XLogWrite: new log file created - consider increasing WAL_FILES
May  9 09:23:27 sweaty postgres[584514]: [11] ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(40)
May  9 09:23:27 sweaty postgres[584514]: [11] ERROR:  copy: line 1638280, Memory exhausted in AllocSetAlloc(40)
May  9 09:23:27 sweaty postgres[584514]: [12] DEBUG:  AbortCurrentTransaction

This was done under Postgres version 7.1.1.
There is no further trigger, rule, or anything of the sort.

I hope this helps!

-- 
Mit freundlichen Gruessen / With best regards
Reiner Dassing
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> Sorry for coming up again with the same problem:
> COPY FROM fails with "Memory exhausted in AllocSetAlloc(40)"

> CREATE TABLE "wetter" (
>         "sensor_id" int4 NOT NULL REFERENCES sensoren_an_orten,
>         "epoche" datetime NOT NULL,
>         "wert" float4 NOT NULL,
>         PRIMARY KEY (sensor_id,epoche));

Ah-hah, so there *is* an AFTER trigger on "wetter", namely the one that
enforces the REFERENCES check. Probably you are running out of memory
for the list of pending trigger activations; there will be one entry in
it per row inserted during the current transaction, so I can easily
believe that you might have a problem after a couple million rows have
been loaded by COPY. We have a TODO item to allow the trigger list to
be shoved out to disk when it gets too long, but that's not done yet...

I suggest you either break your data load into chunks of no more than a
million rows, or (if you are sure the incoming data is good) disable the
table's triggers during the bulk load. For an example of doing that,
look at the script that pg_dump emits for a data-only backup.

			regards, tom lane
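[For readers of the archive: a hedged sketch of the trick Tom alludes to, in the style pg_dump of that era emitted for data-only backups. It zeroes the table's trigger count in pg_class before the load and restores it afterwards; the table and file names are taken from this thread, and this bypasses the REFERENCES check entirely, so use it only on data known to be good:]

BEGIN;
-- Hypothetical example, not Tom's literal script: suppress trigger
-- firing (including the FK check) by zeroing the trigger count.
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'wetter';

COPY wetter FROM '/Postgres/backup/h.h';

-- Restore the real trigger count from the system catalog.
UPDATE pg_class SET reltriggers = (
        SELECT count(*) FROM pg_trigger
        WHERE pg_class.oid = pg_trigger.tgrelid)
WHERE relname = 'wetter';
COMMIT;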