Thread: URGENT: pg_dump & Postgres 7.2b4
Hi all, when I try to do a simple pg_dump I get the following errors:

2002-01-10 14:57:11 FATAL 2:  open of /usr/local/pgsql/data/pg_clog/0000 failed: No such file or directory
2002-01-10 14:57:11 DEBUG:  server process (pid 1201) exited with exit code 2
2002-01-10 14:57:11 DEBUG:  terminating any other active server processes
2002-01-10 14:57:11 NOTICE:  Message from PostgreSQL backend:
        The Postmaster has informed me that some other backend
        died abnormally and possibly corrupted shared memory.
        I have rolled back the current transaction and am
        going to terminate your database system connection and exit.
        Please reconnect to the database system and repeat your query.
[the same NOTICE repeated three more times, once per connected backend]
2002-01-10 14:57:11 DEBUG:  all server processes terminated; reinitializing shared memory and semaphores
2002-01-10 14:57:11 DEBUG:  database system was interrupted at 2002-01-10 14:55:48 GMT
2002-01-10 14:57:11 DEBUG:  checkpoint record is at 0/1AD7A188
2002-01-10 14:57:11 DEBUG:  redo record is at 0/1AD75E88; undo record is at 0/0; shutdown FALSE
2002-01-10 14:57:11 DEBUG:  next transaction id: 2550628; next oid: 294388
2002-01-10 14:57:11 DEBUG:  database system was not properly shut down; automatic recovery in progress
2002-01-10 14:57:11 DEBUG:  redo starts at 0/1AD75E88
2002-01-10 14:57:11 DEBUG:  ReadRecord: record with zero length at 0/1ADD14A8
2002-01-10 14:57:11 DEBUG:  redo done at 0/1ADD1484
2002-01-10 14:57:12 FATAL 1:  The database system is starting up
[the same FATAL repeated six more times]
2002-01-10 14:57:13 DEBUG:  database system is ready
2002-01-10 14:57:14 ERROR:  Cannot insert a duplicate key into unique index eq_profile_pkey
2002-01-10 14:57:36 ERROR:  Cannot insert a duplicate key into unique index eq_profile_pkey
2002-01-10 14:57:57 ERROR:  Cannot insert a duplicate key into unique index account_pkey
2002-01-10 14:58:18 ERROR:  Cannot insert a duplicate key into unique index eq_profile_pkey
2002-01-10 14:58:44 ERROR:  Cannot insert a duplicate key into unique index eq_profile_pkey

And in the console I got:

pg_dump: query to get data of sequence "account_num_seq" failed: FATAL 2:  open of /usr/local/pgsql/data/pg_clog/0000 failed: No such file or directory
server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
The only file I have in the pg_clog directory is 0002. Help, please. Thanks in advance.
This error also appears during normal operations (querying/updating the database). Is there a way to clean up those commit logs? I tried removing 0002, but it didn't help (the error then occurred on 0002 as well).

"Christian Meunier" <jelan@magelo.com> wrote in message news:a1kc2b$2m6h$1@news.tht.net...
> Hi all, when i try to do a simple pg_dump i got the following errors:
> [original message, quoted in full above, snipped]
"Christian Meunier" <jelan@magelo.com> writes:
> Is there a way to clean up those commit logs? I tried removing 0002, but it
> didn't help (the error then occurred on 0002 as well).

You did *what*?

You may as well have done "rm -rf $PGDATA". You're hosed.

I would have liked to look into why it was still trying to reference the 0000 segment after removing it; that suggests a logic problem somewhere. But with the database now completely nonfunctional due to loss of the active clog segment, there's probably no way to learn anything useful.

Don't remove files when you don't know what they are.

			regards, tom lane
Don't panic, it was just a stupid experiment, and I had saved the file and restored it afterwards. ;) If you can tell me how to figure out why Postgres is still trying to access a segment it deleted, I'll do it and report back. Thanks in advance.

----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Christian Meunier" <jelan@magelo.com>
Cc: <pgsql-general@postgresql.org>
Sent: Thursday, January 10, 2002 8:01 PM
Subject: Re: [GENERAL] URGENT: pg_dump & Postgres 7.2b4
> [quoted message snipped]
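For anyone tempted by similar experiments: snapshot the clog segments before touching anything. A minimal sketch, assuming the data directory from the log output above (`/usr/local/pgsql/data`) and that the postmaster has been stopped first:

```shell
# Copy the clog segments aside before experimenting, so they can be
# restored intact. PGDATA is an assumption taken from the log above.
PGDATA=${PGDATA:-/usr/local/pgsql/data}
BACKUP=$(mktemp -d)

cp -p "$PGDATA"/pg_clog/* "$BACKUP"/
echo "clog segments saved to $BACKUP"
```

Restoring is the reverse copy, again with the server shut down, so the backend never sees a segment vanish or change underneath it.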
"Jelan" <jelan@magelo.com> writes:
> Dont panic, it was just a stupid try and i had saved the file and then
> restored it ;)

Oh good. In that case see my followup to pghackers. I think your problem is isolated to the sequence object(s), and can be summarized as: nextval() will work, pg_dump'ing a sequence will not.

So, to get out of trouble, try this: for each sequence, do nextval() to note the current value, drop the sequence, recreate it, and do setval() to restore the count. Then you'll be in a state where you can pg_dump. Then try not to go a million transactions between sequence creations and pg_dumps while you're using 7.2b4 :-(. There will be a fix in the next beta version.

Oh, and many thanks for finding this! Would've been embarrassing to have this glitch escape beta testing...

			regards, tom lane
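The procedure Tom describes could look roughly like this for one sequence (account_num_seq is taken from the log above; the value 4711 is a placeholder for whatever nextval() actually returns on your system):

```sql
-- Note the current value. nextval() consumes one number, so the
-- restored counter will simply continue one past the observed value.
SELECT nextval('account_num_seq');   -- suppose this returns 4711

-- Drop and recreate the sequence to give it a fresh on-disk state.
DROP SEQUENCE account_num_seq;
CREATE SEQUENCE account_num_seq;

-- Restore the counter so new rows continue where the old ones left off.
SELECT setval('account_num_seq', 4711);
```

This is a sketch, not a tested recipe for 7.2b4: run it for each sequence in the database, inside a quiet window so no other session grabs values between the nextval() and the DROP.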
I am not so sure that nextval() will always work; I tend to get SQL errors like:

Exception: ERROR:  Cannot insert a duplicate key into unique index account_pkey

This is something I never saw with 7.1.

Best regards