Thread: Dying PostgreSQL backend
Hello,
I am having problems keeping PostgreSQL alive in certain situations and I wonder if I can tune some parameters to allow it to handle higher load.
What gets logged in the syslog log file are things like this:
May 5 21:52:34 localhost postgres[25012]: [64] NOTICE: TRANSACTION ISOLATION LEVEL is READ COMMITTED
May 5 21:52:34 localhost postgres[25013]: [61-1] NOTICE: Message from PostgreSQL backend:
May 5 21:52:34 localhost postgres[25013]: [61-2] ^IThe Postmaster has informed me that some other backend
May 5 21:52:34 localhost postgres[25013]: [61-3] ^Idied abnormally and possibly corrupted shared memory.
May 5 21:52:34 localhost postgres[25013]: [61-4] ^II have rolled back the current transaction and am
May 5 21:52:34 localhost postgres[25013]: [61-5] ^Igoing to terminate your database system connection and exit.
May 5 21:52:34 localhost postgres[25013]: [61-6] ^IPlease reconnect to the database system and repeat your query.
May 5 21:52:34 localhost postgres[25015]: [61] FATAL 1: The database system is starting up
...
The DB then restarts.
This happens when I hit the DB with a few simple UPDATE statements per second.
The statements are really simple, changing three TIMESTAMP, two INT, and one VARCHAR column, using just the primary key to match the row to update.
I'm executing about 6-8 of those per second.
The DB has fewer than 10 tables, most of which are completely empty; just one (the one I'm updating) has about 12,000 pretty narrow (< 10 columns) rows.
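Roughly, the statements look like this (the table and column names here are invented for illustration; the real ones are similar):

UPDATE mytable
   SET created_at = now(),       -- three TIMESTAMP columns
       updated_at = now(),
       checked_at = now(),
       retry_count = retry_count + 1,  -- two INT columns
       priority = 2,
       status = 'processed'      -- one VARCHAR column
 WHERE id = 12345;               -- primary key match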
I have run VACUUM ANALYZE on this DB, but once I start my application, which seems to be able to do about 7,000-8,000 updates before the PostgreSQL backend starts dying, I do not run VACUUM ANALYZE again.
I have already changed a few postgresql.conf parameters:
shared_buffers = 64        # 7.x units: 64 buffers of 8KB each, i.e. only ~512KB
sort_mem = 1024            # per-sort working memory, in KB
fsync = false              # skips fsync at commit; faster, but unsafe on a crash
deadlock_timeout = 5000    # milliseconds to wait on a lock before checking for deadlock
The UPDATE statements are the only ones running against this DB (except for a simple, no-join, straightforward SELECT that runs every 20 seconds).
What other parameters can I change?
Could the problem be that I'm doing a lot of UPDATEs without running VACUUM?
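If so, I could schedule something like this alongside the application (just a sketch; "mytable" stands in for the real table name):

# postgres user's crontab: VACUUM ANALYZE the busy table every 5 minutes
*/5 * * * * /usr/bin/psql -d mydb -c "VACUUM ANALYZE mytable;"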
Thanks,
Otis
"otisg" <otisg@iVillage.com> writes: > What gets logged in syslog log file are things like this: The messages you quoted are no help, because none of them directly show the crashing backend. What I'd recommend doing is gdb'ing the core dump file to get a stack trace; that would give us some clue what's wrong. And you could do "p debug_query_string" to see what query made it crash. If you do not find a core file in the database subdirectory ($PGDATA/base/yourdbnumber/core) then you are probably running the postmaster with "ulimit -c 0" which disables core dumps. Restart it with environment "ulimit -c unlimited". regards, tom lane
Hello,
Unfortunately, I cannot get PostgreSQL to dump core despite setting ulimit -c to:
[root@linux2 otis]# ulimit -a
core file size (blocks) unlimited
And starting PostgreSQL this way:
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster start"
The backend doesn't want to dump core; it just dies and restarts, dies and restarts.
I am really running only 2 very similar UPDATE statements. They update the same table, changing only a few fields.
I tried running VACUUM ANALYZE more frequently, but that didn't make any difference.
Is there any other way that I can get more information out of postgres backend?
Thanks,
Otis
<-----Original Message----->
>
> From: Tom Lane
> Sent: 5/6/2002 12:47:31 PM
> To: otisg@iVillage.com
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Dying PostgreSQL backend
>
> "otisg" writes:
> What gets logged in syslog log file are things like this:
>
> The messages you quoted are no help, because none of them directly
> show the crashing backend.
>
> What I'd recommend doing is gdb'ing the core dump file to get a stack
> trace; that would give us some clue what's wrong. And you could do
> "p debug_query_string" to see what query made it crash.
>
> If you do not find a core file in the database subdirectory
> ($PGDATA/base/yourdbnumber/core) then you are probably running the
> postmaster with "ulimit -c 0" which disables core dumps. Restart
> it with environment "ulimit -c unlimited".
>
> regards, tom lane
>
"otisg" <otisg@iVillage.com> writes: > Unfortunately, I cannot get PostgreSQL to dump core despite setting > ulimit -c to: > [root@linux2 otis]# ulimit -a > core file size (blocks) unlimited > And starting PostgreSQL this way: > su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p > /usr/bin/postmaster start" Most likely, the su is picking up your system default of ulimit -c 0 again :-(. I'd suggest modifying the pg_ctl script to explicitly set ulimit -c before it starts the postmaster. > I am really running only 2 very similar UPDATE statements. Well, I'd like to look into it; it's not supposed to be that easy to make PG dump core ;-). But I need information. regards, tom lane
Hello,
> What I'd recommend doing is gdb'ing the core dump file to get a stack
> trace; that would give us some clue what's wrong. And you could do
> "p debug_query_string" to see what query made it crash.
>
> If you do not find a core file in the database subdirectory
> ($PGDATA/base/yourdbnumber/core) then you are probably running the
> postmaster with "ulimit -c 0" which disables core dumps. Restart
> it with environment "ulimit -c unlimited".
Yes, I fixed that.
Unfortunately I can't get gdb to look into the core file:
(gdb) p debug_query_string
No symbol table is loaded. Use the "file" command.
(gdb) file /tmp/core4
"/tmp/core4": not in executable format: File format not recognized
(gdb)
Any ideas?
I did, however, see this in the syslogd log at the time of core dump:
/usr/bin/postmaster child[4014]: starting with (postgres -d16 -v131072 -p mydb )
invoking IpcMemoryCreate(size=2015232)
I had seen the first line before (it appears every few seconds, since my application makes 5-7 updates per second), but this was the first time I saw the second one.
Maybe it means something to those familiar with the guts of PostgreSQL.
Thanks,
Otis