Thread: ERROR: out of free buffers: time to abort!
To get rid of this message, do I increase shared_buffers?

ERROR: out of free buffers: time to abort!

This is taking place in an UPDATE.
Joseph Shraibman <jks@selectacast.net> writes:
> To get rid of this message, do I increase shared_buffers?
> ERROR: out of free buffers: time to abort!

[blinks] Yeah, you do ... but I'd be interested to see how you caused
that to happen.  Even at the rock-bottom setting for shared_buffers,
I think you must have been doing something out of the ordinary ...

			regards, tom lane
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> To get rid of this message, do I increase shared_buffers?
>> ERROR: out of free buffers: time to abort!
>
> [blinks] Yeah, you do ... but I'd be interested to see how you caused
> that to happen.  Even at the rock-bottom setting for shared_buffers,
> I think you must have been doing something out of the ordinary ...
>
> 			regards, tom lane

Here is what I'm doing:

begin;
intarray = select intfield from table1;    (11480 entries right now)
for(i = 0; i < intarray.length; i++){
    select some data for that int value
    if (some condition){
        add int to list;
    }
}
chunks = split list into chunks;    (to avoid an in() with too many
                                     things in it, max of 2000 per chunk)
for each chunk{
    // this update causes the error
    update table2 set status = newstatus where id in (chunk[0], chunk[1] ...);
}
delete from table1;
end;
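Spelled out, each chunk's statement is just an update with a literal
IN() list; the ids and the status value below are made-up stand-ins
(the real lists run up to 2000 entries):

    update table2 set status = 2 where id in (101, 102, 103, 104);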
Joseph Shraibman <jks@selectacast.net> writes:
> // this update causes the error
> update table2 set status = newstatus where id in (chunk[0], chunk[1] ...);

Hmm.  Do you have a particularly large number of indexes on that table?
Are you running a whole bunch of these operations in parallel?

			regards, tom lane
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> // this update causes the error
>> update table2 set status = newstatus where id in (chunk[0], chunk[1] ...);
>
> Hmm.  Do you have a particularly large number of indexes on that table?
> Are you running a whole bunch of these operations in parallel?

I have 3 indexes on the table, and I do this operation in a cron job
once a day, not in parallel.
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> To get rid of this message, do I increase shared_buffers?
>> ERROR: out of free buffers: time to abort!
>
> [blinks] Yeah, you do ... but I'd be interested to see how you caused
> that to happen.  Even at the rock-bottom setting for shared_buffers,
> I think you must have been doing something out of the ordinary ...
>
> 			regards, tom lane

I upped the shared_buffers from 128 to 256 and I'm still getting the error.
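For the record, "upping" it just means editing postgresql.conf and
restarting the postmaster; roughly:

    # postgresql.conf -- only takes effect after a postmaster restart
    shared_buffers = 256        # counted in 8kB buffers, so 256 = 2MB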
Joseph Shraibman <jks@selectacast.net> writes:
> I upped the shared_buffers from 128 to 256 and I'm still getting the error.

Hmph.  I wonder if you've got a bug there.  What PG version is this?

Is it possible that the update is hitting rows that have been recently
updated in concurrent transactions?

			regards, tom lane
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> I upped the shared_buffers from 128 to 256 and I'm still getting the error.
>
> Hmph.  I wonder if you've got a bug there.  What PG version is this?

select version();
                                         version
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2 20020903 (Red Hat Linux 8.0 3.2-7)
(1 row)

> Is it possible that the update is hitting rows that have been recently
> updated in concurrent transactions?

Unlikely.  I run this cron job every day, and every day I get the same
error.  The whole thing should be pretty quick.
Joseph Shraibman <jks@selectacast.net> writes:
> Unlikely.  I run this cron job every day, and every day I get the same
> error.  The whole thing should be pretty quick.

Well, I can't reproduce the problem here, and in general this isn't an
error message we hear of very often.  So there's got to be something
unusual about what you're doing.  Any chance that you're invoking
triggers recursively, or something like that?  Could you possibly get
a stack trace from the point of the elog call?

			regards, tom lane
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> Unlikely.  I run this cron job every day, and every day I get the same
>> error.  The whole thing should be pretty quick.
>
> Well, I can't reproduce the problem here, and in general this isn't an
> error message we hear of very often.  So there's got to be something
> unusual about what you're doing.  Any chance that you're invoking
> triggers recursively, or something like that?  Could you possibly get
> a stack trace from the point of the elog call?
>
> 			regards, tom lane

My update looks like:

UPDATE tablename SET intfield = 2 WHERE keyfield IN( ... )

If I lower the number of items in the IN() then I don't get the error,
but the cutoff is a moving target: 205 worked a few minutes ago, but
now 200 doesn't.  A vacuum seems to help matters.  In previous versions
of postgres I was able to do up to 10000.

I tried to make a simple test with a table with 10000 entries, but that
had no problems.  Maybe I would need a bigger table.
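The vacuum that buys some headroom is nothing special, by the way; the
table name here is a stand-in:

    vacuum tablename;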
Joseph Shraibman <jks@selectacast.net> writes:
> My update looks like:
> UPDATE tablename SET intfield = 2 WHERE keyfield IN( ... )

> If I lower the number of items in the IN() then I don't get the error,
> but the cutoff is a moving target: 205 worked a few minutes ago, but
> now 200 doesn't.  A vacuum seems to help matters.  In previous versions
> of postgres I was able to do up to 10000.

Can we see the EXPLAIN plan for the failing update?

Also, I've forgotten exactly which PG version you're using?

			regards, tom lane
Tom Lane wrote:
> Joseph Shraibman <jks@selectacast.net> writes:
>> My update looks like:
>> UPDATE tablename SET intfield = 2 WHERE keyfield IN( ... )
>
>> If I lower the number of items in the IN() then I don't get the error,
>> but the cutoff is a moving target: 205 worked a few minutes ago, but
>> now 200 doesn't.  A vacuum seems to help matters.  In previous
>> versions of postgres I was able to do up to 10000.
>
> Can we see the EXPLAIN plan for the failing update?

The EXPLAIN is really big:

 Index Scan using d_pkey, d_pkey, ... (repeated a few thousand times) on d
     (cost=0.00..82560.71 rows=4809 width=424)
   Index Cond: ((key = 1) OR (key = 2) ... (repeated a few thousand times))
(2 rows)

> Also, I've forgotten exactly which PG version you're using?

PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2
20020903 (Red Hat Linux 8.0 3.2-7)
Joseph Shraibman <jks@selectacast.net> writes:
>>> UPDATE tablename SET intfield = 2 WHERE keyfield IN( ... )

[ scratches head ... ]  I just tried to duplicate this again, and this
time I see the problem.  I'd swear I'm not doing anything different
from before though ...

Anyway, I'm on it.

			regards, tom lane
Joseph Shraibman <jks@selectacast.net> writes:
> My update looks like:
> UPDATE tablename SET intfield = 2 WHERE keyfield IN( ... )

> If I lower the number of items in the IN() then I don't get the error,
> but the cutoff is a moving target.

I've applied the attached patch to prevent this problem in 7.3.*.
A better solution will appear in 7.4, but I don't want to stick it
into the stable branch with no beta testing ...

			regards, tom lane

*** src/backend/access/index/indexam.c.orig	Wed Jan  8 14:41:57 2003
--- src/backend/access/index/indexam.c	Sun Mar 23 16:44:37 2003
***************
*** 415,421 ****
--- 415,426 ----
       *
       * Note that we hold the pin on the single tuple's buffer throughout
       * the scan once we are in this state.
+      *
+      * XXX disabled for 7.3.3 because it results in intra-query buffer leak
+      * when a multi-index indexscan is done.  Full fix seems too risky to
+      * backpatch.
       */
+ #ifdef NOT_USED
      if (scan->keys_are_unique && scan->got_tuple)
      {
          if (ScanDirectionIsForward(direction))
***************
*** 433,438 ****
--- 438,444 ----
          else
              return NULL;
      }
+ #endif
  
      /* Release any previously held pin */
      if (BufferIsValid(scan->xs_cbuf))
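To connect the patch back to the symptom: the fast path it disables is
the keys_are_unique case, which (per the comment above it) holds the
pin on the current tuple's buffer across calls.  In a multi-index
indexscan (one arm per OR/IN condition, as in the EXPLAIN output
earlier) each arm keeps its own pin until the query finishes, so a
long enough IN() list pins more buffers than shared_buffers has to
give.  That would also explain why the failure threshold moved around
and why raising shared_buffers only pushed it higher.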