Thread: Out of free buffers... HELP!

Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
Hi all.

I have a large query that I'm trying to run, but it never finishes.  I get
an error message and it quits.

I'm doing:

psql ids -c 'select src,dst,count(*) from brick* where src_port=135 group by
src,dst' > /tmp/135.dat

and I get:

ERROR:  out of free buffers: time to abort !

Any ideas on how to get this query to run?

Thanx,
Mike Diehl.



Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
Hi all.

I sent this once before, but didn't see it appear on the list...  So here we
go again...

I have a large query that I'm trying to run, but it never finishes.  I
get an error message and it quits.

I'm doing:

psql ids -c 'select src,dst,count(*) from brick* where src_port=135
group by src,dst' > /tmp/135.dat

and I get:

ERROR:  out of free buffers: time to abort !

Any ideas on how to get this query to run?

Thanx,
Mike Diehl.



Re: Out of free buffers... HELP!

From
Haller Christoph
Date:
As long as there are no nulls allowed in column 'dst' the 
select src,dst,count(dst) from ... 
should retrieve the same result. 
Try it. It should run faster anyway. 
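
Spelled out against the command from the original post (this assumes 'dst'
is declared NOT NULL, so both forms count exactly the same rows):

psql ids -c 'select src,dst,count(dst) from brick* where src_port=135
group by src,dst' > /tmp/135.dat
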
Maybe there are other ways to word your query, but without
more knowledge about your table structure and intentions
I can't tell you what they would be.
Alternatively, you may want to report your error to the 
[HACKERS] mailing list - there you might even find more
detailed support. 
Regards, Christoph 
> 
> Hi all.
> 
> I sent this once before, but didn't see it appear on the list...  So here we
> go again...
> 
> I have a large query that I'm trying to run, but it never finishes.  I
> get an error message and it quits.
> 
> I'm doing:
> 
> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> group by src,dst' > /tmp/135.dat
> 
> and I get:
> 
> ERROR:  out of free buffers: time to abort !
> 
> Any ideas on how to get this query to run?
> 
> Thanx,
> Mike Diehl.
> 



Re: Out of free buffers... HELP!

From
darcy@druid.net (D'Arcy J.M. Cain)
Date:
Thus spake Diehl, Jeffrey
> I have a large query that I'm trying to run, but it never finishes.  I
> get an error message and it quits.
> 
> I'm doing:
> 
> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> group by src,dst' > /tmp/135.dat

Hard to tell without knowing more but perhaps you need another table
instead of/in addition to this one that just tracks the counts.  You
can use a trigger to increment it.  If you only allow inserts this
should be easy.  Deletes are easy too.  Updates are a little trickier.
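
A rough sketch of that approach (the summary table, its column types, and the
brick_001 child-table name are all illustrative guesses; this uses present-day
plpgsql spelling, ignores concurrent inserts, and assumes plpgsql is installed;
on a 7.1-era server the function would be declared to return 'opaque' rather
than 'trigger'):

create table port135_counts (
    src   inet,
    dst   inet,
    hits  bigint not null default 0,
    primary key (src, dst)
);

create function bump_port135_count() returns trigger as '
begin
    if new.src_port = 135 then
        update port135_counts
           set hits = hits + 1
         where src = new.src and dst = new.dst;
        if not found then
            insert into port135_counts (src, dst, hits)
            values (new.src, new.dst, 1);
        end if;
    end if;
    return new;
end;
' language 'plpgsql';

-- one trigger per brick table
create trigger brick_001_counts
    after insert on brick_001
    for each row execute procedure bump_port135_count();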

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.


Re: Out of free buffers... HELP!

From
Tom Lane
Date:
darcy@druid.net (D'Arcy J.M. Cain) writes:
> Thus spake Diehl, Jeffrey
>> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
>> group by src,dst' > /tmp/135.dat

> Hard to tell without knowing more but perhaps you need another table
> instead of/in addition to this one that just tracks the counts.  You
> can use a trigger to increment it.  If you only allow inserts this
> should be easy.  Deletes are easy too.  Updates are a little trickier.

The thing is, you shouldn't ever get "out of free buffers", period.
Jeffrey must be running into some kind of buffer-leak bug ... but it's
hard to think what, unless he's running an extremely old PG version.
We haven't seen reports of buffer leaks in a long time.  So I'd like
to run that down, quite independently of whether he should change his
query for efficiency.
        regards, tom lane


Re: Out of free buffers... HELP!

From
Tom Lane
Date:
"Diehl, Jeffrey" <jdiehl@sandia.gov> writes:
> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> group by src,dst' > /tmp/135.dat
> and I get:
> ERROR:  out of free buffers: time to abort !

Oh?  What PG version is this?  What is the schema you are actually
working with --- how many tables are implied by "brick*", for example?
        regards, tom lane


Re: Out of free buffers... HELP!

From
Kovacs Baldvin
Date:
> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> group by src,dst' > /tmp/135.dat
>
> and I get:
>
> ERROR:  out of free buffers: time to abort !

Does anybody know whether it is the pgsql buffers that are insufficient,
or whether it is the operating system that raises this error?

Regards,
Baldvin






Re: Out of free buffers... HELP!

From
"Josh Berkus"
Date:
Mike,

> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> group by src,dst' > /tmp/135.dat
>
> and I get:
>
> ERROR: out of free buffers: time to abort !
>
> Any ideas on how to get this query to run?

You may be using the default buffer settings for Postgres on a large
database. Or your computer may be running a lot of software and not
have enough physical memory.

Suggestion:
1. Check memory usage on your machine.
2. If memory usage is low, adjust the shared_buffers setting in the
postgresql.conf file (a sketch follows this list). It defaults to 16; I have
mine set at 2048.
3. For more tuning help, see techdocs.postgresql.org.
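
A sketch of what that change looks like (the values are only examples;
shared_buffers is counted in 8 kB disk blocks, so 2048 is roughly 16 MB of
shared memory, and the postmaster has to be restarted for it to take effect):

# in postgresql.conf
shared_buffers = 2048       # 2048 * 8 kB = ~16 MB

# then, from the shell
pg_ctl restart              # assumes PGDATA points at your data directory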

-Josh

______AGLIO DATABASE SOLUTIONS___________________________
                                       Josh Berkus
  Complete information technology      josh@agliodbs.com
   and data management solutions       (415) 565-7293
  for law firms, small businesses        fax 621-2533
    and non-profit organizations.      San Francisco


Re: Out of free buffers... HELP!

From
Mark kirkwood
Date:
Previously:
>psql ids -c 'select src,dst,count(*) from brick* where src_port=135
>group by src,dst' > /tmp/135.dat

This is just a guess, but increasing the parameters shared_buffers and
sort_mem might help.

For example, if your table is about 1 GB in size, then try shared_buffers=10000
and sort_mem=20000 (you will need an extra ~100 MB of RAM for this).
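
That figure checks out, assuming the stock 8 kB block size:

shared_buffers = 10000  ->  10000 * 8 kB  = ~80 MB of shared memory
sort_mem       = 20000  ->  20000 kB      = ~20 MB per sort
                                            ---------------------
                                            ~100 MB of extra RAM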

Cheers

Mark


Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
I recently lost a hard drive and had to re-install, so the PG version I am
using is only about 3 weeks old.  However, as I said in my original post, I
have A LOT of data in this database.  Brick* implies 1334 tables, some of
which have 2.5M records...  All told, I have about 355G of data.  I don't
care if it takes DAYS to run queries like this, but I need them to finish.

Ok, you may be thinking that I am nuts to try to store this much data and to
break it up into so many tables.  Well, I used to have it all in one
table....  It took several days to vacuum it, and because inserts are
happening all the time, I was unable to have the tables indexed and still be
able to keep up with the incoming data.  This new structure addresses these
and other issues.

Any help would be most appreciated.

Mike Diehl,
Network Monitoring Tool Devl.
Sandia National Laboratories.
(505) 284-3137
jdiehl@sandia.gov

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: September 18, 2001 5:26 PM
> To: darcy@druid.net
> Cc: jdiehl@sandia.gov; pgsql-sql@postgresql.org
> Subject: Re: [SQL] Out of free buffers... HELP!
> 
> 
> darcy@druid.net (D'Arcy J.M. Cain) writes:
> > Thus spake Diehl, Jeffrey
> >> psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> >> group by src,dst' > /tmp/135.dat
> 
> > Hard to tell without knowing more but perhaps you need another table
> > instead of/in addition to this one that just tracks the counts.  You
> > can use a trigger to increment it.  If you only allow inserts this
> > should be easy.  Deletes are easy too.  Updates are a little trickier.
> 
> The thing is, you shouldn't ever get "out of free buffers", period.
> Jeffrey must be running into some kind of buffer-leak bug ... but it's
> hard to think what, unless he's running an extremely old PG version.
> We haven't seen reports of buffer leaks in a long time.  So I'd like
> to run that down, quite independently of whether he should change his
> query for efficiency.
> 
>             regards, tom lane
> 



Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
Ok, can someone explain to me why this first query might run faster than the
second?

select src,dst,count(dst) from data group by src,dst;

select src,dst,count(*) from data group by src,dst;

I've always used the second form.  If the first is faster, I need to know.

Thanx,
Mike Diehl,
Network Monitoring Tool Devl.
Sandia National Laboratories.
(505) 284-3137
jdiehl@sandia.gov

> -----Original Message-----
> From: Haller Christoph [mailto:ch@rodos.fzk.de]
> Sent: September 18, 2001 7:18 AM
> To: jdiehl@sandia.gov
> Cc: pgsql-sql@postgresql.org
> Subject: Re: [SQL] Out of free buffers... HELP!
> 
> 
> As long as there are no nulls allowed in column 'dst' the 
> select src,dst,count(dst) from ... 
> should retrieve the same result. 
> Try it. It should run faster anyway. 
> Maybe there are other ways to word your query, but without
> more knowledge about your table structure and intentions
> I can't tell you what they would be.
> Alternatively, you may want to report your error to the 
> [HACKERS] mailing list - there you might even find more
> detailed support. 
> Regards, Christoph 
> > 
> > Hi all.
> > 
> > I sent this once before, but didn't see it appear on the list...  So here
> > we go again...
> > 
> > I have a large query that I'm trying to run, but it never finishes.  I
> > get an error message and it quits.
> > 
> > I'm doing:
> > 
> > psql ids -c 'select src,dst,count(*) from brick* where src_port=135
> > group by src,dst' > /tmp/135.dat
> > 
> > and I get:
> > 
> > ERROR:  out of free buffers: time to abort !
> > 
> > Any ideas on how to get this query to run?
> > 
> > Thanx,
> > Mike Diehl.
> > 



Re: Out of free buffers... HELP!

From
Tom Lane
Date:
"Diehl, Jeffrey" <jdiehl@sandia.gov> writes:
> Brick* implies 1334 tables

Hmm ... I wonder if the query is somehow holding onto a buffer pin for
the last block of each successively-accessed table?  Will look into it.

What -B (# of shared buffers) setting are you running the postmaster
with?  Try setting it to 1500 or more, if it's not already.
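
For reference, two equivalent ways to do that, assuming a stock installation
managed with pg_ctl (1500 buffers is only about 12 MB at the default 8 kB
block size):

pg_ctl restart -o "-B 1500"        # pass the flag to the postmaster, or ...
shared_buffers = 1500              # ... set this in postgresql.conf and restart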
        regards, tom lane


Re: Out of free buffers... HELP!

From
"Josh Berkus"
Date:
Mike,

> I recently lost a hard drive and had to re-install, so the PG version
> I am using is only about 3 weeks old.  However, as I said in my original
> post, I have A LOT of data in this database.  Brick* implies 1334 tables,
> some of which have 2.5M records...  All told, I have about 355G of data.
> I don't care if it takes DAYS to run queries like this, but I need them
> to finish.

Um, Mike, I think you need *professional* DB tuning/performance help.
Maybe a contract with PostgreSQL Inc. or Bruce Momjian or Red Hat?  Or
me, if your budget is limited (but I'm not as good as Bruce or Chris)

-Josh

______AGLIO DATABASE SOLUTIONS___________________________
                                       Josh Berkus
  Complete information technology      josh@agliodbs.com
   and data management solutions       (415) 565-7293
  for law firms, small businesses        fax 621-2533
    and non-profit organizations.      San Francisco


Re: Out of free buffers... HELP!

From
Tom Lane
Date:
"Diehl, Jeffrey" <jdiehl@sandia.gov> writes:
> Ok, can someone explain to me why this first query might run faster than the
> second?
> select src,dst,count(dst) from data group by src,dst;
> select src,dst,count(*) from data group by src,dst;

Hmm, I'd expect the second to be marginally faster.  count(*) counts the
number of rows matching the WHERE condition; count(foo) counts the
number of rows matching the WHERE condition for which foo is not NULL.
So count(foo) ought to involve a couple extra cycles to test for
non-NULL-ness of the specified field.  But it's hard to believe you
could measure the difference --- what results are you getting?
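
A tiny illustration of the difference, with throwaway data:

create temp table demo (src text, dst text);
insert into demo values ('a', 'x');
insert into demo values ('a', 'x');
insert into demo values ('a', NULL);

select src, dst, count(*), count(dst)
from demo
group by src, dst;

-- the ('a','x') group reports 2 for both aggregates; the ('a',NULL) group
-- reports count(*) = 1 but count(dst) = 0, because count(dst) skips NULLs.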
        regards, tom lane


Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
Well, this was just a suggestion to make my queries run fast.  I didn't
quite understand the difference between the two, so I thought I'd ask.

Thanx for clearing that up for me.

Mike Diehl,
Network Monitoring Tool Devl.
Sandia National Laboratories.
(505) 284-3137
jdiehl@sandia.gov

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: September 20, 2001 12:56 AM
> To: Diehl, Jeffrey
> Cc: 'Haller Christoph'; pgsql-sql@postgresql.org
> Subject: Re: [SQL] Out of free buffers... HELP!
> 
> 
> "Diehl, Jeffrey" <jdiehl@sandia.gov> writes:
> > Ok, can someone explain to me why this first query might 
> run faster than the
> > second?
> > select src,dst,count(dst) from data group by src,dst;
> > select src,dst,count(*) from data group by src,dst;
> 
> Hmm, I'd expect the second to be marginally faster.  count(*) 
> counts the
> number of rows matching the WHERE condition; count(foo) counts the
> number of rows matching the WHERE condition for which foo is not NULL.
> So count(foo) ought to involve a couple extra cycles to test for
> non-NULL-ness of the specified field.  But it's hard to believe you
> could measure the difference --- what results are you getting?
> 
>             regards, tom lane
> 



Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
Um no, I just need a smaller problem to solve.  The database worked quite
well when the problem was half this size.  Additionally, I'm processing
back-logged data right now.  I've also recently redesigned the database
schema to take advantage of inheritance.  This has enabled me to write
larger queries. ;^)
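
As a rough sketch of that kind of layout (the column names come from the
query posted earlier; the types and the child-table names are illustrative
guesses):

create table brick (
    src       inet,
    dst       inet,
    src_port  integer
);

create table brick_001 () inherits (brick);   -- one child table per chunk
create table brick_002 () inherits (brick);
-- ... and so on

-- the trailing * asks for the parent plus all of its children:
select src, dst, count(*)
from brick*
where src_port = 135
group by src, dst;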

I've been given a very large problem to solve and my management understands
that it is hard.  They also understand that this is a development process.
Thanks to the extreme reliability of Linux and PostgreSQL, we have been able
to get very useful results so far.  It is these initial results which have
driven my management to request that I take on a larger chunk of the
problem.  That is what has caused the situation I am in now.  Imagine this:
"Well, if 30 days' worth of data was that useful, just think of what we
could do with 60 days'...!"  And they are right, if it can be done...  If it
can't, I'll tell them and they will understand.

Thanx for your input,
Mike Diehl,
Network Monitoring Tool Devl.
Sandia National Laboratories.
(505) 284-3137
jdiehl@sandia.gov

> -----Original Message-----
> From: Josh Berkus [mailto:josh@agliodbs.com]
> Sent: September 19, 2001 6:36 PM
> To: Diehl, Jeffrey; 'Tom Lane'; darcy@druid.net
> Cc: Diehl, Jeffrey; pgsql-sql@postgresql.org
> Subject: Re: [SQL] Out of free buffers... HELP!
> 
> 
> Mike,
> 
> > I recently lost a hard drive and had to re-install, so the PG version
> > I am using is only about 3 weeks old.  However, as I said in my original
> > post, I have A LOT of data in this database.  Brick* implies 1334 tables,
> > some of which have 2.5M records...  All told, I have about 355G of data.
> > I don't care if it takes DAYS to run queries like this, but I need them
> > to finish.
> 
> Um, Mike, I think you need *professional* DB tuning/performance help.
> Maybe a contract with PostgreSQL Inc. or Bruce Momjian or Red Hat?  Or
> me, if your budget is limited (but I'm not as good as Bruce or Chris)
> 
> -Josh
> 
> ______AGLIO DATABASE SOLUTIONS___________________________
>                                        Josh Berkus
>   Complete information technology      josh@agliodbs.com
>    and data management solutions       (415) 565-7293
>   for law firms, small businesses        fax 621-2533
>     and non-profit organizations.      San Francisco
> 



Re: Out of free buffers... HELP!

From
"Josh Berkus"
Date:
Diehl,

> Um no, I just need a smaller problem to solve.  The database worked
> quite well when the problem was half this size.
<snip>
> could do with 60 days'...!"  And they are right, if it can be done...
> If it can't, I'll tell them and they will understand.

What I'm saying is, based on your description, you need a lot of hands-on
performance tuning help.  Someone who knows the pgsql performance
parameters and can experiment with tweaking and tuning to avoid swamping
the memory of the machine(s) you're running on.  Because that's what it
sounds like the problem is.

However, that sort of help will take some paid consultant time and
possibly hardware.  I think that if you can run a query on 150gb of
data, you can probably run it on 355gb ... you just need some help
performance tuning.  But I don't think general advice on a list is gonna
do it, y'know?

-Josh

______AGLIO DATABASE SOLUTIONS___________________________
                                       Josh Berkus
  Complete information technology      josh@agliodbs.com
   and data management solutions       (415) 565-7293
  for law firms, small businesses        fax 621-2533
    and non-profit organizations.      San Francisco


Re: Out of free buffers... HELP!

From
"Diehl, Jeffrey"
Date:
I hear you.  I'm just not having a good day today.  My biggest problem is that
my project/time ratio is way too high.

I agree with you, though.  If I can get it to work on 150Gb, I can probably
get it to work on 355Gb.  I just may have to change the manner in which I
perform these queries.

Mike Diehl,
Network Monitoring Tool Devl.
Sandia National Laboratories.
(505) 284-3137
jdiehl@sandia.gov

> -----Original Message-----
> From: Josh Berkus [mailto:josh@agliodbs.com]
> Sent: September 20, 2001 1:50 PM
> To: Diehl, Jeffrey; 'Josh Berkus'; darcy@druid.net
> Cc: Diehl, Jeffrey; pgsql-sql@postgresql.org
> Subject: Re: [SQL] Out of free buffers... HELP!
> 
> 
> Diehl,
> 
> > Um no, I just need a smaller problem to solve.  The database worked
> > quite well when the problem was half this size.
> <snip>
> > could do with 60 days'...!"  And they are right, if it can be done...
> > If it can't, I'll tell them and they will understand.
> 
> What I'm saying is, based on your description, you need a lot of hands-on
> performance tuning help.  Someone who knows the pgsql performance
> parameters and can experiment with tweaking and tuning to avoid swamping
> the memory of the machine(s) you're running on.  Because that's what it
> sounds like the problem is.
> 
> However, that sort of help will take some paid consultant time and
> possibly hardware.  I think that if you can run a query on 150gb of
> data, you can probably run it on 355gb ... you just need some help
> performance tuning.  But I don't think general advice on a list is gonna
> do it, y'know?
> 
> -Josh
> 
> ______AGLIO DATABASE SOLUTIONS___________________________
>                                        Josh Berkus
>   Complete information technology      josh@agliodbs.com
>    and data management solutions       (415) 565-7293
>   for law firms, small businesses        fax 621-2533
>     and non-profit organizations.      San Francisco
> 



Re: Out of free buffers... HELP!

From
Tom Lane
Date:
[ digging through some old email ]

I wrote:
> "Diehl, Jeffrey" <jdiehl@sandia.gov> writes:
>> Brick* implies 1334 tables

> Hmm ... I wonder if the query is somehow holding onto a buffer pin for
> the last block of each successively-accessed table?  Will look into it.

I find that indeed 7.1 and older versions do that, but 7.2 does not
(due to simplifications in heap_getnext).  So this problem should be
gone as of the current release.
        regards, tom lane