Thread: BUG #8013: Memory leak

BUG #8013: Memory leak

From
stiening@comcast.net
Date:
The following bug has been logged on the website:

Bug reference:      8013
Logged by:          Rae Stiening
Email address:      stiening@comcast.net
PostgreSQL version: 9.2.3
Operating system:   Suse Linux 9.3
Description:


The query:
SELECT pts_key,count(*)
         FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
pts_key

Which is executed as:
 GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
   Filter: (count(*) <> 1)
   ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
         Sort Key: pts_key
         ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
width=4)

uses all available memory (32GB).  pts_key is an integer and the table
contains about 500 million rows.

PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit

compile options:

env CFLAGS='-O3 -march=native' ./configure --with-segsize=128

Re: BUG #8013: Memory leak

From
ajmcello
Date:
unsubscribe


On Sat, Mar 30, 2013 at 7:01 AM, <stiening@comcast.net> wrote:

> The following bug has been logged on the website:
>
> Bug reference:      8013
> Logged by:          Rae Stiening
> Email address:      stiening@comcast.net
> PostgreSQL version: 9.2.3
> Operating system:   Suse Linux 9.3
> Description:
>
> The query:
> SELECT pts_key,count(*)
>          FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
> pts_key
>
> Which is executed as:
>  GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>    Filter: (count(*) <> 1)
>    ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>          Sort Key: pts_key
>          ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
> width=4)
>
> uses all available memory (32GB).  pts_key is an integer and the table
> contains about 500 million rows.
>
> PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
> 4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit
>
>
> compile options:
>
> env CFLAGS='-O3 -march=native' ./configure --with-segsize=128
>
>
>
>
> --
> Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-bugs
>

Re: BUG #8013: Memory leak

From
Jeff Lake
Date:
memory leak with 500 million rows??
sounds like too big of a db

-----------------------------------------------------
-Jeff Lake K8JSL
MichiganWxSystem.com
AllisonHouse.com
TheWeatherCenter.net
GRLevelXStuff.com

On 3/30/2013 10:01, stiening@comcast.net wrote:
> The following bug has been logged on the website:
>
> Bug reference:      8013
> Logged by:          Rae Stiening
> Email address:      stiening@comcast.net
> PostgreSQL version: 9.2.3
> Operating system:   Suse Linux 9.3
> Description:
>
> The query:
> SELECT pts_key,count(*)
>           FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
> pts_key
>
> Which is executed as:
>   GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>     Filter: (count(*) <> 1)
>     ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>           Sort Key: pts_key
>           ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
> width=4)
>
> uses all available memory (32GB).  pts_key is an integer and the table
> contains about 500 million rows.
>
> PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
> 4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit
>
>
> compile options:
>
> env CFLAGS='-O3 -march=native' ./configure --with-segsize=128
>
>
>
>

Re: BUG #8013: Memory leak

From
Greg Stark
Date:
On Sat, Mar 30, 2013 at 2:01 PM,  <stiening@comcast.net> wrote:
> uses all available memory (32GB).  pts_key is an integer and the table
> contains about 500 million rows.

Please post the schema definition and all the log messages that occur
from this. If Postgres runs out of memory it should include a dump of
the memory usage.
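
For example, one way to capture the schema (a sketch; the table name is
taken from your report, and "mydb" is a placeholder):

    -- in psql, to show the full table definition including indexes:
    \d+ tm_tm_pairs

    # or from the shell, to dump just that table's DDL:
    pg_dump --schema-only -t tm_tm_pairs mydb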


--
greg

Re: BUG #8013: Memory leak

From
Peter Geoghegan
Date:
On 30 March 2013 14:01,  <stiening@comcast.net> wrote:
> env CFLAGS='-O3 -march=native' ./configure --with-segsize=128

Why did you build with a segment size of 128GB? Postgres binaries
built with a non-standard segment size are not widely used.
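
(If you want to verify what segment size a cluster was actually built
with, pg_controldata reports it; a sketch, assuming the stock 8kB block
size, where 131072 blocks per segment would be the default 1GB:

    pg_controldata $PGDATA | grep -i segment
)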


--
Regards,
Peter Geoghegan

Re: BUG #8013: Memory leak

From
ajmcello
Date:
unsubscribe


On Sat, Mar 30, 2013 at 8:42 PM, Jeff Lake <admin@michiganwxsystem.com> wrote:

> memory leak with 500 million rows??
> sounds like too big of a db
>
> -----------------------------------------------------
> -Jeff Lake K8JSL
> MichiganWxSystem.com
> AllisonHouse.com
> TheWeatherCenter.net
> GRLevelXStuff.com
>
>
> On 3/30/2013 10:01, stiening@comcast.net wrote:
>
>> The following bug has been logged on the website:
>>
>> Bug reference:      8013
>> Logged by:          Rae Stiening
>> Email address:      stiening@comcast.net
>> PostgreSQL version: 9.2.3
>> Operating system:   Suse Linux 9.3
>> Description:
>>
>> The query:
>> SELECT pts_key,count(*)
>>           FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
>> pts_key
>>
>> Which is executed as:
>>   GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280
>> width=4)
>>     Filter: (count(*) <> 1)
>>     ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>>           Sort Key: pts_key
>>           ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80
>> rows=470993280
>> width=4)
>>
>> uses all available memory (32GB).  pts_key is an integer and the table
>> contains about 500 million rows.
>>
>> PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
>> 4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit
>>
>>
>> compile options:
>>
>> env CFLAGS='-O3 -march=native' ./configure --with-segsize=128
>>
>>
>>
>>
>>
>
>
> --
> Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-bugs
>

Re: BUG #8013: Memory leak

From
Tom Lane
Date:
stiening@comcast.net writes:
> The query:
> SELECT pts_key,count(*)
>          FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
> pts_key

> Which is executed as:
>  GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>    Filter: (count(*) <> 1)
>    ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>          Sort Key: pts_key
>          ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
> width=4)

> uses all available memory (32GB).  pts_key is an integer and the table
> contains about 500 million rows.

That query plan doesn't look like it should produce any undue memory
consumption on the server side.  How many distinct values of pts_key are
there, and what are you using to collect the query result client-side?
psql, for instance, would try to absorb the whole query result
in-memory, so there'd be a lot of memory consumed by psql if there are
a lot of pts_key values.  (You can set FETCH_COUNT to alleviate that.)
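
For example, a sketch of the FETCH_COUNT approach in psql:

    -- fetch the result through a cursor, 10000 rows at a time, instead
    -- of absorbing the entire result set into client memory at once
    \set FETCH_COUNT 10000
    SELECT pts_key, count(*)
      FROM tm_tm_pairs
     GROUP BY pts_key
    HAVING count(*) != 1
     ORDER BY pts_key;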

A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
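
For instance, to check the current value and try a saner one for just
this session:

    SHOW work_mem;            -- what each sort step is allowed to use
    SET work_mem = '256MB';   -- session-local override for testing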

            regards, tom lane

Re: BUG #8013: Memory leak

From
Jeff Janes
Date:
On Sunday, March 31, 2013, Tom Lane wrote:

>
> A different line of thought is that you might have set work_mem to
> an unreasonably large value --- the sort step will happily try to
> consume work_mem worth of memory.
>

I don't think that can be the problem here, because the memtuples array
can never be more than 1GB even if work_mem is much larger than that.
Even if his sort were using pass-by-reference values (I don't think it
would be), the tuples should be skinny enough that that limitation
would prevent the sort from blowing out memory.

Cheers,

Jeff

Re: BUG #8013: Memory leak

From
Daniel Farina
Date:
On Sat, Mar 30, 2013 at 8:41 PM, ajmcello <ajmcello78@gmail.com> wrote:
> unsubscribe

That's not how you unsubscribe from this list; you'll want to do that here:

http://www.postgresql.org/community/lists/subscribe/

--
fdr

Re: BUG #8013: Memory leak

From
Rae Stiening
Date:
I found that by replacing the postgresql.conf file with the original
that is present following an initdb, the query ran without a memory
problem.  I looked at the "bad" configuration file and couldn't see
anything wrong with it.  I regret that, because of a typing error, the
bad file was accidentally deleted.  I have subsequently been unable to
reproduce the bad behavior.  After editing the original file to be the
same as what I had intended for the erased file, the query still ran
without a problem.  Memory usage topped out at about 2.1GB.  Even
setting work_mem and maintenance_work_mem to 30000MB did not change the
maximum memory usage during the query.
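
(For anyone debugging something similar: instead of eyeballing
postgresql.conf, you can ask the server which settings are not at
their built-in defaults; a sketch using the standard pg_settings view:

    SELECT name, setting, source
      FROM pg_settings
     WHERE source <> 'default';
)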

Regards,
Rae Stiening





On Mar 31, 2013, at 1:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> stiening@comcast.net writes:
>> The query:
>> SELECT pts_key,count(*)
>>         FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
>> pts_key
>
>> Which is executed as:
>> GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>>   Filter: (count(*) <> 1)
>>   ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>>         Sort Key: pts_key
>>         ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
>> width=4)
>
>> uses all available memory (32GB).  pts_key is an integer and the table
>> contains about 500 million rows.
>
> That query plan doesn't look like it should produce any undue memory
> consumption on the server side.  How many distinct values of pts_key are
> there, and what are you using to collect the query result client-side?
> psql, for instance, would try to absorb the whole query result
> in-memory, so there'd be a lot of memory consumed by psql if there are
> a lot of pts_key values.  (You can set FETCH_COUNT to alleviate that.)
>
> A different line of thought is that you might have set work_mem to
> an unreasonably large value --- the sort step will happily try to
> consume work_mem worth of memory.
>
>             regards, tom lane