Re: BUG #8013: Memory leak - Mailing list pgsql-bugs

From Rae Stiening
Subject Re: BUG #8013: Memory leak
Date
Msg-id FDF3405F-45DB-434B-8764-757F563DFDB2@comcast.net
In response to Re: BUG #8013: Memory leak  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-bugs
I found that by replacing the postgresql.conf file with the original that is present following an initdb, the query ran without a memory problem.  I looked at the "bad" configuration file and couldn't see anything wrong with it.  I regret that, because of a typing error, the bad file was accidentally deleted.  I have subsequently been unable to reproduce the bad behavior.  After editing the original file to be the same as what I had intended for the erased file, the query still ran without a problem.  Memory usage topped out at about 2.1 GB.  Even setting work_mem and maintenance_work_mem to 30000MB did not change the maximum memory usage during the query.

Regards,
Rae Stiening





On Mar 31, 2013, at 1:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> stiening@comcast.net writes:
>> The query:
>> SELECT pts_key,count(*)
>>         FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) != 1 ORDER BY
>> pts_key
>
>> Which is executed as:
>> GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>>   Filter: (count(*) <> 1)
>>   ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>>         Sort Key: pts_key
>>         ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280
>> width=4)
>
>> uses all available memory (32GB).  pts_key is an integer and the table
>> contains about 500 million rows.
>
> That query plan doesn't look like it should produce any undue memory
> consumption on the server side.  How many distinct values of pts_key are
> there, and what are you using to collect the query result client-side?
> psql, for instance, would try to absorb the whole query result
> in-memory, so there'd be a lot of memory consumed by psql if there are
> a lot of pts_key values.  (You can set FETCH_COUNT to alleviate that.)
>
> A different line of thought is that you might have set work_mem to
> an unreasonably large value --- the sort step will happily try to
> consume work_mem worth of memory.
>
>             regards, tom lane
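
[Editor's note: for readers landing on this thread with the same symptom, the two knobs Tom mentions can be exercised as sketched below from a psql session. The 10000-row batch size and the 256MB work_mem value are illustrative choices, not values taken from this report.]

    -- Stream the result to the client in batches instead of buffering
    -- the entire result set in psql's memory (psql variable):
    \set FETCH_COUNT 10000

    -- Cap the memory each sort/hash step may use server-side
    -- before spilling to disk (value is illustrative):
    SET work_mem = '256MB';

    SELECT pts_key, count(*)
      FROM tm_tm_pairs
     GROUP BY pts_key
    HAVING count(*) <> 1
     ORDER BY pts_key;

With FETCH_COUNT set, psql fetches the rows through a cursor in batches, so client memory stays bounded even when the HAVING clause passes a large number of pts_key groups.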
