I found that by replacing the postgresql.conf file with the original that
is present after an initdb, the query ran without a memory problem. I
looked at the "bad" configuration file and couldn't see anything wrong
with it. Unfortunately, because of a typing error, the bad file was
accidentally deleted, and I have since been unable to reproduce the bad
behavior. After editing the original file to match what I had intended
for the erased file, the query still ran without a problem. Memory usage
topped out at about 2.1 GB. Even setting work_mem and
maintenance_work_mem to 30000MB did not change the maximum memory usage
during the query.
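In case it helps anyone reproduce this, the settings can be checked and
varied from a psql session roughly like this (session-level SET is shown
here as a stand-in for the config-file edits described above; 30000MB is
just the extreme value I tried, not a recommendation):

    -- Confirm what the server is actually using for this session.
    SHOW work_mem;
    SHOW maintenance_work_mem;

    -- Session-level equivalent of the config-file change.
    SET work_mem = '30000MB';
    SET maintenance_work_mem = '30000MB';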
Regards,
Rae Stiening
On Mar 31, 2013, at 1:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> stiening@comcast.net writes:
>> The query:
>> SELECT pts_key,count(*)
>> FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) != 1 ORDER BY
>> pts_key
>
>> Which is executed as:
>> GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>>   Filter: (count(*) <> 1)
>>   ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>>         Sort Key: pts_key
>>         ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)
>
>> uses all available memory (32GB). pts_key is an integer and the table
>> contains about 500 million rows.
>
> That query plan doesn't look like it should produce any undue memory
> consumption on the server side. How many distinct values of pts_key are
> there, and what are you using to collect the query result client-side?
> psql, for instance, would try to absorb the whole query result
> in-memory, so there'd be a lot of memory consumed by psql if there are
> a lot of pts_key values. (You can set FETCH_COUNT to alleviate that.)
>
> A different line of thought is that you might have set work_mem to
> an unreasonably large value --- the sort step will happily try to
> consume work_mem worth of memory.
>
> regards, tom lane
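
P.S. The FETCH_COUNT suggestion above can be tried from psql roughly like
this (a minimal sketch; 10000 is just an example batch size):

    -- Make psql fetch the result through a cursor in batches instead of
    -- absorbing the whole result set in client memory.
    \set FETCH_COUNT 10000
    SELECT pts_key, count(*)
    FROM tm_tm_pairs
    GROUP BY pts_key
    HAVING count(*) != 1
    ORDER BY pts_key;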