Re: Tuning Postgres for single user manipulating large amounts of data - Mailing list pgsql-general

From tv@fuzzy.cz
Subject Re: Tuning Postgres for single user manipulating large amounts of data
Date
Msg-id 2e6229cad881cc9f69461fd998add072.squirrel@sq.gransy.com
In response to Re: Tuning Postgres for single user manipulating large amounts of data  ("Reid Thompson" <Reid.Thompson@ateb.com>)
List pgsql-general
> Hi (sorry for the double posting, I thought I'd used the wrong email
> address but both have been posted anyway). As far as the db is concerned
> I'm just reading data and then writing it to a Lucene search index (which
> is outside of the database), but my laptop is just a test machine; I want
> to run the same code on production. Why would work_mem not be safe at 64MB
> if I have 2GB of memory - what have I got to be careful of?
>
> Paul

Well, the problem with work_mem is that it's 'per operation', so a query
may actually need several work_mem-sized chunks of memory. And it's not
just sorting: a query containing a hash join, a hash aggregate and a sort
may consume up to 3x work_mem.
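
Just to illustrate (the tables and columns are made up), a single query
like this can easily end up with all three of those nodes in its plan,
and each of them is allowed up to work_mem:

  -- hypothetical example - each node below may use up to work_mem
  SELECT c.region, count(*) AS orders
    FROM orders o
    JOIN customers c ON c.id = o.customer_id  -- hash join
   GROUP BY c.region                           -- hash aggregate
   ORDER BY orders DESC;                       -- sort

EXPLAIN ANALYZE will show you which nodes the planner actually chose and
(for sorts) how much memory they used.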

And if you have a lot of concurrent users running such queries, you may
easily run out of memory - in that case the dreaded OOM killer comes along
and kills one of the processes (usually the postmaster, which means the
database goes belly up). Not sure how the OOM killer behaves on MacOS.
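
Just as a rough back-of-the-envelope (numbers purely illustrative): with
20 connections each running a query that uses 3 work_mem-sized operations
at 64MB, that's 20 x 3 x 64MB = ~3.8GB - well over 2GB of RAM, and that's
before shared_buffers and the OS itself.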

But since, as you said, there will be a single user running queries on the
database, you can set work_mem pretty high. It depends on the queries,
though - a complicated query may still consume a lot of memory.
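
If you want to play it safe, you don't have to raise it globally in
postgresql.conf at all - you can set it just for the session (or
transaction) doing the heavy lifting, e.g. (the value is just an example):

  SET work_mem = '256MB';         -- for the rest of this session

or

  BEGIN;
  SET LOCAL work_mem = '256MB';   -- reverts at COMMIT/ROLLBACK
  -- run the big query here
  COMMIT;

That way you only pay the higher limit where you actually need it.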

Tomas

