Re: Huge amount of memory consumed during transaction - Mailing list pgsql-performance

From: Erik Jones
Subject: Re: Huge amount of memory consumed during transaction
Date:
Msg-id: EAC95D94-3479-494E-B107-F8E366E8517C@myemma.com
In response to: Re: Huge amount of memory consumed during transaction (henk de wit <henk53602@hotmail.com>)
Responses: Re: Huge amount of memory consumed during transaction
List: pgsql-performance
On Oct 12, 2007, at 4:09 PM, henk de wit wrote:

> > It looks to me like you have work_mem set optimistically large. This
> > query seems to be doing *many* large sorts and hashes:
>
> I have work_mem set to 256MB. Reading in PG documentation I now
> realize that "several sort or hash operations might be running in
> parallel". So this is most likely the problem, although I don't
> really understand why memory never seems to increase for any of the
> other queries (not executed in a transaction). Some of these are at
> least the size of the query that is giving problems.

Wow.  That's inordinately high.  I'd recommend dropping that to 32-43MB.
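A minimal sketch of how to follow that advice without penalizing the one big query: keep the global work_mem modest and raise it only inside the transaction that needs the large sorts. The values below are illustrative, not tuned recommendations for any particular workload.

```sql
-- Session default kept modest, per the advice above.
SET work_mem = '32MB';

BEGIN;
-- SET LOCAL applies only to the current transaction;
-- the setting reverts automatically on COMMIT or ROLLBACK.
SET LOCAL work_mem = '256MB';
-- ... run the large sorting/hashing query here ...
COMMIT;
```

This keeps the many concurrent ordinary queries from each potentially grabbing 256MB per sort or hash node, while the one memory-hungry query still gets what it needs.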

>
> Btw, is there some way to determine up front how many sort or hash
> operations will be running in parallel for a given query?

EXPLAIN is your friend in that respect.
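To expand on that: each Sort, Hash, or HashAggregate node in a plan can claim up to work_mem on its own, so counting those nodes gives a rough upper bound on a query's memory use. A hedged illustration (the table and columns here are made up, not from the original thread):

```sql
EXPLAIN
SELECT customer_id, sum(amount)
FROM   orders
GROUP  BY customer_id
ORDER  BY sum(amount) DESC;

-- A plan shaped like:
--   Sort
--     ->  HashAggregate
--           ->  Seq Scan on orders
-- has two memory-using nodes (the Sort and the HashAggregate),
-- i.e. up to 2 x work_mem for this one query alone.
```

EXPLAIN ANALYZE additionally reports whether each sort fit in memory or spilled to disk, which helps confirm whether work_mem is actually large enough for a given node.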

Erik Jones

Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com


