From: Jeff Trout
Subject: Re: optimization ideas for frequent, large(ish) updates
Date:
Msg-id: 460CEC02-5FDB-11D8-8142-000D9366F0C4@jefftrout.com
In response to: Re: optimization ideas for frequent, large(ish) updates ("Marinos J. Yannikos" <mjy@geizhals.at>)
Responses: Re: optimization ideas for frequent, large(ish) updates ("Marinos J. Yannikos" <mjy@geizhals.at>)
List: pgsql-performance
On Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote:

> Josh Berkus wrote:
>
>> 800MB for sort mem?   Are you sure you typed that correctly?   You
>> must be counting on not having a lot of concurrent queries.  It sure
>> will speed up index updating, though!
>
> 800MB is correct, yes... There are usually only 10-30 postgres
> processes  active (imagine 5-10 people working on the web front-end
> while cron jobs access the db occasionally). Very few queries can use
> such large amounts of memory for sorting, but they do exist.
>

Remember that sort_mem is a per-sort-step limit, not a per-backend
budget.  It is not "you can allocate up to 800MB total, so if you need
1 meg, use one meg": every sort in a plan can grow to 800MB on its
own, and some queries end up having several sort steps.
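
To put numbers on it with the figures from this thread (an
illustrative worst case, assuming every backend happens to be sorting
at once):

    10 active backends x 2 sort steps each x 800MB = 16,000MB

which is far more than most machines can absorb without swapping.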

For sort_mem it is best to set the system default to a sensible value
for most queries, and then have your reporting queries (or other
memory-hungry ones) raise sort_mem for their own session
(SET sort_mem = 800000); only that session will use the looney
sort_mem.
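
Something like this, as a sketch (7.x-era syntax, where sort_mem is
measured in KB; the table and query are made up for illustration):

    -- postgresql.conf keeps a modest default for everyone, e.g.:
    --   sort_mem = 8192                -- 8MB per sort step

    -- The reporting session raises it for itself only:
    SET sort_mem = 800000;             -- ~800MB (value is in KB)
    SELECT customer_id, sum(amount) AS total
      FROM orders
     GROUP BY customer_id
     ORDER BY total DESC;              -- the big sort gets the big budget
    RESET sort_mem;                    -- back to the server default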

It would be interesting to know if your machine is swapping.
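
One quick way to check (assuming a Linux box with the stock procps
vmstat):

    $ vmstat 1 5
    # sustained nonzero si/so (swap-in/swap-out) columns while the
    # big queries run means the 800MB sorts are pushing you into swap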
--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/

