Re: Slow query with a lot of data - Mailing list pgsql-performance

From Moritz Onken
Subject Re: Slow query with a lot of data
Date
Msg-id 6F57A01F-F83F-477F-B86B-617D0F843FB1@houseofdesign.de
In response to Re: Slow query with a lot of data  (Moritz Onken <onken@houseofdesign.de>)
Responses Re: Slow query with a lot of data  (Zoltan Boszormenyi <zb@cybertec.at>)
Re: Slow query with a lot of data  ("Scott Carey" <scott@richrelevance.com>)
List pgsql-performance
Am 19.08.2008 um 17:23 schrieb Moritz Onken:

>
> Am 19.08.2008 um 16:49 schrieb Scott Carey:
>
>> What is your work_mem set to?  The default?
>>
>> Try increasing it significantly if you have the RAM and seeing if
>> that affects the explain plan.  You may even want to set it to a
>> number larger than the RAM you have just to see what happens.  In
>> all honesty, it may be faster to overflow to OS swap space than
>> sort too many rows, but ONLY if it changes the plan to a
>> significantly more efficient one.
>>
>> Simply type
>> SET work_mem = '500MB';
>> before running your explain.  Set it to even more RAM if you have
>> the space for this experiment.
>>
>> In my experience the performance of aggregates on large tables is
>> significantly affected by work_mem, and the optimizer will choose
>> poorly without enough of it.  It will rule out plans that may be
>> fast enough when overflowing to disk, in preference to colossal
>> sized sorts (which likely also overflow to disk but take hours or
>> days).
>
> Thanks for that advice, but the explain output is not different :-(
>
> moritz
>
> --

Hi,

I ran the query with work_mem set to 3000MB. The explain output
didn't change, but the query now runs much faster (about 10 times).
Swap isn't used. How can you explain that?

moritz
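[For readers of the archive: a generic way to check this, not part of the original thread. EXPLAIN alone shows only the plan shape, which can stay identical across work_mem settings; EXPLAIN ANALYZE additionally executes the query and reports, for each Sort node, whether it ran in memory or spilled to temp files. The table and column names below are placeholders.]

```sql
-- Raise work_mem for this session only (value is illustrative).
SET work_mem = '500MB';

-- EXPLAIN ANALYZE runs the query and annotates each Sort node with
-- its sort method: "Sort Method: quicksort  Memory: ...kB" means it
-- fit in work_mem; "Sort Method: external merge  Disk: ...kB" means
-- it spilled to temp files on disk.
EXPLAIN ANALYZE
SELECT some_column, count(*)   -- hypothetical query
FROM some_table
GROUP BY some_column;
```

This would account for the observation above: with a higher work_mem, the same plan can execute sorts and hashes entirely in memory instead of spilling to disk, giving a large speedup with no visible change in the plain EXPLAIN output.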
