Re: Optimising a query - Mailing list pgsql-performance

From Gregory Stark
Subject Re: Optimising a query
Date
Msg-id 87wsrbhv83.fsf@oxford.xeocode.com
Whole thread Raw
In response to Re: Optimising a query  (Richard Huxton <dev@archonet.com>)
Responses Re: Optimising a query
List pgsql-performance
"Richard Huxton" <dev@archonet.com> writes:

> Paul Lambert wrote:
>
>> "  ->  Sort  (cost=30197.98..30714.85 rows=206748 width=16) (actual time=5949.691..7018.931 rows=206748 loops=1)"
>> "        Sort Key: dealer_id, year_id, subledger_id, account_id"
>> "        Sort Method:  external merge  Disk: 8880kB"

> Before that though, try issuing a "SET work_mem = '9MB'" before running your
> query. If that doesn't change the plan step up gradually. You should be able to
> get the sort stage to happen in RAM rather than on disk (see "Sort Method"
> above).

FWIW you'll probably need more than that. Try something more like 20MB.

Also, note that you can change this with SET for just this connection, or even
just this query, and then reset it to the normal value afterwards (or use SET
LOCAL to scope it to the current transaction). You don't have to change it in
the config file and restart the whole server.
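For example (a session sketch; the SELECT is a stand-in for your actual query):

```sql
-- Raise work_mem for this session only; the server-wide default is untouched.
SET work_mem = '20MB';

-- ... run the query here ...

RESET work_mem;  -- back to the configured default

-- Or scope the change to a single transaction with SET LOCAL:
BEGIN;
SET LOCAL work_mem = '20MB';
-- ... run the query here ...
COMMIT;  -- work_mem reverts automatically at transaction end
```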

Also, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT
unfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP
BY. In particular, it doesn't support hash aggregates, which, if your work_mem
is large enough, might work for you here.
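Concretely, the rewrite looks like this (using the columns from the Sort Key in
your plan; the table name is a stand-in):

```sql
-- Instead of:
SELECT DISTINCT dealer_id, year_id, subledger_id, account_id
FROM some_table;

-- ...try the equivalent GROUP BY form, which the planner can satisfy
-- with a HashAggregate when work_mem is large enough to hold the groups,
-- avoiding the sort entirely:
SELECT dealer_id, year_id, subledger_id, account_id
FROM some_table
GROUP BY dealer_id, year_id, subledger_id, account_id;
```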

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
