Re: Slow query with a lot of data - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: Slow query with a lot of data
Date
Msg-id b42b73150808211008g43ac1fcer26e5eaa2420ab6ca@mail.gmail.com
Whole thread Raw
In response to Re: Slow query with a lot of data  (Moritz Onken <onken@houseofdesign.de>)
Responses Re: Slow query with a lot of data
List pgsql-performance
On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken <onken@houseofdesign.de> wrote:
>
> Am 21.08.2008 um 16:39 schrieb Scott Carey:
>
>> It looks to me like the work_mem did have an effect.
>>
>> Your earlier queries had a sort followed by group aggregate at the top,
>> and now its a hash-aggregate.  So the query plan DID change.  That is likely
>> where the first 10x performance gain came from.
>
> But it didn't change as I added the sub select.
> Thank you guys very much. The speed is now OK and I hope I can finish this
> work soon.
>
> But there is another problem. If I run this query without the limitation of
> the user id, postgres consumes about 150GB of disk space and dies with
>
> ERROR:  could not write block 25305351 of temporary file: No space left on
> device
>
> After that the available disk space is back to normal.
>
> Is this normal? The resulting table (setup1) is not bigger than 1.5 GB.
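
(Editor's note: a minimal sketch of checking whether work_mem changes the plan, as Scott described above. The GROUP BY columns here are hypothetical; only the table name setup1 appears in the thread.)

```sql
-- With a small work_mem the planner may choose Sort + GroupAggregate;
-- raising it can allow a HashAggregate instead.
SET work_mem = '256MB';  -- session-only setting; size it to available RAM

-- EXPLAIN shows which aggregate strategy the planner picked,
-- without running the query.
EXPLAIN SELECT user_id, count(*)
FROM setup1
GROUP BY user_id;
```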

Maybe the result is simply too big.  If you EXPLAIN the query, you will
get an estimate of the number of rows returned.  If that estimate is
huge, you need to rethink your query, or use something like a cursor to
browse the result in batches.
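
(Editor's note: a sketch of the cursor approach Merlin suggests, assuming a query against the setup1 table mentioned above; the cursor name and batch size are arbitrary.)

```sql
-- A cursor lets the client consume a large result set incrementally,
-- instead of materializing the whole thing at once.
BEGIN;

DECLARE setup_cur CURSOR FOR
    SELECT * FROM setup1;   -- the real query goes here

FETCH 1000 FROM setup_cur;  -- repeat until no rows are returned

CLOSE setup_cur;
COMMIT;
```

Cursors must be used inside a transaction block; each FETCH pulls the next batch, so the client never holds more than one batch in memory.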

merlin
