Re: extremely bad select performance on huge table - Mailing list pgsql-performance

From Björn Wittich
Subject Re: extremely bad select performance on huge table
Date
Msg-id 5449E0C0.60904@gmx.de
Whole thread Raw
In response to Re: extremely bad select performance on huge table  (Björn Wittich <Bjoern_Wittich@gmx.de>)
List pgsql-performance
Hi,

with a cursor the behaviour is the same, so I would like to ask a more
general question:

My client needs to receive data from a huge join. The client waits a
very long time before it can fetch the first row. By the time retrieval
starts, after about 10 minutes, the client itself is I/O-bound, so it
cannot make up for the elapsed time.

My workaround was to build a queue of small joins: assuming the huge
join delivers 10 million rows, I now have 10000 joins delivering 1000
rows each. So the general question is: Is there a better solution than
my crude workaround?
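The workaround can be sketched as follows. This is a minimal
illustration, not the original code from the thread: the table and
column names (big_a, big_b, id) are placeholders, and it assumes an
integer join key with a known range, so the huge join can be cut into
consecutive key windows that the client consumes one at a time.

```python
# Sketch of the "queue of small joins" workaround: instead of one huge
# join over the full key range, generate many small joins over
# consecutive key windows, so the client can start fetching rows from
# the first window almost immediately instead of waiting ~10 minutes
# for the server to produce the whole result.
# big_a, big_b, and id are assumed names, not from the thread.

def batch_queries(min_id, max_id, batch_size):
    """Yield one small-join SQL statement per key window."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield (
            "SELECT a.*, b.* FROM big_a a JOIN big_b b ON a.id = b.id "
            f"WHERE a.id BETWEEN {lo} AND {hi}"
        )
        lo = hi + 1

# 10 million keys in windows of 1000 rows -> 10000 small queries,
# matching the numbers from the mail.
queries = list(batch_queries(1, 10_000_000, 1000))
```

Each generated statement would then be executed in turn (or from a work
queue) while earlier results are already being processed, trading one
long server-side wait for many short ones.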


Thank you

> Hi Kevin,
>
>
> this is what I need (I think). Hopefully a cursor can operate on a
> join. I will read the documentation now.
>
> Thanks!
>
>
> Björn
>
> On 22.10.2014 16:53, Kevin Grittner wrote:
>> Björn Wittich <Bjoern_Wittich@gmx.de> wrote:
>>
>>> I do not want the db server to prepare the whole query result at
>>> once, my intention is that the asynchronous retrieval starts as
>>> fast as possible.
>> Then you probably should be using a cursor.
>>
>> --
>> Kevin Grittner
>> EDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>>
>
>
>


