Re: Performance issues when the number of records are around 10 Million - Mailing list pgsql-performance

From: Kevin Grittner
Subject: Re: Performance issues when the number of records are around 10 Million
Msg-id: 4BEA6D2902000025000315F9@gw.wicourts.gov
In response to: "Kevin Grittner" <Kevin.Grittner@wicourts.gov>
List: pgsql-performance

venu madhav <venutaurus539@gmail.com> wrote:

>> > If the records are more in the interval,
>>
>> How do you know that before you run your query?
>>
> I calculate the count first.

This and other comments suggest that the data is totally static
while this application is running.  Is that correct?

> If [I] generate all the pages at once, to retrieve all the 10 M
> records at once, it would take much longer time

Are you sure of that?  It seems to me that your current approach is
going to read all ten million rows once for the count and again for
the offset.  It might actually be faster to make a single pass over
them and build all the pages.
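A minimal sketch of the single-pass idea, using SQLite from Python as a stand-in for PostgreSQL (the `events` table, its columns, and the row counts are all hypothetical): the ordered result is streamed once and chunked into pages, rather than running COUNT(*) plus one OFFSET query per page.

```python
import sqlite3

# Hypothetical table standing in for the real 10M-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 101)])

PAGE_SIZE = 10

# One pass: stream the ordered rows and slice them into pages as they
# arrive, instead of re-scanning with a new OFFSET for every page.
pages = []
page = []
for row in conn.execute("SELECT id, payload FROM events ORDER BY id"):
    page.append(row)
    if len(page) == PAGE_SIZE:
        pages.append(page)
        page = []
if page:  # trailing partial page, if the row count isn't a multiple
    pages.append(page)

print(len(pages))
```

The point is only that the table is scanned once, no matter how many pages come out of it.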

Also, you didn't address the issue of storing enough information on
the page to read off either edge in the desired sequence with just a
LIMIT and no offset.  "Last page" or "page up" would need to reverse
the direction on the ORDER BY.  This would be very fast if you have
appropriate indexes.  Your current technique can never be made very
fast.
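The LIMIT-without-OFFSET technique above can be sketched as follows, again with SQLite from Python as a stand-in and a hypothetical `events` table: each page remembers the key at its edges, "next page" seeks past the last key shown, and "previous page" reverses the ORDER BY and then flips the fetched rows back into display order. With an index on the sort key, each page is a small index range scan regardless of how deep into the result you are.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 101)])

PAGE = 10

def next_page(last_id):
    # "Next page": seek past the last key on screen; LIMIT only, no OFFSET.
    return conn.execute(
        "SELECT id FROM events WHERE id > ? ORDER BY id ASC LIMIT ?",
        (last_id, PAGE)).fetchall()

def prev_page(first_id):
    # "Previous page" / "page up": reverse the ORDER BY direction,
    # take the LIMIT, then flip the rows back into ascending order.
    rows = conn.execute(
        "SELECT id FROM events WHERE id < ? ORDER BY id DESC LIMIT ?",
        (first_id, PAGE)).fetchall()
    return rows[::-1]

page2 = next_page(10)            # rows 11..20
page1 = prev_page(page2[0][0])   # back to rows 1..10
```

Unlike OFFSET, the cost here does not grow with the page number, which is why this can stay fast on ten million rows while the count-then-offset approach cannot.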

-Kevin
