Re: Are 50 million rows a problem for postgres ? - Mailing list pgsql-admin

From: Tom Lane
Subject: Re: Are 50 million rows a problem for postgres ?
Date:
Msg-id: 13996.1063034040@sss.pgh.pa.us
In response to: Re: Are 50 million rows a problem for postgres ? ("Donald Fraser" <demolish@cwgsy.net>)
List: pgsql-admin

"Donald Fraser" <demolish@cwgsy.net> writes:
> My analysis at the time was that to access random records, performance
> deteriorated the further away the records that you were accessing were
> from the beginning of the index. For example using a query that had
> say OFFSET 250000 would cause large delays.

Well, yeah.  OFFSET implies generating and discarding that number of
records.  AFAICS there isn't any shortcut for this, even in a query
that's just an indexscan, since the index alone can't tell us whether
any given record would actually be returned.
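To make that concrete, here is a minimal sketch of the kind of query being
discussed, assuming a hypothetical table "bigtable" with an index on "id"
(the table, column, and LIMIT value are illustrative only):

    -- the executor can walk the index in order, but every row before the
    -- requested window is still generated and then thrown away
    SELECT *
      FROM bigtable
     ORDER BY id
    OFFSET 250000
     LIMIT 50;

The ORDER BY can be satisfied by an index scan on "id", yet the 250000
skipped rows must still be produced one by one before the 50 requested rows
appear, which is why the delay grows with the size of the offset.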

            regards, tom lane
