> On Mon, 8 Sep 2003, Vasilis Ventirozos wrote:
>
> > Hi all, I work in a telco and I have a huge amount of data (50 million rows),
> > but I am seeing poor performance on huge tables with PostgreSQL.
> > Are 50 million rows the "limit" of PostgreSQL (with good performance)?
> > In 2004 I am expecting 2 billion records, so I have to do something.
> > Does anyone have a huge database that I could ask about some issues?
> >
> > My hardware is good and my indexes are good, so please don't answer with
> > something like "use vacuum" :)
>
I did some performance testing back on PostgreSQL 7.2 with a table of 350,000 records.
My analysis at the time was that, for random access, performance deteriorated the
further the requested records were from the beginning of the index. For example, a
query with OFFSET 250000 caused large delays; on a text index these delays were in
the order of 60 seconds, which was unacceptable for the application I was developing.
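To illustrate the pattern I mean (the table and column names here are only
placeholders, not my real schema), the slow queries looked roughly like this:

    SELECT *
    FROM   calls
    ORDER  BY description
    LIMIT  100 OFFSET 250000;  -- the backend still has to step past 250,000 rows

Even with an index on description, every row before the offset still has to be
walked over, so the cost grows with the size of the offset.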
To overcome this problem I did away with queries that used OFFSET and developed
queries that access the data relative to the last record previously fetched. The only
drawback was that I had to make the indexes unique by appending a unique id column as
the last column of the index. I now have no problem scanning a table in excess of
1 million records in small chunks at a time.
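A rough sketch of the idea (again with made-up table and column names, and assuming
the unique id column is called id) looks like this; each chunk restarts from where
the previous one left off instead of counting past all the earlier rows:

    -- unique composite index so that (description, id) gives a total ordering
    CREATE UNIQUE INDEX calls_desc_id_idx ON calls (description, id);

    -- first chunk
    SELECT * FROM calls
    ORDER  BY description, id
    LIMIT  100;

    -- next chunk: plug in the last (description, id) seen in the previous chunk
    SELECT * FROM calls
    WHERE  description > 'last description seen'
       OR (description = 'last description seen' AND id > 12345)
    ORDER  BY description, id
    LIMIT  100;

Because the WHERE clause matches the leading columns of the unique index, each chunk
can be answered by an index scan that starts close to where the last one stopped.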
Anyway, without seeing what sort of query you are having problems with, nobody on
these email lists will be able to fully help you.
At a minimum we need to see the SQL statement and the output of EXPLAIN.
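For example, if the slow query were a lookup on a call-records table (a made-up
example, substitute your real query), post something like:

    EXPLAIN
    SELECT count(*)
    FROM   calls
    WHERE  start_time >= '2003-09-01';

together with the plan it prints.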
Regards
Donald Fraser.