Re: [GENERAL] identifying performance hits: how to ??? - Mailing list pgsql-general

From The Hermit Hacker
Subject Re: [GENERAL] identifying performance hits: how to ???
Date
Msg-id Pine.BSF.4.21.0001121356380.46499-100000@thelab.hub.org
Whole thread Raw
In response to Re: [GENERAL] identifying performance hits: how to ???  (Karl DeBisschop <kdebisschop@range.infoplease.com>)
List pgsql-general
On Wed, 12 Jan 2000, Karl DeBisschop wrote:

>
> >  Anyone know if read performance on a postgres database decreases at
> >  an increasing rate, as the number of stored records increase?
> >
> >  It seems as if I'm missing something fundamental... maybe I am... is
> >  some kind of database cleanup necessary?  With less than ten
> >  records, the grid populates very quickly.  Beyond that, performance
> >  slows to a crawl, until it _seems_ that every new record doubles the
> >  time needed to retrieve...
>
> Are you using indexes?
>
> Are you vacuuming?
>
> I may have incorrectly inferred table sizes and such, but the behavior
> you describe seems odd - we typically work with hundreds of thousands
> of entries in our tables with good results (though things do slow down
> for the one DB we use with tens of millions of entries).

An example of a large database that people can see in action... the search
engine we are using on PostgreSQL, when fully populated, works out to
around 6 million records... and is reasonably quick...
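As a sketch of the maintenance Karl is pointing at (the table and column names here are hypothetical, purely for illustration):

```sql
-- Hypothetical table; names are illustrative only.
CREATE INDEX grid_records_account_idx ON grid_records (account_id);

-- Reclaim space left by deleted/updated rows and refresh planner
-- statistics. Without periodic VACUUM, dead tuples accumulate and
-- sequential scans get slower with every update, which matches the
-- "every new record doubles the time" symptom described above.
VACUUM ANALYZE grid_records;

-- Check whether the planner actually uses the index:
EXPLAIN SELECT * FROM grid_records WHERE account_id = 42;
```

If EXPLAIN still shows a sequential scan after vacuuming, the index is not being used and that is the place to look next.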


