On Wed, 12 Jan 2000, Karl DeBisschop wrote:
>
> > Anyone know if read performance on a postgres database decreases at
> > an increasing rate as the number of stored records increases?
> >
> > It seems as if I'm missing something fundamental... maybe I am... is
> > some kind of database cleanup necessary? With less than ten
> > records, the grid populates very quickly. Beyond that, performance
> > slows to a crawl, until it _seems_ that every new record doubles the
> > time needed to retrieve...
>
> Are you using indexes?
>
> Are you vacuuming?
>
> I may have incorrectly inferred table sizes and such, but the behavior
> you describe seems odd - we typically work with hundreds of thousands
> of entries in our tables with good results (though things do slow down
> for the one DB we use with tens of millions of entries).
An example of a large database that people can see in action... the search
engine we are using on PostgreSQL, when fully populated, works out to
around 6 million records... and is reasonably quick...
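
For the original poster, a minimal sketch of the index/vacuum suggestions
quoted above (the table and column names are hypothetical, just for
illustration):

```sql
-- Hypothetical table/column names, for illustration only.
-- An index on the column used in WHERE clauses lets lookups use an
-- index scan instead of a sequential scan over the whole table:
CREATE INDEX records_grid_id_idx ON records (grid_id);

-- Reclaim space from dead rows and refresh the planner's statistics;
-- without periodic VACUUMs, updated/deleted rows accumulate and
-- every scan gets slower:
VACUUM ANALYZE records;

-- EXPLAIN shows whether the planner is actually using the index:
EXPLAIN SELECT * FROM records WHERE grid_id = 42;
```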