>
> If I do a large search, the first time it is about three times slower than
> any subsequent overlapping (same data) searches. I would like to always
> get the higher performance.
>
> How are the buffers that I specify to the postmaster used?
> Will increasing this number improve things?
>
> The issue that I am encountering is that no matter how much memory I
> have on a computer, the performance is not improving. I am willing to
> fund a project to implement a Postgres-specific, user-configurable
> cache.
>
> Any ideas?
> -Edwin S. Ramirez-
I think the fact that you are seeing an improvement at all already shows a good
level of caching.
What happens the first time is that it must read the data off the disc. After
that, the data comes from memory IF it is cached. Disc reads will always be
slower with current disc technology.
I would imagine (I'm not an expert, but this is what I have observed) that if
you drastically increase the number of shared memory buffers, and then when you
start up your front-end you simply do a select * from the tables, it may even
keep them all in memory from the start. Something like the sketch below.
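
Roughly what I have in mind (the buffer count, database name, and table names
here are just placeholders, so adjust them to your own setup and available RAM):

    # Start the postmaster with a much larger pool of shared buffers.
    # -B sets the number of shared buffer blocks (8k each by default);
    # 2048 is only an example figure.
    postmaster -B 2048 -D /usr/local/pgsql/data &

    # Once it is up, warm the cache by reading the big tables once.
    # "mydb" and the table names are placeholders.
    psql mydb -c "SELECT * FROM big_table_1;" > /dev/null
    psql mydb -c "SELECT * FROM big_table_2;" > /dev/null

After that warm-up pass, later searches over the same data should hit the
buffers rather than the disc, which is the behaviour you are already seeing
on the second run.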
M Simms