On Thu, 2007-08-02 at 12:50 -0400, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
> > Tom,
> >> I don't actually think that what Jignesh is testing is a particularly
> >> realistic scenario, and so I object to making performance decisions on
> >> the strength of that one measurement.
>
> > What do you mean by "not realistic"? What would be a realistic scenario?
>
> The difference between maxing out at 1200 sessions and 1300 sessions
> doesn't excite me a lot --- in most environments you'd be well advised
> to use many fewer backends and a connection pooler. But in any case
> the main point is that this is *one* benchmark on *one* platform. Does
> anyone outside Sun even know what the benchmark is, beyond the fact that
> it's running a whole lot of sessions?
I like Greg Smith's idea of adding a parameter, at least for testing
purposes. transaction_buffers, perhaps?
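For illustration only, such a test-only setting might look like this in
postgresql.conf (transaction_buffers is just the name floated above, not
an existing parameter; today the count is the compile-time constant
NUM_CLOG_BUFFERS):

```
# hypothetical setting, not currently in PostgreSQL
transaction_buffers = 32    # number of CLOG buffer pages to cache
```

That would let people benchmark different sizes on their own workloads
without recompiling.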
> Also, you should not imagine that boosting NUM_CLOG_BUFFERS has zero
> cost. The linear searches used in slru.c start to look pretty
> questionable if we want more than a couple dozen buffers. I find it
> entirely likely that simply changing the constant would be a net loss
> on many workloads.
Doesn't that just raise the question: why do we have linear searches in
slru.c at all? The majority of accesses will hit the first 1-3 pages, so
keeping an array ordered by recency of use would make the common case
much faster anyhow. We could still scan the whole cache before resorting
to an I/O. That way we would be able to vary the size of the caches.
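To make the idea concrete, here is a minimal sketch (not PostgreSQL's
actual slru.c code; names like PageCache and NSLOTS are invented for the
example) of a page cache whose slots are kept in most-recently-used
order. Hot pages are found in the first one or two probes regardless of
how large the cache is, while a full scan still happens only before
falling back to an I/O:

```c
#include <assert.h>
#include <string.h>

#define NSLOTS 8
#define NO_PAGE (-1)

typedef struct
{
    int page_no[NSLOTS];    /* page held in each physical slot */
    int mru[NSLOTS];        /* slot indexes, most recently used first */
    int nused;              /* number of occupied slots */
} PageCache;

static void
cache_init(PageCache *c)
{
    for (int i = 0; i < NSLOTS; i++)
        c->page_no[i] = NO_PAGE;
    c->nused = 0;
}

/* Move mru[pos] to the front of the MRU list, shifting the rest down. */
static void
promote(PageCache *c, int pos)
{
    int slot = c->mru[pos];

    memmove(&c->mru[1], &c->mru[0], pos * sizeof(int));
    c->mru[0] = slot;
}

/* Return the slot holding page (promoting it to the front), or -1.
 * Hot pages sit near the head of mru[], so this usually stops early. */
static int
cache_lookup(PageCache *c, int page)
{
    for (int pos = 0; pos < c->nused; pos++)
    {
        if (c->page_no[c->mru[pos]] == page)
        {
            promote(c, pos);
            return c->mru[0];
        }
    }
    return -1;
}

/* Insert a page, evicting the least recently used slot when full. */
static int
cache_insert(PageCache *c, int page)
{
    int slot;

    if (c->nused < NSLOTS)
    {
        slot = c->nused;
        c->mru[c->nused++] = slot;
    }
    else
        slot = c->mru[c->nused - 1];    /* victim: LRU slot */

    c->page_no[slot] = page;
    promote(c, c->nused - 1);
    return slot;
}
```

With this layout the eviction victim is simply the tail of mru[], so
growing NSLOTS no longer makes the common-case lookup any slower.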
-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com