On Mon, Dec 8, 2008 at 5:52 PM, Greg Smith <gsmith@gregsmith.com> wrote:
> On Mon, 8 Dec 2008, Scott Marlowe wrote:
>
>> Well, I have 32GB of RAM and wanted to test against a database at
>> least twice as big as memory. I'm not sure why you'd consider the
>> results uninteresting, though; a very large transactional store at
>> twice the size of memory or more is exactly where performance starts
>> getting interesting.
>
> If you refer back to the picture associated with the link Josh suggested:
>
> http://www.westnet.com/~gsmith/gregsmith/content/postgresql/scaling.png
>
> You'll see that pgbench results drop sharply once the accounts table
> outgrows the memory available to cache it. That curve isn't unique to
> my tests; multiple other testers have independently traced out the
> same basic shape on different hardware. It just stretches to the
> right as the amount of RAM grows.
>
> All I was trying to suggest is that even with 32GB of RAM, you may
> already be onto the flatter right-hand section of that curve with a
> 40GB database. That earlier test ran on a little system with 1GB of
> RAM + 256MB of disk cache, and it was already toast at 750MB of
> database. Once a database is big enough to reach that point, results
> degrade to something bound by database seeks/second rather than
> anything else, and further size increases don't give you much more
> information. This is why I'm not sure the current limit really
> matters with 32GB of RAM, but it sure will be important if you want
> any sort of useful pgbench results at 64GB.
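
For anyone wanting to retrace that curve at today's RAM sizes, here's
a rough sketch (untested, typed from memory; it assumes a database
named "bench", pgbench's usual ~15MB of accounts table per unit of
scale, so -s 2000 is around 30GB, and the client/transaction counts
are just placeholders):

    # step the scale factor across the RAM boundary and record the
    # TPS figure pgbench prints at each database size
    createdb bench
    for s in 500 1000 2000 3000 4000; do
        pgbench -i -s $s bench        # -i drops and reloads the tables
        echo "scale $s:" >> results.txt
        pgbench -c 8 -t 10000 bench >> results.txt
    done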

I wonder if shared_buffers has any effect on how far you can go before
you hit the 'tipping point'.
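
One crude way to probe that would be to hold the database size fixed
and step shared_buffers between runs, restarting the server each time.
A hypothetical sketch, assuming PGDATA points at the cluster and the
same "bench" database from above:

    # rerun the identical test under several shared_buffers settings;
    # each restart picks up the new value via a command-line override
    for sb in 256MB 1GB 4GB 8GB; do
        pg_ctl restart -w -D "$PGDATA" -o "-c shared_buffers=$sb"
        echo "shared_buffers $sb:" >> sb_results.txt
        pgbench -c 8 -t 10000 bench >> sb_results.txt
    done
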
merlin