On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga <yebhavinga@gmail.com> wrote:
> Samuel Gendler wrote:
>>
>> When running pgbench on a db which fits easily into RAM (10% of RAM =
>> -s 380), I see transaction rates of a little less than 5K. When I go to
>> 90% of RAM (-s 3420), the transaction rate drops to around 1000 (at a
>> fairly wide range of concurrencies). At that point, I decided to
>> investigate the performance impact of write barriers.
>
> At 90% of RAM you're probably reading data as well, not only writing.
> Watching iostat -xk 1 or vmstat 1 during a test should confirm this. To find
> the maximum database size that fits comfortably in RAM you could try out
> http://github.com/gregs1104/pgbench-tools - my experience with it is that it
> takes less than 10 minutes to set up and run, and after some time you get
> rewarded with nice pictures! :-)

Yes. I've intentionally sized it at 90% precisely so that I am
reading as well as writing, which is closer to what an actual
production environment will look like.
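
For reference, a minimal sketch of this kind of run (only the -s 3420
scale comes from the numbers above, via the usual ~15 MB-per-scale-unit
rule of thumb; the database name, client count, and duration are
illustrative placeholders, not the values from the test):

    # -s 3420 builds roughly 51 GB of pgbench tables, i.e. ~90% of RAM here
    pgbench -i -s 3420 bench

    # log disk activity during the run to confirm reads as well as writes
    iostat -xk 1 > iostat.log & IOSTAT_PID=$!
    vmstat 1 > vmstat.log & VMSTAT_PID=$!

    pgbench -c 32 -T 300 bench

    kill $IOSTAT_PID $VMSTAT_PID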