Benchmarking a large server - Mailing list pgsql-performance

From Chris Hoover
Subject Benchmarking a large server
Date
Msg-id BANLkTikONPZ_7zQNfbyyN4MGBnn5PXHrAg@mail.gmail.com
Responses Re: Benchmarking a large server  (Merlin Moncure <mmoncure@gmail.com>)
Re: Benchmarking a large server  (Ben Chobot <bench@silentmedia.com>)
Re: Benchmarking a large server  (Shaun Thomas <sthomas@peak6.com>)
Re: Benchmarking a large server  (Greg Smith <greg@2ndQuadrant.com>)
Re: Benchmarking a large server  (Cédric Villemain <cedric.villemain.debian@gmail.com>)
Re: Benchmarking a large server  (Yeb Havinga <yebhavinga@gmail.com>)
Re: Benchmarking a large server  (Claudio Freire <klaussfreire@gmail.com>)
List pgsql-performance
I've got a fun problem.

My employer just purchased some new db servers that are very large.  The specs on them are:

4 Intel X7550 CPUs (32 physical cores, HT turned off)
1 TB RAM
1.3 TB Fusion IO (2 x 1.3 TB Fusion IO Duo cards in RAID 10)
3 TB SAS array (48 x 146 GB 15K spindles)

The issue we are running into is how to benchmark this server, and specifically how to get valid benchmarks for the Fusion IO card.  Normally, to eliminate the cache effect, you run iozone and other benchmark suites with a working set of 2x RAM.  However, we can't do that here, because 2x RAM (2 TB) is larger than the 1.3 TB card.
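
For reference, this is roughly the kind of run I mean (the mount point and record size below are only illustrative, not our actual setup); with 1 TB of RAM the test file would have to be around 2 TB, which obviously doesn't fit on the 1.3 TB card:

    # write/rewrite, read/re-read, and random read/write on a 2 TB file, 8 KB records
    iozone -e -r 8k -s 2048g -i 0 -i 1 -i 2 -f /mnt/fusionio/iozone.tmp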

So, does anyone have any suggestions or experience with benchmarking storage when the storage is smaller than 2x memory?

Thanks,

Chris
