On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote:
> Please note that the 10% was on a slower CPU. On a more recent CPU the
> difference was 47%, based on tests that ran for an hour.
I am not surprised at all that reading and writing almost twice as much
data from/to disk takes 47% longer. Once seek time is out of the picture,
I/O time scales roughly with the number of bytes transferred, so the
amount of data starts to play a bigger role.
> That's why I
> absolutely agree with Merlin Moncure that more testing in this
> department is welcome, preferably by others, since after all I could be
> on the payroll of OCZ :-)
:)
> I looked a bit into Bonnie++ but fail to see how I could run a test that
> somehow matches the PostgreSQL setup during the pgbench tests (a db that
> fits in memory,
Did it fit in shared_buffers, or in the system cache?
Once we are in high-tps territory, the time it takes to move pages between
userspace and the system cache starts to play a bigger role.
I first noticed this several years ago, when a COPY into a large table
with indexes took noticeably longer (2-3 times longer) when the indexes
were in the system cache than when they were in shared_buffers.
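
To pin down which cache the pgbench dataset actually fit in, a quick
check would be to compare the two sizes directly (the sizes in the
comments below are made up):

    SHOW shared_buffers;
    -- e.g. 256MB

    SELECT pg_size_pretty(pg_database_size(current_database()));
    -- e.g. 1200 MB

If the database is bigger than shared_buffers but still smaller than RAM,
the hot pages sit in the OS page cache and every access has to cross the
userspace/kernel boundary.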
> so the test is actually how fast the SSD can capture
> sequential WAL writes and fsyncs without barriers, mixed with an
> occasional checkpoint doing random write IO on another partition). Since
> the WAL writing is the same for both block_size setups, I decided to
> compare random writes to a 5GB file with Oracle's Orion tool:
Are you sure that you are not writing full WAL pages?
Do you have any stats on how much WAL is written in the 8kB and 4kB test
cases? And on other disk I/O during the tests?
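
If you can sample the WAL insert position around each run, the difference
gives the WAL volume directly (the positions in the comments are made up):

    -- before the pgbench run:
    SELECT pg_current_xlog_location();  -- e.g. 1/7D000000

    -- after the pgbench run:
    SELECT pg_current_xlog_location();  -- e.g. 1/9A000000

The byte distance between the two samples is the WAL generated by the run;
comparing that number between the 8kB and 4kB builds would show whether
full-page writes are inflating one of them.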
--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training