Thread: Hardware performance

From: CS DBA
Date:
Hi All;

We're talking with a HW / Data Center company about scaling up our DB servers... Below are some questions they asked related to moving to SSDs or maybe a Fusion IO drive.

Anyone have any thoughts, specifically on the queue depth question?

Thanks in advance...


So our question I think would be:

- What queue depth do you think your postgres server can saturate under your maximum load, given that you want 35,000 IOPS?

Basically my concern is this: assuming a ~5ms round trip, which is probably the top end for a "good" SSD array, you effectively get 200 IOPS per queue depth at the full 5ms (1000ms / 5ms = 200 IOPS per QD, so 3200 IOPS at QD=16).

That means to get 35k IOPS you'd need a collective QD of 175.
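The arithmetic above can be sketched in a few lines; this is just a back-of-the-envelope model (the function names and the assumption of one outstanding I/O per queue slot are mine, not anything from the vendor):

```python
# Little's-law-style estimate: each queue slot completes one I/O per
# round trip, so IOPS scales linearly with queue depth at fixed latency.

def iops_for_qd(queue_depth, latency_ms):
    """IOPS sustained at a given queue depth and per-I/O round-trip latency."""
    return queue_depth * (1000 / latency_ms)

def qd_for_iops(target_iops, latency_ms):
    """Collective queue depth needed to sustain target_iops at that latency."""
    return target_iops * latency_ms / 1000

print(iops_for_qd(16, 5))      # 3200.0 IOPS at QD=16 with a 5ms round trip
print(qd_for_iops(35_000, 5))  # 175.0 -- the collective QD for 35k IOPS
```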

An additional question would be, and Lou already touched on this:

- Are you comfortable with being presented multiple LUNs and striping them for your database filesystem? I'm not 100% sure about the per-LUN limits of the EMC offerings we've started looking into, but SolidFire, for example, tends to max out around 17k IOPS per volume; their stated maximum is 15k, and when the array is under full load it's more realistic to expect only about 8k. So to get 35k, we'd probably want to present multiple volumes to the OS and let it stripe them (no parity, since durability would be handled on the array).
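The volume-count side of that question is simple division; here's a sizing sketch using the per-volume figures mentioned above (treating the 8k under-load and 15k stated numbers as assumptions to plug in, not guarantees):

```python
import math

def volumes_needed(target_iops, per_volume_iops):
    """LUNs to stripe (RAID-0, no parity) to reach target_iops,
    assuming IOPS adds roughly linearly across striped volumes."""
    return math.ceil(target_iops / per_volume_iops)

print(volumes_needed(35_000, 8_000))   # 5 volumes at the conservative 8k/vol
print(volumes_needed(35_000, 15_000))  # 3 volumes at the stated 15k/vol max
```

Sizing to the conservative under-load number rather than the stated maximum is the safer choice here, since the array will presumably be busy exactly when we need the 35k.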

A related question would be:

- As you migrate away from RHCS, how do you feel about utilizing local SSDs to meet your performance targets? If your queue-depth profile doesn't fit well with an array, the lower latency of a local SSD might make it easier to sustain 35k IOPS with local RAID SSDs.
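To make the latency point concrete: the required queue depth shrinks in direct proportion to round-trip latency. The sub-millisecond figures below are illustrative assumptions for local SSDs, not measurements:

```python
def qd_needed(target_iops, latency_ms):
    """Collective queue depth required for target_iops at a given latency."""
    return target_iops * latency_ms / 1000

# Same 35k IOPS target at the array's ~5ms vs assumed local-SSD latencies.
for latency in (5.0, 1.0, 0.5):
    print(f"{latency}ms round trip -> QD {qd_needed(35_000, latency):.1f}")
# At 5ms you need QD 175; at an assumed 0.5ms, only QD 17.5.
```

In other words, if the workload can't realistically keep 175 I/Os in flight, a lower-latency local device may be the only way to hit the target.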