Re: Opinions on SSDs - Mailing list pgsql-admin

From: Scott Whitney
Subject: Re: Opinions on SSDs
Msg-id: 14817358.266767.1376323269109.JavaMail.root@mail.int.journyx.com
In response to: Re: Opinions on SSDs (Craig James <cjames@emolecules.com>)
Responses: Re: Opinions on SSDs
List: pgsql-admin
When you say "16 10K drives," do you mean:
a) RAID 0 with 16 drives?
b) RAID 1 with 8+8 drives?
c) RAID 5 with 12 drives?
d) RAID 1 with 7+7 drives and 2 hotspares?
We moved from a 14 FC drive (15k RPM) array (6+6 with 2 hotspares) to a 6 SSD array (2+2 with 2 hotspares) because our iops would max out regularly on the spinning drives. The SSD solution I put in has shown significant speed improvements, to say the very least.
The short answer is that unless you're going with option a (which has no redundancy), you're going to have some I/O wait at 5k tps.
Now, there is a LOT to understand about drive iops. You could start here, if you would like to read a bit about it:
Basically, just assume that you're getting 130 iops per drive. Well, 16 drives in a RAID 0 is going to max you out at 2,100ish iops, which is low for your stated peak usage (but probably within range for your average usage). However, once you start looking at 8+8 or 7+7 with hot spares, you're cutting that in half, and you're going to see I/O wait, period. Of course, if you were to use a non-recommended RAID 5, you'd be taking an even bigger hit on writes.
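To make that arithmetic concrete, here's a back-of-the-envelope sketch. The numbers are my assumptions, not measurements: 130 iops per spindle, the textbook write penalties of 2 for RAID 1/10 and 4 for RAID 5, and roughly the 40% write mix you described.

# Rough effective-IOPS estimator for spinning-disk arrays.
# Assumed numbers (mine): 130 iops per 10K drive, write penalty of
# 2 for RAID 1/10 and 4 for RAID 5.

IOPS_PER_DRIVE = 130
WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4}

def effective_iops(n_drives, level, write_fraction):
    """Max sustainable logical IOPS for a given read/write mix."""
    raw = n_drives * IOPS_PER_DRIVE
    # Each logical write costs WRITE_PENALTY[level] physical I/Os.
    return raw / ((1 - write_fraction) + write_fraction * WRITE_PENALTY[level])

for level, n in [("raid0", 16), ("raid10", 16), ("raid10", 14), ("raid5", 12)]:
    print(f"{level:7s} {n:2d} drives: ~{effective_iops(n, level, 0.4):.0f} iops")

Run against the drive counts above, that puts RAID 0 right around 2,080 iops and the mirrored options down in the 1,300-1,500 range, which is why I say you'll see I/O wait at 5k tps.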
Now, I do _not_ want to open a can of worms here and start a war about SSD versus spindles and "perceived performance vs real," or any such thing. I attended Greg Smith's (excellent) talk in Austin at PG Day wrt "Seeking Postgres," and I had also personally amassed quite a bit of data on such comparisons myself. Unfortunately, some of that talk did not compare apples to apples (3-disk RAID 0 versus a single Intel 520 SSD), and I quite simply find that the benchmarks do not really reflect real-world usage.
Source: months and months of real-world stress-testing specifically 10k drives (SAS) against SSD (SATA) drives in the same configuration on the same machine using the same tests plus over a year (total) of production deployment among 3 servers thusly configured.
So far as personal experience with the Intel drives goes, I don't have any. I'm using Crucial, and I'm pretty happy with those. The _problem_ with SSDs is that there is no "put it in the freezer" magic bullet. When they fail, they fail, and they're gone. So, IMO (and there are MANY MANY valid opinions on this), use slightly cheaper drives and proactively replace them every 9 to 12 months.
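On the proactive-replacement point: the SMART wear counters are worth watching so you replace on evidence rather than just the calendar. A minimal sketch, assuming smartmontools is installed; the attribute names vary by vendor (e.g. Media_Wearout_Indicator on Intel, Wear_Leveling_Count on Samsung), and /dev/sda is a placeholder:

# Sketch: list wear-related SMART attributes for a drive.
import subprocess

def wear_attributes(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if "wear" in line.lower()]

for line in wear_attributes("/dev/sda"):
    print(line)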
Craig James <cjames@emolecules.com> wrote:

On Mon, Aug 12, 2013 at 8:28 AM, David F. Skoll <dfs@roaringpenguin.com> wrote:
3) Our current workload peaks at about 5000 transactions per second;
you can assume about one-third to one-half of those are writes. Do
you think we can get away with 16 10Krpm SATA drives instead of the
SSDs?
pgbench peaks out at 5K-7K transactions per second on my server, which uses just ten 7K RPM SATA drives:
WAL: RAID1 (2 disks)
Data: RAID10 (8 disks)
3Ware RAID controller with BBU
2x4 core Intel CPUs
12 GB memory
I don't know how pgbench compares to your workload, but I suspect 16 10K SATA drives would be pretty fast if you combine them with a BBU RAID controller.
On the other hand, I swore this would be the last server I buy with spinning storage.
Craig
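As a reference point for the pgbench numbers quoted above, a typical run looks like the sketch below. The scale factor, client count, and duration are placeholders for illustration, not the settings Craig used:

# Sketch: initialize and run pgbench against a test database.
# DB name and all tuning parameters are hypothetical.
import subprocess

DB = "pgbench_test"

# Initialize with scale factor 100 (~1.5 GB of tables).
subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)

# 32 clients, 4 worker threads, 5-minute run; reports tps at the end.
subprocess.run(["pgbench", "-c", "32", "-j", "4", "-T", "300", DB], check=True)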