For background, please read the thread "Fusion-io ioDrive", archived at
http://archives.postgresql.org/pgsql-performance/2008-07/msg00010.php
To recap, I tested an ioDrive versus a 6-disk RAID with pgbench on an
ordinary PC. I now also have a 32GB Samsung SATA SSD, and I have tested
it in the same machine with the same software and configuration. I
tested it connected to the NVIDIA CK804 SATA controller on the
motherboard, and as a pass-through disk on the Areca RAID controller,
with write-back caching enabled.
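For reference, the runs were of the usual pgbench form; a sketch follows. The scale factor, transaction counts, and database name here are assumptions, not the exact parameters used above.

```shell
# Sketch of the benchmark invocations (parameters are illustrative).

# Initialize a pgbench database at an assumed scale factor of 100.
pgbench -i -s 100 bench

# Read/write (TPC-B-like) run: 8 clients, per-transaction latency logging.
pgbench -c 8 -t 10000 -l bench

# Read-only run: same, with the SELECT-only script.
pgbench -c 8 -t 10000 -S -l bench
```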
                                Service Time Percentile, millis
            R/W TPS   R-O TPS     50th    80th    90th    95th
RAID            182       673       18      32      42      64
Fusion          971      4792        8       9      10      11
SSD+NV          442      4399       12      18      36      43
SSD+Areca       252      5937       12      15      17      21
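The percentile columns can be derived from the per-transaction log that "pgbench -l" writes. A minimal sketch, assuming the log's third whitespace-separated field is the transaction latency in microseconds (the field layout varies by pgbench version, so treat that as an assumption):

```python
# Sketch: compute service-time percentiles from a pgbench latency log.
# Assumes each log line's third field is the latency in microseconds.

def percentile(sorted_vals, pct):
    """Value at the given percentile of a sorted list (nearest-rank)."""
    idx = min(len(sorted_vals) - 1, int(len(sorted_vals) * pct / 100.0))
    return sorted_vals[idx]

def service_time_percentiles(lines, pcts=(50, 80, 90, 95)):
    # Convert microsecond latencies to milliseconds and sort them.
    lat_ms = sorted(float(line.split()[2]) / 1000.0 for line in lines)
    return {p: percentile(lat_ms, p) for p in pcts}

if __name__ == "__main__":
    # Tiny inline sample standing in for real log lines.
    sample = ["0 0 8000 0 0 0", "0 1 12000 0 0 0", "0 2 9000 0 0 0"]
    print(service_time_percentiles(sample))
```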
As you can see, there are tradeoffs. The motherboard's ports are
substantially faster on the TPC-B type of workload. This little, cheap
SSD achieves almost half the performance of the ioDrive (i.e., similar
performance to a 50-disk SAS array). The RAID controller does a better
job on the read-only workload, surpassing the ioDrive by 20%.
Strangely, the RAID controller behaves badly on the TPC-B workload: it
is faster than disk, but not by much, and it's much slower than the
other flash configurations. The read/write result did not vary as the
number of clients was changed from 1 to 8. I suspect some kind of
problem with Areca's kernel driver or firmware.
On the bright side, the Samsung+Areca configuration offers excellent
service time distribution, comparable to that achieved by the ioDrive.
Using the motherboard's SATA ports gave service times comparable to the
disk RAID.
The performance is respectable for a $400 device. You get about half
the TPS and half the capacity of the ioDrive, but for one fifth the
price and in the much more convenient SATA form factor.
Your faithful investigator,
jwb