Re: Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD - Mailing list pgsql-performance

From
Subject Re: Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD
Date
Msg-id 20100811205356.AHB77050@ms14.lnh.mail.rcn.net
Responses Re: Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD  (Greg Smith <greg@2ndquadrant.com>)
Re: Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD  (Arjen van der Meijden <acmmailing@tweakers.net>)
List pgsql-performance
A number of amusing aspects to this discussion.

- I've carried out similar tests using the Intel X-25M with both PG and DB2 (both on Linux).  While it is a simple matter to build parallel databases on DB2, on HDD and SSD, with buffers and tablespaces and logging and so on set to recreate as many scenarios as one wishes using a single engine instance, it is not so for PG.  While PG is the "best" open-source database, from a tuning and admin point of view there's rather a long way to go.  No one should think that retail SSD should be used to support an enterprise database.  People have been lulled into thinking otherwise as a result of the blurring of the two use cases in the HDD world, where the difference is generally just QA.

- All flash SSDs munge the byte stream, some (SandForce-controlled drives in particular) more than others.  Industrial-strength flash SSDs can have 64 internal channels written in parallel; they don't run on commodity controllers.  Treating an SSD as just a faster HDD is a trip down the road to perdition.  Industrial-strength (DRAM) SSDs have been used by serious database folks for a couple of decades, but not by the storefront semi-professionals who pervade the web start-up world.

- The value of SSD in the database world is not as A Faster HDD(tm).  It never was, despite the naïve who assert otherwise.  The value of SSD is to enable BCNF datastores.  Period.  If you're not going to do that, don't bother.  Silicon storage will never reach equivalent volumetric density, ever.  SSD will never be useful in the byte-bloat world of XML and other flat-file datastores (resident in databases or not).  Industrial-strength SSD will always be more expensive per GB, and likely by a lot.  (Re)factoring to high normalization strips out an order of magnitude of byte bloat, increases native data integrity by as much, removes much of the redundant code, and puts the ACID where it belongs.  All good things, but not effortless.
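To make the byte-bloat point concrete, here is a minimal sketch (hypothetical schema, SQLite in-memory for brevity rather than PG) of how normalizing a repeated wide text column into a lookup table shrinks the stored payload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: the full department name is repeated on every row.
cur.execute("CREATE TABLE emp_flat (name TEXT, dept TEXT)")
dept_name = "Department of Redundancy Department"
cur.executemany("INSERT INTO emp_flat VALUES (?, ?)",
                [(f"emp{i}", dept_name) for i in range(1000)])

# Normalized: one row per distinct department, small integer key per employee.
cur.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
cur.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER REFERENCES dept(id))")
cur.execute("INSERT INTO dept (name) VALUES (?)", (dept_name,))
cur.executemany("INSERT INTO emp VALUES (?, 1)",
                [(f"emp{i}",) for i in range(1000)])

# Rough payload comparison: text lengths plus 8 bytes per integer key
# (ignores page and index overhead, which varies by engine).
flat_bytes = sum(len(n) + len(d) for n, d in cur.execute("SELECT * FROM emp_flat"))
norm_bytes = sum(len(n) + 8 for (n,) in cur.execute("SELECT name FROM emp"))
norm_bytes += sum(len(n) + 8 for (n,) in cur.execute("SELECT name FROM dept"))
print(flat_bytes, norm_bytes)  # flat payload is roughly 3x the normalized one here
```

The ratio grows with the width of the repeated value; the "order of magnitude" above refers to real schemas with many such columns, not this toy case.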

You're arguing about the wrong problem.  Sufficiently bulletproof flash SSDs exist, and have for years, but their names are not well known (no one on this thread has named any); neither the Intel parts nor any of their retail cousins have any place in the mix except on development machines.  Real SSDs have MTBFs measured in decades; OEMs have qualified such parts, but you won't find them on the shelf at Best Buy.  You need to concentrate on understanding what can be done with such drives that can't be done with vanilla HDDs costing 1/50 the dollars.  Just being faster won't be the answer.  Removing the difference between sequential file processing and true random access is what makes SSD worth the bother; it makes true relational datastores second nature rather than rocket science.
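The sequential-versus-random gap the argument leans on is easy to measure yourself.  A small timing sketch (names and sizes are arbitrary choices, not from the post; absolute numbers depend entirely on the device and the page cache):

```python
import os
import random
import tempfile
import time

BLOCK = 4096    # read unit, matching a typical database page size
BLOCKS = 2048   # 8 MiB scratch file

# Create a scratch file of random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def read_blocks(order):
    """Read every block of the file in the given order; return bytes read."""
    total = 0
    with open(path, "rb") as fh:
        for i in order:
            fh.seek(i * BLOCK)
            total += len(fh.read(BLOCK))
    return total

seq = list(range(BLOCKS))
rnd = seq[:]
random.shuffle(rnd)

t0 = time.perf_counter()
n_seq = read_blocks(seq)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
n_rnd = read_blocks(rnd)
t_rnd = time.perf_counter() - t0

print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")
os.unlink(path)
```

On a cold spinning disk the random pass is dramatically slower; on an SSD (or once the file is cached) the two converge, which is the property being claimed as the real value of solid-state storage.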

Robert
