Re: SCSI vs SATA - Mailing list pgsql-performance

From Richard Troy
Subject Re: SCSI vs SATA
Date
Msg-id Pine.LNX.4.33.0704052057431.10617-100000@denzel.in
In response to Re: SCSI vs SATA  (david@lang.hm)
List pgsql-performance
On Thu, 5 Apr 2007 david@lang.hm wrote:
> On Thu, 5 Apr 2007, Ron wrote:
> > At 10:07 PM 4/5/2007, david@lang.hm wrote:
> >> On Thu, 5 Apr 2007, Scott Marlowe wrote:
> >>
> >> > Server class drives are designed with a longer lifespan in mind.
> >> >
> >> > Server class hard drives are rated at higher temperatures than desktop
> >> > drives.
> >>
> >> these two I question.
> >>
> >> David Lang
> > Both statements are the literal truth.  Not that I would suggest abusing your
> > server class HDs just because they are designed to live longer and in more
> > demanding environments.
> >
> > Overheating, nasty electrical phenomenon, and abusive physical shocks will
> > trash a server class HD almost as fast as it will a consumer grade one.
> >
> > The big difference between the two is that a server class HD can sit in a
> > rack with literally 100's of its brothers around it, cranking away on server
> > class workloads 24x7 in a constant vibration environment (fans, other HDs,
> > NOC cooling systems) and be quite happy while a consumer HD will suffer
> >> greatly shortened life and die a horrible death in such an environment and
> > under such use.
>
> Ron,
>    I know that the drive manufacturers have been claiming this, but I'll
> say that my experience doesn't show a difference and neither do the google
> and CMU studies (and they were all in large datacenters, some HPC labs,
> some commercial companies).
>
> again the studies showed _no_ noticeable difference between the
> 'enterprise' SCSI drives and the 'consumer' SATA drives.
>
> David Lang

Hi David, Ron,

I was just about to chime in on Ron's post when you did, David. My
experience supports David's viewpoint. I'm a scientist, and with that hat
on I must acknowledge that it was never my goal to do a study on the
subject, so my data is anecdotal in character. However, I work with some
pretty large shops, such as UC's SDSC, NOAA's NCDC (probably the world's
largest non-classified data center), and Langley, among many others, so my
perceptions include insights from a lot of pretty sharp folks.

...When you provide your disk drives with clean power, cool, dry air, and
protection from serious shocks, it seems to be everyone's perception that
all modern drives - say, of the last ten years or a bit more - are
exceptionally reliable, and it's not at all rare to get seven years or more
out of a drive. What seems _most_ detrimental is power cycles, regardless
of which type of drive you have. This isn't to say the two types, "server
class" and "PC", are equal. PC drives are by comparison rather slow, and
that's their biggest downside, but they are also typically rather large.
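
As an aside: if you want to see how often a given drive has actually been
power-cycled, the SMART counters are the place to look. Here's a rough
Python sketch around smartmontools' smartctl - purely illustrative; it
assumes smartctl is installed, that you run it with enough privilege, and
that the drive reports the usual ATA attribute names (raw formats vary a
bit by vendor):

    # Illustrative only: read a few wear-related SMART counters via smartctl.
    # Assumes smartmontools is installed; adjust the device path to suit
    # (typically needs root).
    import subprocess

    def smart_counters(device="/dev/sda"):
        """Return raw values for a few power/wear-related SMART attributes."""
        out = subprocess.run(
            ["smartctl", "-A", device],   # -A prints the SMART attribute table
            capture_output=True, text=True, check=False,
        ).stdout
        wanted = {"Power_Cycle_Count", "Power_On_Hours", "Start_Stop_Count"}
        counters = {}
        for line in out.splitlines():
            fields = line.split()
            # ATA attribute rows have ten columns; the name is the 2nd field,
            # the raw value the last.
            if len(fields) >= 10 and fields[1] in wanted:
                counters[fields[1]] = fields[9]
        return counters

    if __name__ == "__main__":
        print(smart_counters())

Run it against each device (/dev/sdb and so on) and compare drives that
live in the rack 24x7 against ones that get shut down nightly.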

Again, anecdotal evidence says that PC disks are typically cycled far more
often and so they also fail more often. Put them in the same environment as
a server-class disk and they'll also live a long time. Science Tools set up
our data center ten years ago this May with something more than a terabyte
- large at the time (and it's several times that now) - and we also adopted
a good handful of older equipment at that time, some of it twelve and
fifteen years old by now. We didn't have a single disk failure in our first
seven years, but then, we also never turn anything off unless it's being
serviced. Our disk drives are decidedly mixed - SCSI, all forms of ATA,
some SATA in the last couple of years, and plenty of both server and PC
class. Yes, the older ones are dying now - we lost one on a server just
now (so recently we haven't yet replaced it) - but the death rate is still
remarkably low.

I should point out that we've had far more controller failures than drive
failures, and these have come all through these ten years at seemingly
random times. Unfortunately, I can't really comment on which brands are
better or worse, but I do remember a 50% failure rate on some new SATA
cards a few years back. Perhaps it's also worth a keystroke or two to
mention that we rotate new drives in on an annual basis, and the older ones
get moved to less critical, less stressful duties. Generally, our oldest
drives are now serving our gateway / firewall systems (of which we have
several), our newest are providing primary daily workhorse service, and the
middle-aged ones are serving hot-backup duty. Perhaps you could argue that
this putting out to pasture isn't comparable to heavy 24/7/365 demands, but
then, that wouldn't be appropriate for a fifteen-year-old drive, now would
it? -smile-

Good luck with your drives,
Richard

--
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
rtroy@ScienceTools.com, http://ScienceTools.com/

