Re: 3ware vs. MegaRAID

From: Jesper Krogh
Subject: Re: 3ware vs. MegaRAID
Date: ,
In response to: Re: 3ware vs. MegaRAID  (Greg Smith)
Responses: Re: 3ware vs. MegaRAID  (Scott Marlowe)
List: pgsql-performance


On 2010-04-09 20:22, Greg Smith wrote:
> Jesper Krogh wrote:
>> I've spent quite some hours googling today. Am I totally wrong if the:
>> HP MSA-20/30/70 and Sun Oracle J4200's:
>> are of the same type just from "major" vendors.
> Yes, those are the same type of implementation.  Every vendor has
> their own preferred way to handle port expansion, and most are
> somewhat scared about discussing the whole thing now because EMC has a
> ridiculous patent on the whole idea[1].  They all work the same from
> the user perspective, albeit sometimes with their own particular daisy
> chaining rules.
>> That would enable me to reuse the existing server and moving to
>> something
>> like Intel's X25-M 160GB disks with just a higher amount (25) in a
>> MSA-70.
> I guess, but note that several of us here consider Intel's SSDs
> unsuitable for critical database use.  There are some rare but not
> impossible to encounter problems with its write caching implementation
> that leave you exposed to database corruption if there's a nasty power
> interruption.  Can't get rid of the problem without destroying both
> performance and longevity of the drive[2][3].  If you're going to
> deploy something using those drives, please make sure you're using an
> aggressive real-time backup scheme such as log shipping in order to
> minimize your chance of catastrophic data loss.
> [1]
> [2]
> [3]

There are some things in my scenario that cannot be said to be
general to all database situations.

Having to go a week back (restoring from backup) is "not really a problem",
so as long as I have a reliable backup and the corruption doesn't occur
except after unexpected power-offs, then I think I can handle it.
Another thing is that the overall usage is heavily dominated by random reads,
which is the performance I don't ruin by disabling write caching.

And by adding a 512/1024MB BBWC on the controller, I bet I can "regain"
enough write performance to easily make the system function. Currently
the average writeout is well under 10MB/s, but the reading processes
all spend most of their time in iowait.
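As a rough sanity check (purely illustrative, using the figures above), the
cache size divided by the write rate gives how long a battery-backed write
cache could absorb a sustained burst before the drives must keep pace:

```python
# Back-of-envelope: how long a BBWC can absorb writes on its own.
# Numbers are the ones discussed in this thread, not measurements.
cache_mb = 512            # smaller of the 512/1024MB BBWC options
avg_write_mb_s = 10       # stated upper bound on average writeout
burst_seconds = cache_mb / avg_write_mb_s
print(burst_seconds)      # prints 51.2
```

In practice the drives drain the cache concurrently, so this is a lower
bound on burst tolerance.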

Since my application is dominated by random reads, I "think" that
I should still see a huge gain over regular SAS drives on that side
of the equation, but most likely not on the write side. But all of this is
so far only speculation, since the vendors don't seem eager to lend
out hardware these days, so everything is only on paper for now.

There seems to be consensus that on the write side, SAS disks can
fairly easily outperform SSDs. I have not seen anything showing that
SSDs don't still have huge benefits on the read side.

It would be nice if there were an easy way to test and confirm that
a drive actually is robust against power loss..

.. just having a disk array with a built-in battery for the SSDs would
solve the problem.
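One approach often suggested for this kind of durability testing is a
diskchecker.pl-style run: a writer appends checksummed, sequence-numbered
records with an fsync() after each one, you cut power mid-run, and on reboot
a verifier checks that every acknowledged record is intact. The sketch below
is my own illustration of the idea (record layout and function names are
made up, not from any real tool):

```python
# Sketch of an fsync-durability test: if the drive acknowledged an
# fsync() but the record is missing or corrupt after power loss,
# the write cache lied about durability.
import os
import struct
import zlib

# Each record: 8-byte sequence number + 4-byte CRC32 of that number.
RECORD = struct.Struct("<QI")

def write_records(path, count):
    """Append `count` records, fsync()ing after every single write."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for seq in range(count):
            payload = struct.pack("<Q", seq)
            os.write(fd, RECORD.pack(seq, zlib.crc32(payload)))
            os.fsync(fd)  # drive must make this durable before returning
    finally:
        os.close(fd)

def verify_records(path):
    """Return how many consecutive records are intact from the start.

    After a power cut, any gap or bad CRC before the last record the
    writer saw acknowledged indicates unsafe write caching.
    """
    ok = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(RECORD.size)
            if len(chunk) < RECORD.size:
                break
            seq, crc = RECORD.unpack(chunk)
            if seq != ok or crc != zlib.crc32(struct.pack("<Q", seq)):
                break
            ok += 1
    return ok
```

The interesting part is of course the physical power cut, which no script
can automate; the code only supplies the before/after bookkeeping.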

