Thread: Re: [pgsql-performance] Daily digest v1.3606 (10 messages)

Re: [pgsql-performance] Daily digest v1.3606 (10 messages)

From:
John Lister
Date:
>We've reached the point where we would like to try SSDs. We've got a
>central DB currently 414 GB in size and increasing. The working set no
>longer fits into our 96GB RAM server.
>So, the main question is what to take. Here is what we've got:
>1) Intel 320. Good, but slower than current-generation SandForce drives
>2) Intel 330. Looks like a cheap 520 without a capacitor
>3) Intel 520. Faster than the 320, but no capacitor.
>4) OCZ Vertex 3 Pro - not available, even on the OCZ site
>5) OCZ Deneva - can't find it in my country :)
>We are using an Areca controller with BBU. So for me the question is: can
>the 520 series be set up to handle fsyncs correctly? We've got the Areca
>to handle buffering.
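On the fsync question, one way to sanity-check a drive before trusting it is to measure flush latency directly. A minimal sketch, assuming PostgreSQL's pg_test_fsync utility is installed and the SSD is mounted at /mnt/ssd (both the mount point and the /dev/sdb device name below are placeholders for your setup):

```shell
# Measure per-fsync latency on the SSD with each sync method. A drive
# that ignores flushes will report implausibly high ops/sec (tens of
# thousands rather than a few thousand).
pg_test_fsync -f /mnt/ssd/fsync_probe.out

# Cross-check by disabling the drive's volatile write cache and
# re-running; if the numbers barely change, the flushes were not
# actually being honoured in the first place.
hdparm -W0 /dev/sdb
pg_test_fsync -f /mnt/ssd/fsync_probe.out
```

With the Areca's BBU cache in front, you would still want the SSD itself to honour the controller's flushes, so this check applies to the raw device too.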

I was thinking the same thing, setting up a new server with SSDs instead of
the HDDs I currently have, and was wondering what the thoughts are on using
RAID (it would probably be a Dell R710 card - any comments on these, as they
are new?). I was planning on using Intel 320s, as the 710s are a little too
pricey at the moment and we aren't massively heavy on writes (but can revisit
this in the next 6 months/year if required). I was thinking RAID 10, as I've
done with the HDDs, but I'm not sure that's the best choice for SSDs: given
that wear level and firmware are likely to be the same, I'd expect concurrent
failure on a stripe. Therefore I'd imagine using an alternate
manufacturer/drive for the mirrors is a better bet?

What are people's thoughts on using a non-enterprise drive for this? The choice of enterprise drives is limited :(

I was thinking that if we have a sudden power failure, we could mark the consumer drive as bad and rebuild it from the other one -
or is this highly risky?

Does anyone do anything different?

Thanks

John


Re: [pgsql-performance] Daily digest v1.3606 (10 messages)

From:
Merlin Moncure
Date:
On Tue, May 15, 2012 at 4:09 PM, John Lister <john.lister@kickstone.com> wrote:
>> We've reached the point where we would like to try SSDs. We've got a
>> central DB currently 414 GB in size and increasing. The working set no
>> longer fits into our 96GB RAM server.
>> So, the main question is what to take. Here is what we've got:
>> 1) Intel 320. Good, but slower than current-generation SandForce drives
>> 2) Intel 330. Looks like a cheap 520 without a capacitor
>> 3) Intel 520. Faster than the 320, but no capacitor.
>> 4) OCZ Vertex 3 Pro - not available, even on the OCZ site
>> 5) OCZ Deneva - can't find it in my country :)
>> We are using an Areca controller with BBU. So for me the question is: can
>> the 520 series be set up to handle fsyncs correctly? We've got the Areca
>> to handle buffering.
>
>
> I was thinking the same thing, setting up a new server with SSDs instead
> of the HDDs I currently have, and was wondering what the thoughts are on
> using RAID (it would probably be a Dell R710 card - any comments on these,
> as they are new?). I was planning on using Intel 320s, as the 710s are a
> little too pricey at the moment and we aren't massively heavy on writes
> (but can revisit this in the next 6 months/year if required). I was
> thinking RAID 10, as I've done with the HDDs, but I'm not sure that's the
> best choice for SSDs: given that wear level and firmware are likely to be
> the same, I'd expect concurrent failure on a stripe. Therefore I'd imagine
> using an alternate manufacturer/drive for the mirrors is a better bet?
>
> What are people's thoughts on using a non-enterprise drive for this? The
> choice of enterprise drives is limited :(
>
> I was thinking that if we have a sudden power failure, we could mark the
> consumer drive as bad and rebuild it from the other one - or is this
> highly risky?

I think the multiple-vendor strategy is dicey, as the only player in
the game that seems to have a reasonable enterprise product offering
is Intel.  The devices should work within spec and should be phased
out as they approach EOL.  SMART gives good info regarding SSD wear
and should be checked at regular intervals.
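Checking SSD wear via SMART can be scripted with smartctl from smartmontools. A minimal sketch, assuming an Intel drive, whose normalized Media_Wearout_Indicator attribute starts at 100 and drifts toward 0 as the NAND wears (other vendors name their wear attribute differently); the sample line below stands in for real `smartctl -A /dev/sda` output:

```shell
# Illustrative smartctl -A output line for Intel's wear attribute (ID 233).
sample='233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age   Always       -       0'

# Pull the normalized value (4th field of the attribute row).
wear=$(printf '%s\n' "$sample" | awk '$2 == "Media_Wearout_Indicator" { print $4 }')
echo "normalized wear value: $wear"

# Flag the drive for replacement once it crosses a chosen threshold,
# e.g. 20, well before the attribute bottoms out at 0.
if [ "$wear" -lt 20 ]; then
    echo "phase this drive out"
fi
```

Run from cron at regular intervals, this gives the early warning needed to phase drives out before they approach EOL.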

Regarding RAID, a good theoretical case could be made for RAID 5 on
SSD, since the 'write hole' penalty is going to be far less.  I'd
still stick with RAID 10, however, if it were my stuff.  Aside: I
would also be using software RAID.  I'm a big believer in mdadm on
Linux, especially when using SSDs, and it looks like TRIM support
may be workable here:

http://serverfault.com/questions/227918/possible-to-get-ssd-trim-discard-working-on-ext4-lvm-software-raid-in-linu
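For reference, the mdadm setup being suggested might look like the sketch below. The device names, mount point, and four-drive layout are assumptions for illustration; note that TRIM passthrough through md was still patchy at this time, so the discard mount option should be verified against your kernel before relying on it:

```shell
# Create a software RAID 10 array across four SSDs with mdadm.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format with ext4 and mount with discard so the filesystem can issue
# TRIM to the array (effective only if the kernel's md layer passes
# discards through), plus noatime to cut needless write traffic.
mkfs.ext4 /dev/md0
mount -o discard,noatime /dev/md0 /var/lib/postgresql

# Monitor array health and rebuild progress:
cat /proc/mdstat
```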

merlin