Re: Areca 1260 Performance - Mailing list pgsql-performance

From Brian Wipf
Subject Re: Areca 1260 Performance
Date
Msg-id 205957AA-B783-4CA7-B620-EFBF86D047EE@clickspace.com
In response to Re: Areca 1260 Performance (was: File Systems)  (Ron <rjpeace@earthlink.net>)
Responses Re: Areca 1260 Performance
List pgsql-performance
I appreciate your suggestions, Ron. That also helps answer my question
about processor selection for our next box; I wasn't sure whether the
Kentsfield's lower clock speed relative to the Woodcrest, offset by
having double the cores, would work out better for us overall.
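
As a rough back-of-envelope comparison (this assumes a perfectly
parallel workload, which a database rarely is, and assumes ~2.66 GHz
for the quad-core part, since no exact model is named here):

    # Back-of-envelope only: assumes perfect parallel scaling.
    woodcrest  = 3.00 * 2   # Xeon 5160 (Woodcrest): 3.0 GHz x 2 cores
    kentsfield = 2.66 * 4   # quad-core part: ~2.66 GHz (assumed) x 4 cores
    print(f"aggregate cycle ratio per socket: {kentsfield / woodcrest:.2f}x")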

On 6-Dec-06, at 4:25 PM, Ron wrote:

> The 1100 series is PCI-X based.  The 1200 series is PCI-E x8
> based.  Apples and oranges.
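
For context on why that's apples and oranges, a rough look at the bus
ceilings (PCIe 1.0 assumed, since that is what shipped in 2006):

    # Rough bus ceilings; PCIe 1.0 assumed (250 MB/s per lane per direction)
    pcix  = 133e6 * 8    # 64-bit PCI-X @ 133 MHz: ~1.06 GB/s, shared by the bus
    pcie8 = 8 * 250e6    # PCIe x8: ~2 GB/s each direction, dedicated to the card
    print(f"PCI-X ~{pcix / 1e9:.2f} GB/s vs PCIe x8 ~{pcie8 / 1e9:.2f} GB/s")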
>
> I still think Luke Lonergan or Josh Berkus may have some
> interesting ideas regarding possible OS and SW optimizations.
>
> WD1500ADFDs are each good for ~90MBps read and ~60MBps write ASTR
> (average sustained transfer rate).  That means your 16 HD RAID 10
> should be sequentially transferring ~720MBps read and ~480MBps write.
> Clearly more HDs will be required to allow an ARC-12xx to attain its
> peak performance.
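
The arithmetic behind those estimates, as a minimal Python sketch
(per-disk rates are the WD1500ADFD figures quoted above):

    def raid10_seq(disks, read_mbps=90, write_mbps=60):
        # Sequentially, a 2N-disk RAID 10 behaves like an N-disk stripe:
        # each mirrored pair contributes one stripe member.
        stripes = disks // 2
        return stripes * read_mbps, stripes * write_mbps

    r, w = raid10_seq(16)
    print(f"~{r} MBps read, ~{w} MBps write")  # ~720 read / ~480 write

Some controllers can also split sequential reads across both halves of
a mirror, which would raise the read estimate; the model above is the
conservative one that matches the figures quoted.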
>
> One thing that occurs to me with your present HW is that your CPU
> utilization numbers are relatively high.
> Since 5160s are clocked about as high as is available, that leaves
> trying CPUs with more cores and trying more CPUs.
>
> You've basically got 4 HW threads at the moment.  If you can,
> evaluate CPUs and mainboards that allow for 8 or 16 HW threads.
> Intel-wise, that's the new Kentsfields.  AMD-wise, you have lots of
> 4S mainboard options, but the AMD 4C CPUs won't be available until
> sometime late in 2007.
>
> I've got other ideas, but this list is not the appropriate venue
> for the level of detail required.
>
> Ron Peacetree
>
>
> At 05:30 PM 12/6/2006, Brian Wipf wrote:
>> On 6-Dec-06, at 2:47 PM, Brian Wipf wrote:
>>
>>>> Hmmm.   Something is not right.  With a 16 HD RAID 10 based on 10K
>>>> rpm HDs, you should be seeing higher absolute performance numbers.
>>>>
>>>> Find out what HW the Areca guys and Tweakers guys used to test the
>>>> 1280s.
>>>> At LW2006, Areca was demonstrating all-in-cache reads and writes
>>>> of ~1600MBps and ~1300MBps respectively along with RAID 0
>>>> Sustained Rates of ~900MBps read, and ~850MBps write.
>>>>
>>>> Luke, I know you've managed to get higher IO rates than this with
>>>> this class of HW.  Is there a OS or SW config issue Brian should
>>>> closely investigate?
>>>
>>> I wrote 1280 by mistake. It's actually a 1260. Sorry about that.
>>> The IOP341 class of cards weren't available when we ordered the
>>> parts for the box, so we had to go with the 1260. The box(es) we
>>> build next month will either have the 1261ML or 1280 depending on
>>> whether we go 16 or 24 disk.
>>>
>>> I noticed Bucky got almost 800 random seeks per second on her Dell
>>> PowerEdge 2950 with 6 10,000 RPM SAS disks. The random seek
>>> performance of this box is what disappointed me the most. Even
>>> running 2 concurrent bonnies, the random seek performance only
>>> increased from 644 seeks/sec to 813 seeks/sec. Maybe there is some
>>> setting I'm missing? This card looked pretty impressive on tweakers.net.
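
For sanity-checking seek rates outside of bonnie, here is a minimal
random-read sketch in Python (illustrative only, not bonnie itself;
use a test file much larger than RAM so the page cache doesn't inflate
the numbers):

    import os, random, time

    def seeks_per_sec(path, seconds=10, blk=4096):
        # Issue 4 KB reads at random offsets and count completions.
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        done, end = 0, time.time() + seconds
        while time.time() < end:
            os.pread(fd, blk, random.randrange(0, size - blk))
            done += 1
        os.close(fd)
        return done / seconds

    # e.g. seeks_per_sec("/mnt/raid/testfile"); run two copies in
    # separate processes to approximate the 2-concurrent-bonnies case.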
>>
>> Areca has some performance numbers in a downloadable PDF for the
>> ARC-1120, which is in the same class as the ARC-1260 but with 8
>> ports instead of 16. With all 8 drives in a RAID 0, the card gets
>> the following performance numbers:
>>
>> Card        single-thread write   20-thread write   single-thread read   20-thread read
>> ARC-1120    321.26 MB/s           404.76 MB/s       412.55 MB/s          672.45 MB/s
>>
>> My numbers for sequential I/O for the ARC-1260 in a 16 disk RAID 10
>> are slightly better than the ARC-1120's in an 8 disk RAID 0 for a
>> single thread. I guess this means my numbers are reasonable.
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 5: don't forget to increase your free space map settings
>
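
For reference, the free space map settings that tip refers to are the
max_fsm_pages and max_fsm_relations parameters in 8.x-era
postgresql.conf. Illustrative values only; size max_fsm_pages from
what VACUUM VERBOSE reports:

    # postgresql.conf (PostgreSQL 8.x era) -- illustrative values only
    max_fsm_pages = 200000     # must cover all pages with reclaimable free space
    max_fsm_relations = 1000   # relations (tables + indexes) tracked by the FSM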


