On Tue, 2006-08-22 at 08:16, Marty Jia wrote:
> Hi, Mark
>
> Thanks, here is our hardware info:
>
> RAID 10, using 3Par virtual volume technology across ~200 physical FC
> disks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2
> virtual disks for WAL, also striped. SAN attached with Qlogic SAN
> surfer multipathing to load balance each LUN on two 2GBs paths. HBAs
> are Qlogic 2340's. 16GB host cache on 3Par.
A few points.
Someone (Luke, I think) posted that Linux's LVM has a throughput limit
of around 600 MB/s.
Why are you using multiple virtual disks on the 3Par? Did you first try
a single big virtual disk, so you'd have something to compare against?
I think your disk subsystem is overthought for a 3Par. If you were
running physical disks on a locally attached RAID card, splitting
things up like this would be a good idea, but here you're just stacking
layers of virtualization (LVM striping on top of the array's own
striping) for no gain, and may in fact be going backwards.
I'd make two volumes on the 3Par and let the array do all the
virtualization for you. Put a couple of disks in a mirror set for
pg_xlog, format it ext2, and mount it noatime. Make another volume
from a dozen or so disks in RAID 0 on top of RAID 1 (i.e. build a
bunch of mirror sets and stripe them into one big partition) and mount
that for PGDATA. Simplify and get a baseline; then start mucking about
to see if you can get better performance. Change ONE THING at a time,
only one thing, and test each change well.
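For what it's worth, here's a sketch of that layout done with Linux
software RAID -- device names and mount points are hypothetical, and on
a 3Par you'd carve the equivalent volumes in the array's own management
tools instead of mdadm:

```shell
# Hypothetical devices (/dev/sdb ...) and paths -- adjust to your system.

# Two-disk mirror for pg_xlog, ext2, mounted noatime:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext2 /dev/md0
mount -o noatime /dev/md0 /var/lib/pgsql/pg_xlog

# Twelve disks striped over mirror pairs for PGDATA
# (mdadm's raid10 level is exactly the RAID 0-over-RAID 1 described above):
mdadm --create /dev/md1 --level=10 --raid-devices=12 /dev/sd[d-o]
mkfs.ext3 /dev/md1
mount -o noatime /dev/md1 /var/lib/pgsql/data
```

The point is two plain volumes with one layer of redundancy/striping
each, so any later tuning has a clean baseline to beat.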
Do you have the latest and greatest drivers for the Qlogic cards?
I'd also suggest some component testing to make sure each piece is
performing as expected; bonnie++ and dd come to mind.
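For dd, a quick sequential write/read check might look like the
following -- the file path is a placeholder (point it at the volume
under test), and for meaningful numbers the file should be roughly 2x
RAM so you're not just measuring the page cache:

```shell
# Hypothetical target path; bump count until the file is ~2x RAM.
dd if=/dev/zero of=/tmp/dd_test bs=8k count=12800 conv=fsync  # sequential write
dd if=/tmp/dd_test of=/dev/null bs=8k                         # sequential read
rm -f /tmp/dd_test
```

dd reports throughput when it finishes; compare the write number per
volume against what the paths should sustain (two 2 Gb/s FC paths is
about 400 MB/s theoretical) to spot which layer is the bottleneck.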