Re: Configuration Recommendations - Mailing list pgsql-performance

From Robert Klemme
Subject Re: Configuration Recommendations
Date
Msg-id CAM9pMnPkgq8TkFYqa-jZczLW=e4FwqH3EJPMNGFTLtna93KHyw@mail.gmail.com
In response to Re: Configuration Recommendations  (Jan Nielsen <jan.sture.nielsen@gmail.com>)
List pgsql-performance
Hi Jan,

On Thu, May 3, 2012 at 4:10 AM, Jan Nielsen <jan.sture.nielsen@gmail.com> wrote:
> Below is the hardware, firmware, OS, and PG configuration pieces that I'm
> settling in on. As was noted, the local storage used for OS is actually two
> disks with RAID 10. If anything appears like a mistake or something is
> missing, I'd appreciate the feedback.

You should quickly patent this solution.  As far as I know, you need
at least four disks for RAID 10. :-)
http://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID

Or did you mean RAID 1?

> I'm still working on the benchmarks scripts and I don't have good/reliable
> numbers yet since our SAN is still very busy reconfiguring from the 2x4 to
> 1x8. I'm hoping to get them running tomorrow when the SAN should complete
> its 60 hours of reconfiguration.

Yeah, it does not seem to make a lot of sense to test during this phase.

> Thanks, again, for all the great feedback.

You're welcome!

> 300GB RAID 10 2x15k drive for OS on local storage
> */dev/sda1 RA*                                            4096
> */dev/sda1 FS*                                            ext4
> */dev/sda1 MO*
> *IO Scheduler sda*            noop anticipatory deadline [cfq]

See above.

> 600GB RAID 10 8x15k drive for $PGDATA on SAN
> */dev/sdb1 RA*                                            4096
> */dev/sdb1 FS*                                             xfs
> */dev/sdb1 MO*        allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime
> *IO Scheduler sdb*            noop anticipatory deadline [cfq]
>
> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN
> */dev/sde1 RA*                                            4096
> */dev/sde1 FS*                                             xfs
> */dev/sde1 MO*        allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime
> *IO Scheduler sde*            noop anticipatory deadline [cfq]

See above.
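For what it's worth, the readahead (RA) values quoted above can be
verified and set with blockdev; a quick sketch (device names are taken
from the quoted config and will differ on other systems):

```shell
# Show current readahead in 512-byte sectors (4096 sectors = 2 MB)
blockdev --getra /dev/sdb1

# Set readahead; rerun from rc.local or a udev rule to persist across reboots
blockdev --setra 4096 /dev/sdb1
```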

With regard to the scheduler, I have frequently read that deadline
and noop perform better than cfq for PG loads.  Fortunately this can
easily be changed at runtime.
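The scheduler can be inspected and switched per device through sysfs;
a sketch (sdb stands in for the data volume from the quoted config):

```shell
# The bracketed entry is the currently active scheduler,
# e.g.: noop anticipatory deadline [cfq]
cat /sys/block/sdb/queue/scheduler

# Switch to deadline at runtime (as root); persist it via the
# kernel boot line (elevator=deadline) or a udev rule
echo deadline > /sys/block/sdb/queue/scheduler
```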

Maybe this also has some additional input:
http://www.fccps.cz/download/adv/frr/hdd/hdd.html

On Thu, May 3, 2012 at 8:54 AM, John Lister <john.lister@kickstone.co.uk> wrote:
> I was wondering if it would be better to put the xlog on the same disk as
> the OS? Apart from the occasional log writes I'd have thought most OS data
> is loaded into cache at the beginning, so you effectively have an unused
> disk. This gives you another spindle (mirrored) for your data.
>
> Or have I missed something fundamental?

Separating them avoids interference between the OS and WAL logging
(e.g. a script running berserk and filling the OS filesystem).  It is
also easier to manage (e.g. in case of relocation to another volume).
And you can have different mount options (e.g. you might want atime
on the OS volume).
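If you do keep pg_xlog on its own volume, relocating an existing one
is straightforward; a sketch (paths are illustrative only, assuming a
stock data directory and the new volume already mounted at /mnt/wal):

```shell
# Stop the server first - pg_xlog must not change under a running cluster
pg_ctl -D /var/lib/pgsql/data stop

# Move the WAL directory to the dedicated volume and symlink it back
mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog

pg_ctl -D /var/lib/pgsql/data start
```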

Kind regards

robert


--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
