On Fri, 23 Apr 2004, Chris Hoover wrote:
> DB's on Powervaults 220S using raid 5 (over 6 disks)
What controller is this, the Adaptec? We've found it to be slower than
the LSI MegaRAID based controller, but YMMV.
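If you're not sure which card is actually behind the PowerVault, something
like this should tell you (assuming lspci is installed on that box):

    # list PCI devices and look for the RAID/SCSI controller
    lspci | grep -i -E 'raid|scsi'
    # or see what the SCSI layer registered at boot
    cat /proc/scsi/scsi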
> Running RH ES 2.1
Are you running the latest kernel for ES 2.1? Early 2.4 kernels are
pretty pokey and have some odd behaviour under load that later 2.4
kernels seemed to fix.
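A quick check of what you're running now:

    # show the running kernel version
    uname -r

then compare that against the latest errata kernel Red Hat has put out for ES 2.1.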
> Here is the postgresql.conf from the server with the 11GB db:
>
> max_connections = 64
> shared_buffers = 32768 # 256MB=32768(buffs)*8192(bytes/buff)
> max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes
> max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes
If you're doing lots of updates and such, you might want these higher.
Have you run a VACUUM FULL on the databases since taking over?
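As a rough sketch, something like the below -- the numbers are placeholders,
not a recommendation, so size them from your own schema and update rate
(IIRC a database-wide VACUUM VERBOSE on 7.4 prints free space map usage at
the end, which tells you whether these are big enough):

    max_fsm_relations = 10000    # should cover every table and index, with headroom
    max_fsm_pages = 200000       # should cover pages freed between regular vacuums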
> sort_mem = 4096 # 256MB=4096(bytes/proc)*64(procs or conns)
Sorry, that's wrong. sort_mem is measured in kilobytes, i.e. 8192 means 8
megs of sort_mem. Try setting it a bit higher (you've got LOTS of RAM in these
boxes), to something like 16 or 32 meg.
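For example, in postgresql.conf (keep in mind it's per sort, per backend, so
64 connections each doing a big sort could chew up 64 x this):

    sort_mem = 16384    # 16MB per sort operation, value is in kilobytes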
> checkpoint_segments = 16 # in logfile segments, min 1, 16MB each
> checkpoint_timeout = 30 # range 30-3600, in seconds
> effective_cache_size = 131072 # typically 8KB each
This still looks low. On one machine you're showing kernel cache of about
.7 gig, on the other it's 6 gig. 6 gigs of kernel cache would be a
setting of around 800000. It's more of a nudge factor than an exact science,
so don't worry too much.
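The arithmetic, using the 6 gig box as the example:

    6 * 1024 * 1024 KB / 8 KB per page = 786432 pages

    effective_cache_size = 786432    # ~6GB of OS cache, expressed in 8KB pages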
If you've got fast I/O, look at lowering random_page_cost to something
between 1 and 2. We use 1.3 to 1.4 on most of our machines with fast
drives under them.
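i.e. in postgresql.conf:

    random_page_cost = 1.4    # default is 4; lower values make the planner favor index scans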
I'd use vmstat to see if you're I/O bound.
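Something like:

    # sample every 5 seconds; watch bi/bo (blocks in/out) and the cpu idle column
    vmstat 5

If bi/bo stay pegged while the CPUs sit mostly idle, you're I/O bound.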
Also, look for index bloat. Before 7.4 it was a serious problem. With
7.4, regular vacuuming should reclaim most lost space, but there are corner
cases where you still might need to REINDEX.
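If a particular index has bloated, something like this (the names here are
just examples, substitute your own):

    REINDEX INDEX some_big_index;
    -- or rebuild all indexes on a table at once
    REINDEX TABLE some_big_table;

Keep in mind REINDEX takes an exclusive lock on the table, so schedule it
for a quiet window.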