>>>>> "s" == sven@dmv.com writes:
s> What are you using to create your raid?
Hm. I didn't set this up. I'll have to check.
s> You say it is "no doubt disk
s> I/O" - does iostat confirm this? A lot of performance issues are related
s> to the size of the stripe you chose for the striped portion of the
s> array, the actual array configuration, etc. I am assuming you have
s> looked at system variables such as autoup and the like? What tweaks
s> have you done?
I've mainly been using Glance, which shows a lot of queued requests
for the disks in question.
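I can also capture iostat during the heavy periods to confirm it;
something like this (Solaris syntax, 5-second samples):

    iostat -xn 5

Sustained nonzero "wait" and high "%w"/"%b" on the DB disks would back
up what Glance is showing.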
Here's what we currently have in /etc/system related to UFS (comments
mark what each knob does):
* DNLC (directory name lookup cache) size
set ncsize = 257024
* age in seconds a dirty page may reach before fsflush writes it out
set autoup = 90
* high-water mark for the I/O buffer cache, in KB
set bufhwm = 15000
* interval in seconds between fsflush wakeups
set tune_t_fsflushr = 15
* UFS write throttle: suspend writes to a file once this many bytes are outstanding
set ufs:ufs_HW = 16777216
* UFS write throttle: resume writes once outstanding bytes drop below this
set ufs:ufs_LW = 8388608
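(If it helps, I believe the live values can be read back with mdb to
confirm they took effect, e.g.:

    echo "autoup/D" | mdb -k

will print the running kernel's autoup.)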
s> Also, are your pg_xlog and data directories separated onto separate
s> volumes? Doing so will help immensely.
No, they are on the same volume.
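That's probably worth fixing first, then. As I understand it, the
usual recipe is to shut down the postmaster and symlink pg_xlog onto
its own volume (paths here are placeholders, not our real ones):

    pg_ctl stop -D /data/pgsql
    mv /data/pgsql/pg_xlog /xlog/pg_xlog
    ln -s /xlog/pg_xlog /data/pgsql/pg_xlog
    pg_ctl start -D /data/pgsql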
s> What are you using to measure
s> performance?
Nothing too scientific other than the fact that since we moved the DB,
we consistently see a large number of postmaster processes (close to
100), where before we did not.
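(That count is just from something like:

    ps -ef | grep -c '[p]ostmaster'

where the [p] keeps grep from matching itself.)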
--
Brandon