Hello All,
I have an iostat question: one of the RAID arrays seems to behave
differently from the other three. Is this reasonable behavior for the
database, or should I suspect a hardware or configuration problem?
But first some background:
PostgreSQL 7.4.2
Linux 2.4.20, 2 GB RAM, one Xeon 2.4 GHz with HT turned off
3ware SATA RAID controller with 8 identical drives configured as 4
RAID-1 spindles
64 MB RAM disk
postgresql.conf differences from postgresql.conf.sample:
tcpip_socket = true
max_connections = 128
shared_buffers = 2048
vacuum_mem = 16384
max_fsm_pages = 50000
wal_buffers = 128
checkpoint_segments = 64
effective_cache_size = 196000
random_page_cost = 1
default_statistics_target = 100
stats_command_string = true
stats_block_level = true
stats_row_level = true
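In case one of these didn't take effect, here's a minimal sketch (psycopg2
syntax; the DSN is a placeholder) that asks the running server which
values it actually loaded:

# Hypothetical sketch: ask the server which settings it actually loaded.
# psycopg2 syntax; the DSN is a placeholder.
import psycopg2

SETTINGS = [
    "shared_buffers", "vacuum_mem", "max_fsm_pages", "wal_buffers",
    "checkpoint_segments", "effective_cache_size", "random_page_cost",
]

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
cur = conn.cursor()
for name in SETTINGS:
    # SHOW takes an identifier, not a bind parameter, so interpolate
    # from our fixed list above.
    cur.execute("SHOW %s" % name)
    print("%-24s %s" % (name, cur.fetchone()[0]))
conn.close()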
The database is spread over 5 spindles:
/ram0 holds the busiest insert/update/delete table and associated
indexes for temporary session data
/sda5 holds the OS and most of the tables and indexes
/sdb2 holds the WAL
/sdc1 holds the 2nd-busiest i/u/d table (70% of the writes)
/sdd1 holds the single index for that busy table on /sdc1
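(7.4 has no tablespaces, so a layout like this is usually wired up by
relocating a relation's file onto the other mount and leaving a symlink
behind, with the postmaster stopped. A hypothetical sketch; the paths
and OIDs are placeholders:)

# Hypothetical sketch of moving one relation to a dedicated spindle.
# PGDATA, the database OID, and the relfilenode are placeholders; look
# the last one up with:
#   SELECT relfilenode FROM pg_class WHERE relname = 'busy_table';
# Run with the postmaster stopped; assumes the table is under 1 GB
# (a single file segment).
import os
import shutil

PGDATA = "/var/lib/pgsql/data"   # placeholder
DB_OID = "17142"                 # placeholder: our database's OID
RELFILENODE = "17160"            # placeholder: the busy table's file
TARGET = "/sdc1/pgdata"          # the dedicated spindle's mount

src = os.path.join(PGDATA, "base", DB_OID, RELFILENODE)
dst = os.path.join(TARGET, RELFILENODE)

shutil.move(src, dst)   # relocate the table's data file
os.symlink(dst, src)    # leave a symlink for the postmaster to follow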
Lately we have 45 connections open from a Python/psycopg connection pool.
99% of the reads are cached.
No swapping.
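For reference, the pool follows the usual psycopg pattern; a minimal
sketch in psycopg2 syntax (DSN, pool sizes, and the query are
placeholders):

# Minimal sketch of such a pool, in psycopg2 syntax.  DSN and pool
# sizes are placeholders.
import psycopg2.pool

pool = psycopg2.pool.ThreadedConnectionPool(
    5,                       # minconn: keep a few connections warm
    45,                      # maxconn: matches the ~45 we see open
    "dbname=mydb user=web",  # placeholder DSN
)

conn = pool.getconn()
try:
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM sessions")  # placeholder query
    print(cur.fetchone()[0])
    conn.commit()
finally:
    pool.putconn(conn)  # always hand the connection back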
And finally iostat reports:
Device:     rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
/dev/sda5     0.01    3.32  0.01  0.68    0.16   32.96   0.08  16.48     48.61      0.09   12.16   2.01   0.14
/dev/sdb2     0.00    6.38  0.00  3.54    0.01   79.36   0.00  39.68     22.39      0.12    3.52   1.02   0.36
/dev/sdc1     0.03    0.13  0.00  0.08    0.27    1.69   0.13   0.84     24.06      0.13  163.28  13.75   0.11
/dev/sdd1     0.01    8.67  0.00  0.77    0.06   82.35   0.03  41.18    107.54      0.09   10.51   2.76   0.21
/sdc1's await seems awfully long compared to the rest of the stats, and
its svctm (13.75 ms) is also roughly ten times that of the other spindles.
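In case it's useful, a throwaway helper along these lines (hypothetical;
assumes device lines start with /dev/ as in the output above) could flag
the outlier automatically:

# Hypothetical helper: run iostat -x, pull each device's await, and
# flag anything far above the median.  Assumes device lines start
# with /dev/ as in the output above; the 5x threshold is arbitrary.
import subprocess

out = subprocess.check_output(["iostat", "-x"]).decode()
rows = []
for line in out.splitlines():
    fields = line.split()
    if fields and fields[0].startswith("/dev/"):
        # columns end with: ... avgrq-sz avgqu-sz await svctm %util
        rows.append((fields[0], float(fields[-3])))

if rows:
    median = sorted(a for _, a in rows)[len(rows) // 2]
    for dev, await_ms in rows:
        flag = "  <-- suspicious" if await_ms > 5 * median else ""
        print("%-12s await=%8.2f ms%s" % (dev, await_ms, flag))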
Jelle
--
http://www.sv650.org/audiovisual/loading_a_bike.mpeg
Osama-in-October office pool.