Load Increase - Mailing list pgsql-general

From: Ogden
Subject: Load Increase
Date:
Msg-id: 96D39ECF-707E-43AA-B1CA-AF4F126EA11D@darkstatic.com
List: pgsql-general
PostgreSQL 9.0.3 has been running very smoothly for us, with streaming replication and WAL archiving
enabled. Things have run consistently and we are extremely happy with the performance.

During the early morning hours, we have processes that run and import certain data from clients, nothing too crazy:
about 4-5 MB CSV files being imported. This normally runs flawlessly; however, this morning the load on the server was
high and a few of the import processes ran for over two hours. The load was around 4.00 and stayed there for a while.
The import scripts eventually finished and the load went back down, but any time there was a heavy write, the load
would spike. I don't know whether this is because traffic on the database box increased or whether it was
Postgres/kernel related. I did see the dmesg messages pasted at the bottom of this email.
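
Next time the imports kick off I am planning to watch the disks while they run, to see whether the
spike is I/O wait rather than CPU. Something like this is what I had in mind (standard sysstat/procps
tools; the 5-second interval is just a guess on my part):

    iostat -x 5
    vmstat 5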

Things appear to be normal now, but I want to ask: what counts as a heavy load when just looking at uptime, and what
causes the load to increase under reasonably heavy writes? Could the streaming replication be causing some of the load
increase?
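
Also, is pg_stat_bgwriter the right place to look to see whether checkpoints are behind the write
spikes? I was thinking of something along these lines (the database name is just a placeholder):

    psql -d mydb -c "SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend FROM pg_stat_bgwriter;"

If checkpoints_req and buffers_backend keep climbing during the imports, I assume that would point at
checkpoint/write pressure rather than the streaming replication itself.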

Thank you

Ogden

[3215764.704206] INFO: task postmaster:5087 blocked for more than 120 seconds.
[3215764.704236] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[3215764.704281] postmaster    D 0000000000000000     0  5087  20996 0x00000000
[3215764.704285]  ffff88043e46b880 0000000000000086 0000000000000000 0000000000000000
[3215764.704289]  ffff8800144ffe48 ffff8800144ffe48 000000000000f9e0 ffff8800144fffd8
[3215764.704293]  0000000000015780 0000000000015780 ffff88043b50d4c0 ffff88043b50d7b8
[3215764.704296] Call Trace:
[3215764.704302]  [<ffffffff812fb05a>] ? __mutex_lock_common+0x122/0x192
[3215764.704306]  [<ffffffff810f8e72>] ? getname+0x23/0x1a0
[3215764.704309]  [<ffffffff812fb182>] ? mutex_lock+0x1a/0x31
[3215764.704314]  [<ffffffff810e5881>] ? virt_to_head_page+0x9/0x2a
[3215764.704318]  [<ffffffff810ef4bf>] ? generic_file_llseek+0x22/0x53
[3215764.704322]  [<ffffffff810ee2f8>] ? sys_lseek+0x44/0x64
[3215764.704325]  [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b
[3215764.704328] INFO: task postmaster:5090 blocked for more than 120 seconds.
[3215764.704357] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[3215764.704402] postmaster    D 0000000000000000     0  5090  20996 0x00000000
[3215764.704406]  ffff88043e46b880 0000000000000082 0000000000000000 0000000000000000
[3215764.704410]  ffff88001433de48 ffff88001433de48 000000000000f9e0 ffff88001433dfd8
[3215764.704414]  0000000000015780 0000000000015780 ffff88043b7569f0 ffff88043b756ce8
[3215764.704418] Call Trace:
[3215764.704421]  [<ffffffff812fb05a>] ? __mutex_lock_common+0x122/0x192
[3215764.704425]  [<ffffffff810f8e72>] ? getname+0x23/0x1a0
[3215764.704428]  [<ffffffff812fb182>] ? mutex_lock+0x1a/0x31
[3215764.704431]  [<ffffffff810e5881>] ? virt_to_head_page+0x9/0x2a
[3215764.704435]  [<ffffffff810ef4bf>] ? generic_file_llseek+0x22/0x53
[3215764.704438]  [<ffffffff810ee2f8>] ? sys_lseek+0x44/0x64
[3215764.704441]  [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b

