Thread: SMP on a heavily loaded database
CentOS 5.X, kernel 2.6.18-274
pgsql-9.1 from pgdg-91-centos.repo
Relatively small database, 3.2 GB
Lots of inserts, updates, deletes.

I see unbalanced _user_ CPU usage on CPU 14, which is exclusively assigned to the hardware RAID controller. What am I doing wrong, and is it possible to fix this somehow?

Thanks in advance.

Andrew.

# top -d 10.00 -b -n 2 -U postgres -c

top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42
Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st
Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers
Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6513 postgres 16 0 4329m 235m 225m S 3.1 1.5 0:02.24 postgres: XXXX_DB [local] idle
6891 postgres 16 0 4331m 223m 213m S 1.7 1.4 0:01.44 postgres: XXXX_DB [local] idle
6829 postgres 16 0 4329m 219m 210m S 1.6 1.4 0:01.56 postgres: XXXX_DB [local] idle
6539 postgres 16 0 4330m 319m 308m S 1.5 2.0 0:03.64 postgres: XXXX_DB [local] idle
6487 postgres 16 0 4329m 234m 224m S 1.2 1.5 0:02.95 postgres: XXXX_DB [local] idle
6818 postgres 16 0 4328m 224m 215m S 1.2 1.4 0:02.00 postgres: XXXX_DB [local] idle
6831 postgres 16 0 4328m 215m 206m S 1.2 1.3 0:01.41 postgres: XXXX_DB [local] idle
6868 postgres 16 0 4330m 223m 213m S 1.2 1.4 0:01.46 postgres: XXXX_DB [local] idle
6899 postgres 15 0 4328m 220m 211m S 1.2 1.4 0:01.61 postgres: XXXX_DB [local] idle
6515 postgres 15 0 4331m 233m 223m S 1.0 1.5 0:02.66 postgres: XXXX_DB [local] idle
6890 postgres 16 0 4331m 279m 268m S 1.0 1.7 0:02.01 postgres: XXXX_DB [local] idle
7083 postgres 15 0 4328m 207m 199m S 1.0 1.3 0:00.77 postgres: XXXX_DB [local] idle
6374 postgres 16 0 4329m 245m 235m S 0.9 1.5 0:04.30 postgres: XXXX_DB [local] idle
6481 postgres 15 0 4328m 293m 285m S 0.9 1.8 0:03.17 postgres: XXXX_DB [local] idle
6484 postgres 16 0 4329m 236m 226m S 0.9 1.5 0:02.82 postgres: XXXX_DB [local] idle
6509 postgres 16 0 4332m 237m 225m S 0.9 1.5 0:02.90 postgres: XXXX_DB [local] idle
6522 postgres 15 0 4330m 238m 228m S 0.9 1.5 0:02.35 postgres: XXXX_DB [local] idle
6812 postgres 16 0 4329m 283m 274m S 0.9 1.8 0:02.19 postgres: XXXX_DB [local] idle
7086 postgres 15 0 4328m 202m 194m S 0.9 1.3 0:00.70 postgres: XXXX_DB [local] idle
6494 postgres 15 0 4329m 317m 306m S 0.8 2.0 0:03.98 postgres: XXXX_DB [local] idle
6542 postgres 16 0 4330m 309m 299m S 0.8 1.9 0:02.79 postgres: XXXX_DB [local] idle
6550 postgres 15 0 4329m 287m 277m S 0.8 1.8 0:02.80 postgres: XXXX_DB [local] idle
6777 postgres 16 0 4329m 229m 219m S 0.8 1.4 0:02.13 postgres: XXXX_DB [local] idle
6816 postgres 16 0 4329m 230m 220m S 0.8 1.4 0:01.61 postgres: XXXX_DB [local] idle
6822 postgres 15 0 4329m 305m 295m S 0.8 1.9 0:02.09 postgres: XXXX_DB [local] idle
6897 postgres 15 0 4328m 219m 210m S 0.8 1.4 0:01.69 postgres: XXXX_DB [local] idle
6926 postgres 16 0 4328m 209m 200m S 0.8 1.3 0:00.81 postgres: XXXX_DB [local] idle
6473 postgres 16 0 4329m 236m 226m S 0.7 1.5 0:02.81 postgres: XXXX_DB [local] idle
6826 postgres 16 0 4330m 226m 216m S 0.7 1.4 0:02.14 postgres: XXXX_DB [local] idle
6834 postgres 16 0 4331m 282m 271m S 0.7 1.8 0:03.06 postgres: XXXX_DB [local] idle
6882 postgres 15 0 4330m 222m 212m S 0.7 1.4 0:01.83 postgres: XXXX_DB [local] idle
6885 postgres 16 0 4328m 104m 96m S 0.6 0.7 0:00.94 postgres: XXXX_DB [local] idle
6878 postgres 15 0 4319m 2992 1472 S 0.4 0.0 40:20.10 postgres: wal sender process postgres 555.555.555.555(47880) streaming 21B/2BFE82F8
6519 postgres 16 0 4330m 249m 240m S 0.3 1.6 0:03.14 postgres: XXXX_DB [local] idle
6477 postgres 16 0 4331m 239m 228m S 0.2 1.5 0:02.75 postgres: XXXX_DB [local] idle
6500 postgres 16 0 4328m 227m 219m S 0.2 1.4 0:01.84 postgres: XXXX_DB [local] idle
6576 postgres 16 0 4331m 289m 278m S 0.2 1.8 0:03.01 postgres: XXXX_DB [local] idle
6637 postgres 16 0 4330m 230m 220m S 0.2 1.4 0:02.13 postgres: XXXX_DB [local] idle
6773 postgres 16 0 4330m 225m 214m S 0.2 1.4 0:02.98 postgres: XXXX_DB [local] idle
6838 postgres 16 0 4329m 224m 215m S 0.2 1.4 0:01.30 postgres: XXXX_DB [local] idle
7283 postgres 16 0 4326m 24m 18m S 0.2 0.2 0:00.08 postgres: XXXX_DB [local] idle
6378 postgres 16 0 4329m 267m 258m S 0.1 1.7 0:03.74 postgres: XXXX_DB [local] idle
6439 postgres 15 0 4330m 256m 244m S 0.1 1.6 0:03.62 postgres: XXXX_DB [local] idle
6535 postgres 15 0 4330m 289m 279m S 0.1 1.8 0:03.14 postgres: XXXX_DB [local] idle
6538 postgres 15 0 4330m 231m 221m S 0.1 1.4 0:02.17 postgres: XXXX_DB [local] idle
6544 postgres 15 0 4329m 226m 216m S 0.1 1.4 0:01.86 postgres: XXXX_DB [local] idle
6546 postgres 15 0 4329m 229m 219m S 0.1 1.4 0:02.40 postgres: XXXX_DB [local] idle
6552 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.49 postgres: XXXX_DB [local] idle
6555 postgres 15 0 4328m 226m 217m S 0.1 1.4 0:02.05 postgres: XXXX_DB [local] idle
6558 postgres 16 0 4329m 233m 223m S 0.1 1.5 0:02.59 postgres: XXXX_DB [local] idle
6572 postgres 16 0 4328m 227m 218m S 0.1 1.4 0:01.69 postgres: XXXX_DB [local] idle
6580 postgres 16 0 4329m 229m 220m S 0.1 1.4 0:02.34 postgres: XXXX_DB [local] idle
6724 postgres 16 0 4331m 231m 220m S 0.1 1.4 0:01.80 postgres: XXXX_DB [local] idle
6804 postgres 16 0 4328m 115m 106m S 0.1 0.7 0:01.48 postgres: XXXX_DB [local] idle
6811 postgres 15 0 4329m 223m 214m S 0.1 1.4 0:01.51 postgres: XXXX_DB [local] idle
6821 postgres 16 0 4331m 306m 295m S 0.1 1.9 0:02.19 postgres: XXXX_DB [local] idle
6836 postgres 16 0 4329m 226m 216m S 0.1 1.4 0:01.72 postgres: XXXX_DB [local] idle
6879 postgres 16 0 4330m 222m 212m S 0.1 1.4 0:01.84 postgres: XXXX_DB [local] idle
6888 postgres 16 0 4328m 216m 208m S 0.1 1.4 0:01.32 postgres: XXXX_DB [local] idle
6896 postgres 16 0 4328m 213m 206m S 0.1 1.3 0:01.07 postgres: XXXX_DB [local] idle
14999 postgres 15 0 115m 1840 808 S 0.1 0.0 29:59.16 postgres: stats collector process
830 postgres 15 0 4319m 8396 6420 S 0.0 0.1 0:00.06 postgres: XXXX_DB 192.168.0.1(42974) idle
6808 postgres 15 0 4328m 222m 214m S 0.0 1.4 0:01.80 postgres: XXXX_DB [local] idle
6873 postgres 15 0 4329m 222m 213m S 0.0 1.4 0:01.92 postgres: XXXX_DB [local] idle
6875 postgres 16 0 4329m 228m 219m S 0.0 1.4 0:02.46 postgres: XXXX_DB [local] idle
6906 postgres 16 0 4328m 216m 208m S 0.0 1.4 0:00.83 postgres: XXXX_DB [local] idle
7274 postgres 15 0 4344m 534m 531m S 0.0 3.3 0:00.37 postgres: autovacuum worker process XXXX_DB
7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 postgres: postgres XXXX_DB 193.8.246.6(1032) idle
10553 postgres 15 0 4319m 6940 5000 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35402) idle
10600 postgres 15 0 4319m 6780 4848 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35612) idle
11146 postgres 15 0 4319m 7692 5744 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(39366) idle
12291 postgres 15 0 4319m 6716 4784 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(49540) idle
12711 postgres 15 0 4319m 8048 5984 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(51440) idle
12717 postgres 15 0 4319m 6768 4836 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(51616) idle
12815 postgres 15 0 4319m 6540 4608 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(52989) idle
13140 postgres 15 0 4319m 7736 5660 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(55225) idle
14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 postgres: postgres postgres 222.222.222.222(1030) idle
14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:46.80 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 postgres: logger process
14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:44.31 postgres: writer process
14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.76 postgres: wal writer process
14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.14 postgres: autovacuum launcher process
14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.06 postgres: archiver process last was 000000010000021B0000002A
15027 postgres 15 0 4319m 80m 77m S 0.0 0.5 31:35.48 postgres: monitor XXXX_DB 10.0.0.0(55433) idle
15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 28:39.80 postgres: monitor XXXX_DB 10.10.0.1(59360) idle
15808 postgres 15 0 4324m 15m 10m S 0.0 0.1 0:00.27 postgres: postgres XXXX_DB 222.222.222.222(1031) idle
18787 postgres 15 0 4319m 8004 5932 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(46831) idle
18850 postgres 15 0 4319m 7364 5304 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(48843) idle
20331 postgres 15 0 4319m 6592 4660 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(60573) idle
26950 postgres 15 0 4319m 8172 6136 S 0.0 0.0 0:00.03 postgres: XXXX_DB 192.168.0.1(47890) idle
27599 postgres 15 0 4319m 8220 6200 S 0.0 0.1 0:00.04 postgres: XXXX_DB 192.168.0.1(49566) idle
28039 postgres 15 0 4319m 6644 4696 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(38329) idle
30450 postgres 15 0 4319m 8412 6316 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(49490) idle
31327 postgres 15 0 4319m 8508 6412 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(57064) idle
31363 postgres 15 0 4319m 8428 6364 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(58128) idle
32624 postgres 15 0 4319m 7356 5340 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(38002) idle
32651 postgres 15 0 4319m 8540 6572 S 0.0 0.1 0:00.07 postgres: XXXX_DB 192.168.0.1(38544) idle
On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere <devnull@mail.ua> wrote:
> CentOS 5.X, kernel 2.6.18-274
> pgsql-9.1 from pgdg-91-centos.repo
> Relatively small database, 3.2 GB
> Lots of inserts, updates, deletes.
>
> I see unbalanced _user_ CPU usage on CPU 14, which is exclusively assigned to the hardware RAID controller. What am I doing wrong, and is it possible to fix this somehow?
>
> [top output trimmed]

So how many concurrent users are accessing this db? pgsql assigns one
process per connection, one core each, so to speak. It can't spread the
load for one user over all cores.
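For reference, a quick way to count the backends that are actually busy at a given instant (a sketch assuming psql superuser access, not a command from the thread; the database name follows the thread's placeholder). In 9.1, idle backends report '<IDLE>' in the current_query column:

    psql -U postgres -d XXXX_DB -c "SELECT count(*) FROM pg_stat_activity WHERE current_query <> '<IDLE>';"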
Friday, January 4, 2013, 0:42 -07:00 from Scott Marlowe <scott.marlowe@gmail.com>:
On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere <devnull@mail.ua> wrote:
> [quoted original post and top output trimmed]
> So how many concurrent users are accessing this db? pgsql assigns one
> process per connection, one core each, so to speak. It can't spread the
> load for one user over all cores.

64 PHP FastCGI processes over the Unix socket and about 20-30 over TCP.
________________________________
> From: devnull@mail.ua
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Re[2]: [PERFORM] SMP on a heavy loaded database
> Date: Fri, 4 Jan 2013 18:41:25 +0400
>
> [quoted thread trimmed]
>
> 64 PHP FastCGI processes over the Unix socket and about 20-30 over TCP.

Are you running irqbalance? The OS can pin a process to the respective IRQ handler.
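For reference, whether the irqbalance daemon is running on CentOS 5 can be checked with the stock SysV tools (a sketch, not commands from the thread):

    service irqbalance status
    chkconfig --list irqbalance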
On Fri, Jan 4, 2013 at 11:41 AM, nobody nowhere <devnull@mail.ua> wrote:
> So how many concurrent users are accessing this db? pgsql assigns one
> process per connection, one core each, so to speak. It can't spread the
> load for one user over all cores.
>
> 64 PHP FastCGI processes over the Unix socket and about 20-30 over TCP.

I guess that means the server isn't dedicated to postgres...

...have you checked which PID is using that core? Is it postgres-related?
Friday, January 4, 2013, 9:47 -05:00 from Charles Gomes <charlesrg@outlook.com>:
> Are you running irqbalance? The OS can pin a process to the respective IRQ handler.

I switch irqbalance off on any heavily loaded server and statically assign IRQs to processors using:
echo 000X > /proc/irq/XX/smp_affinity
irqbalance does not help to fix it.
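For reference, a sketch of the full procedure (the "megasas" driver name and IRQ 98 are assumptions; match whatever the RAID controller actually shows in /proc/interrupts):

    grep -i megasas /proc/interrupts       # find the controller's IRQ number, e.g. 98
    cat /proc/irq/98/smp_affinity          # current CPU bitmask for that IRQ
    echo 4000 > /proc/irq/98/smp_affinity  # hex mask 0x4000 = bit 14, i.e. CPU 14 only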
Friday, January 4, 2013, 11:52 -03:00 from Claudio Freire <klaussfreire@gmail.com>:
On Fri, Jan 4, 2013 at 11:41 AM, nobody nowhere <devnull@mail.ua> wrote:
> So how many concurrent users are accessing this db? pgsql assigns one
> process per connection, one core each, so to speak. It can't spread the
> load for one user over all cores.
>
> 64 PHP FastCGI processes over the Unix socket and about 20-30 over TCP.
> I guess that means the server isn't dedicated to postgres...
> ...have you checked which PID is using that core? Is it postgres-related?
How do I check that?
Only postgres on this server heavily uses the RAID controller; PHP is completely in XCache. At night I'll try to change the socket to TCP. Maybe it will help.
On Fri, Jan 4, 2013 at 1:23 PM, nobody nowhere <devnull@mail.ua> wrote:
>> ...have you checked which PID is using that core? Is it postgres-related?
>
> How do I check that?

An unfiltered top or ps might give you a clue. You could also try iotop: php does hit the filesystem (sessions stored on disk), and if it's on the same partition as postgres, postgres' fsyncs might cause it to flush to disk quite heavily.
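For reference, a batch-mode iotop invocation that logs only processes actually doing I/O (a sketch; iotop needs root and a kernel with I/O accounting, which not every 2.6.18 build provides):

    iotop -o -b -t -q -k -d 10 -n 6   # active processes only, timestamps, KB/s, 10s intervals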
On Fri, Jan 4, 2013 at 3:38 PM, nobody nowhere <devnull@mail.ua> wrote:
>> An unfiltered top or ps might give you a clue. You could also try
>
> Look at the letter at the start of the thread. It's filtered by -U postgres, so you can't see apache there.
>
>> iotop: php does hit the filesystem (sessions stored on disk), and if
>> it's on the same partition as postgres, postgres' fsyncs might cause
>> it to flush to disk quite heavily.
>
> The question was "which PID is using that core?"
> Can you answer that question with certainty using top or iotop? I can't.

If you see some process hogging CPU/IO in a way that's consistent with CPU14, then you have a candidate. I don't see much in that iotop, except the 640k/s writes in pg's writer, which isn't much at all unless you have a seriously underpowered/broken system.

If all fails, you can look for processes with high accumulated cputime, like the "monitor" ones there on the first top (though it doesn't say much, since that top is incomplete), or the walsender. Without the ability to compare against all other processes, none of that means much - but once you do, you can inspect those processes more closely.

Oh... and you can also tell top to show the "last used processor". I guess I should have said this first ;-)
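For reference, a non-interactive alternative to top's "last used processor" column, using the psr field of procps ps (a sketch, filtering for CPU 14):

    ps -eo pid,psr,pcpu,comm --sort=-pcpu | awk 'NR==1 || $2 == 14'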
> Oh... and you can also tell top to show the "last used processor". I
> guess I should have said this first ;-)

Even if it does not fix anything, at least I'll learn a new feature of top :)

It's definitely CPU 14.

Total DISK READ: [rest of the iotop output was lost]

top - 21:54:38 up 453 days, 23:34, 1 user, load average: 0.56, 0.55, 0.48
Tasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.1%us, 0.1%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.7%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 1.5%us, 0.4%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 2.1%us, 0.2%sy, 0.0%ni, 97.4%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu7 : 2.4%us, 0.4%sy, 0.0%ni, 97.0%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu8 : 1.4%us, 0.4%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.2%us, 0.5%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.5%si, 0.0%st
Cpu12 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 20.5%us, 0.9%sy, 0.0%ni, 78.1%id, 0.4%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu15 : 1.2%us, 0.1%sy, 0.0%ni, 98.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 16426540k total, 16173980k used, 252560k free, 219348k buffers
Swap: 4194232k total, 147296k used, 4046936k free, 14482096k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
47 root RT -5 0 0 0 S 0.0 0.0 0:34.84 15 [migration/15]
48 root 34 19 0 0 0 S 0.0 0.0 0:01.42 15 [ksoftirqd/15]
49 root RT -5 0 0 0 S 0.0 0.0 0:00.00 15 [watchdog/15]
65 root 10 -5 0 0 0 S 0.0 0.0 0:00.03 15 [events/15]
238 root 10 -5 0 0 0 S 0.0 0.0 0:03.76 15 [kblockd/15]
406 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 15 [cqueue/15]
601 root 15 0 0 0 0 S 0.0 0.0 88:52.30 15 [pdflush]
620 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 15 [aio/15]
964 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ata/15]
2684 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [kmpathd/15]
2914 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [rpciod/15]
3270 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ib_cm/15]
5906 rpc 15 0 8072 688 552 S 0.0 0.0 0:00.00 15 portmap
14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.39 15 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
44 root RT -5 0 0 0 S 0.0 0.0 0:40.50 14 [migration/14]
45 root 34 19 0 0 0 S 0.0 0.0 0:03.51 14 [ksoftirqd/14]
46 root RT -5 0 0 0 S 0.0 0.0 0:00.00 14 [watchdog/14]
64 root 10 -5 0 0 0 S 0.0 0.0 0:00.04 14 [events/14]
237 root 10 -5 0 0 0 S 0.0 0.0 9:51.44 14 [kblockd/14]
405 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 14 [cqueue/14]
619 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 14 [aio/14]
963 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ata/14]
1092 root 10 -5 0 0 0 S 0.0 0.0 52:21.12 14 [kjournald]
2683 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kmpathd/14]
2724 root 10 -5 0 0 0 S 0.0 0.0 2:15.40 14 [kjournald]
2726 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kjournald]
2913 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [rpciod/14]
3269 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ib_cm/14]
8970 postgres 16 0 4327m 205m 197m S 0.2 1.3 0:01.33 14 postgres: user user_db [local] idle
8973 postgres 15 0 4327m 199m 191m S 0.1 1.2 0:00.37 14 postgres: user user_db [local] idle
8977 postgres 16 0 4328m 48m 40m S 0.7 0.3 0:00.76 14 postgres: user user_db [local] idle
8980 postgres 16 0 4328m 51m 43m S 0.1 0.3 0:00.50 14 postgres: user user_db [local] idle
8981 postgres 15 0 4327m 203m 195m S 0.0 1.3 0:00.72 14 postgres: user user_db [local] idle
8985 postgres 15 0 4327m 43m 36m S 0.1 0.3 0:00.29 14 postgres: user user_db [local] idle
8988 postgres 16 0 4328m 205m 196m S 0.0 1.3 0:00.91 14 postgres: user user_db [local] idle
8991 postgres 15 0 4327m 205m 197m S 0.1 1.3 0:00.79 14 postgres: user user_db [local] idle
8993 postgres 15 0 4328m 207m 199m S 1.9 1.3 0:00.99 14 postgres: user user_db [local] idle
8996 postgres 15 0 4328m 205m 196m S 1.1 1.3 0:00.93 14 postgres: user user_db [local] idle
9000 postgres 16 0 4328m 207m 199m S 0.7 1.3 0:00.82 14 postgres: user user_db [local] idle
9004 postgres 16 0 4329m 204m 194m S 0.1 1.3 0:00.69 14 postgres: user user_db [local] idle
9005 postgres 15 0 4327m 200m 193m S 0.7 1.2 0:00.63 14 postgres: user user_db [local] idle
9007 postgres 15 0 4327m 199m 192m S 0.1 1.2 0:00.49 14 postgres: user user_db [local] idle
9010 postgres 15 0 4327m 202m 195m S 0.2 1.3 0:00.65 14 postgres: user user_db [local] idle
9016 postgres 15 0 4326m 34m 28m S 0.1 0.2 0:00.15 14 postgres: user user_db [local] idle
9018 postgres 16 0 4327m 203m 195m S 1.0 1.3 0:00.72 14 postgres: user user_db [local] idle
9020 postgres 15 0 4327m 45m 37m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle
9022 postgres 15 0 4327m 42m 35m S 0.1 0.3 0:00.20 14 postgres: user user_db [local] idle
9025 postgres 16 0 4328m 201m 193m S 0.3 1.3 0:00.75 14 postgres: user user_db [local] idle
9026 postgres 16 0 4327m 47m 40m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle
9038 postgres 16 0 4327m 201m 193m S 0.1 1.3 0:00.70 14 postgres: user user_db [local] idle
9042 postgres 15 0 4327m 201m 193m S 1.8 1.3 0:00.71 14 postgres: user user_db [local] idle
9046 postgres 15 0 4327m 201m 193m S 0.1 1.3 0:00.65 14 postgres: user user_db [local] idle
9048 postgres 15 0 4327m 200m 193m S 1.4 1.2 0:00.52 14 postgres: user user_db [local] idle
9049 postgres 15 0 4328m 200m 192m S 0.1 1.2 0:00.50 14 postgres: user user_db [local] idle
9053 postgres 15 0 4327m 44m 37m S 0.1 0.3 0:00.34 14 postgres: user user_db [local] idle
9054 postgres 16 0 4327m 46m 40m S 0.1 0.3 0:00.43 14 postgres: user user_db [local] idle
9055 postgres 16 0 4328m 200m 192m S 0.0 1.3 0:00.39 14 postgres: user user_db [local] idle
9056 postgres 16 0 4328m 201m 192m S 0.7 1.3 0:00.75 14 postgres: user user_db [local] idle
9057 postgres 16 0 4327m 200m 192m S 0.2 1.3 0:00.72 14 postgres: user user_db [local] idle
9061 postgres 15 0 4328m 200m 192m S 0.0 1.2 0:00.49 14 postgres: user user_db [local] idle
9065 postgres 15 0 4328m 204m 196m S 0.3 1.3 0:00.80 14 postgres: user user_db [local] idle
9067 postgres 15 0 4327m 43m 35m S 0.0 0.3 0:00.30 14 postgres: user user_db [local] idle
9071 postgres 15 0 4327m 48m 40m S 0.1 0.3 0:00.53 14 postgres: user user_db [local] idle
9076 postgres 15 0 4326m 43m 36m S 0.0 0.3 0:00.61 14 postgres: user user_db [local] idle
9078 postgres 15 0 4328m 206m 198m S 0.0 1.3 0:00.64 14 postgres: user user_db [local] idle
9079 postgres 15 0 4327m 45m 38m S 0.0 0.3 0:00.37 14 postgres: user user_db [local] idle
9080 postgres 16 0 4327m 200m 193m S 0.0 1.3 0:00.62 14 postgres: user user_db [local] idle
9082 postgres 16 0 4328m 202m 193m S 1.5 1.3 0:00.84 14 postgres: user user_db [local] idle
9084 postgres 15 0 4327m 46m 38m S 0.0 0.3 0:00.54 14 postgres: user user_db [local] idle
9086 postgres 15 0 4328m 203m 194m S 0.0 1.3 0:00.38 14 postgres: user user_db [local] idle
9087 postgres 16 0 4327m 199m 192m S 1.0 1.2 0:00.63 14 postgres: user user_db [local] idle
9089 postgres 16 0 4328m 205m 196m S 0.2 1.3 0:00.87 14 postgres: user user_db [local] idle
9091 postgres 15 0 4327m 45m 38m S 0.1 0.3 0:00.41 14 postgres: user user_db [local] idle
9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle
9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle
9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle
13629 root 18 0 65288 280 140 S 0.0 0.0 0:00.00 14 rpc.rquotad
On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere <devnull@mail.ua> wrote:
> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle
> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle
> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle

That looks like pg has been pinned to CPU14. I don't think it's pg's doing. All I can think of is: check scheduler tweaks, numa, and pg's initscript. Just in case it's being pinned explicitly.
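For reference, explicit pinning can be checked directly with taskset from util-linux (a sketch; the PID is one of the backends above). An unpinned process on this box should report the full list 0-15:

    taskset -cp 9098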
Friday, January 4, 2013, 18:20 -03:00 from Claudio Freire <klaussfreire@gmail.com>:
On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere <devnull@mail.ua> wrote:
> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle
> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle
> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle
> That looks like pg has been pinned to CPU14. I don't think it's pg's
> doing. All I can think of is: check scheduler tweaks, numa, and pg's
> initscript. Just in case it's being pinned explicitly.
Not pinned.
Backends with TCP connections use other CPUs. I just added a connection pool and changed the socket to TCP. (A pooler configuration sketch follows the top output below.)
#top -d 10.00 -b -n 2 -U postgres
top - 22:29:00 up 454 days, 8 min, 1 user, load average: 0.39, 0.51, 0.46
Tasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.9%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu4 : 1.9%us, 0.4%sy, 0.0%ni, 97.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu5 : 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 2.6%us, 0.1%sy, 0.0%ni, 97.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 1.6%us, 0.3%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu8 : 1.6%us, 0.3%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.1%us, 0.5%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu12 : 1.0%us, 0.0%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 18.7%us, 0.3%sy, 0.0%ni, 80.6%id, 0.3%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu15 : 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.1%hi, 0.2%si, 0.0%st
Mem: 16426540k total, 16368832k used, 57708k free, 219524k buffers
Swap: 4194232k total, 147312k used, 4046920k free, 14468220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
10129 postgres 16 0 4329m 243m 233m S 1.9 1.5 0:04.05 14 postgres: user user_db [local] idle
10198 postgres 16 0 4329m 243m 234m S 1.9 1.5 0:03.49 14 postgres: user user_db [local] idle
10092 postgres 16 0 4330m 238m 228m S 1.7 1.5 0:03.09 14 postgres: user user_db [local] idle
10190 postgres 15 0 4328m 234m 226m S 1.7 1.5 0:02.94 14 postgres: user user_db [local] idle
10169 postgres 16 0 4329m 235m 225m S 1.3 1.5 0:03.22 14 postgres: user user_db [local] idle
10102 postgres 15 0 4328m 237m 227m S 1.2 1.5 0:03.24 14 postgres: user user_db [local] idle
10217 postgres 16 0 4329m 241m 231m S 1.2 1.5 0:04.73 14 postgres: user user_db [local] idle
10094 postgres 15 0 4330m 244m 233m S 0.9 1.5 0:03.67 14 postgres: user user_db [local] idle
10137 postgres 16 0 4331m 238m 227m S 0.8 1.5 0:03.14 14 postgres: user user_db [local] idle
10149 postgres 15 0 4328m 238m 229m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle
10161 postgres 16 0 4331m 245m 234m S 0.8 1.5 0:03.91 6 postgres: user user_db [local] idle
10178 postgres 16 0 4330m 245m 234m S 0.8 1.5 0:04.01 14 postgres: user user_db [local] idle
10182 postgres 16 0 4330m 236m 227m S 0.8 1.5 0:02.38 14 postgres: user user_db [local] idle
10189 postgres 15 0 4330m 241m 231m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle
10208 postgres 16 0 4329m 237m 227m S 0.8 1.5 0:03.74 14 postgres: user user_db [local] idle
10128 postgres 16 0 4330m 240m 229m S 0.7 1.5 0:03.15 14 postgres: user user_db [local] idle
10142 postgres 16 0 4331m 241m 230m S 0.7 1.5 0:03.23 14 postgres: user user_db [local] idle
10194 postgres 15 0 4328m 236m 227m S 0.7 1.5 0:03.24 14 postgres: user user_db [local] idle
6878 postgres 15 0 4319m 2992 1472 S 0.3 0.0 44:06.10 11 postgres: wal sender process postgres XXX.XXX.XXX.XXX(47880) streaming 21D/D76286B0
10180 postgres 16 0 4329m 240m 231m S 0.3 1.5 0:02.88 4 postgres: user user_db [local] idle
10115 postgres 16 0 4331m 236m 225m S 0.2 1.5 0:03.53 14 postgres: user user_db [local] idle
10162 postgres 16 0 4330m 240m 230m S 0.2 1.5 0:03.01 14 postgres: user user_db [local] idle
10212 postgres 16 0 4329m 238m 228m S 0.2 1.5 0:03.52 14 postgres: user user_db [local] idle
10213 postgres 15 0 4329m 238m 228m S 0.2 1.5 0:02.96 14 postgres: user user_db [local] idle
10100 postgres 16 0 4331m 237m 226m S 0.1 1.5 0:03.39 14 postgres: user user_db [local] idle
10112 postgres 16 0 4331m 240m 229m S 0.1 1.5 0:03.83 14 postgres: user user_db [local] idle
10117 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:04.42 14 postgres: user user_db [local] idle
10121 postgres 16 0 4330m 240m 230m S 0.1 1.5 0:03.08 6 postgres: user user_db [local] idle
10125 postgres 15 0 4329m 243m 233m S 0.1 1.5 0:04.90 14 postgres: user user_db [local] idle
10127 postgres 15 0 4329m 238m 228m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle
10135 postgres 15 0 4329m 238m 229m S 0.1 1.5 0:03.20 14 postgres: user user_db [local] idle
10136 postgres 16 0 4329m 237m 227m S 0.1 1.5 0:02.77 14 postgres: user user_db [local] idle
10138 postgres 16 0 4330m 243m 232m S 0.1 1.5 0:03.46 14 postgres: user user_db [local] idle
10139 postgres 15 0 4330m 236m 225m S 0.1 1.5 0:03.14 14 postgres: user user_db [local] idle
10143 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.93 14 postgres: user user_db [local] idle
10144 postgres 16 0 4331m 237m 227m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle
10148 postgres 15 0 4331m 251m 240m S 0.1 1.6 0:04.07 14 postgres: user user_db [local] idle
10165 postgres 16 0 4331m 246m 235m S 0.1 1.5 0:02.36 14 postgres: user user_db [local] idle
10166 postgres 15 0 4330m 235m 226m S 0.1 1.5 0:02.55 14 postgres: user user_db [local] idle
10168 postgres 15 0 4329m 234m 225m S 0.1 1.5 0:03.26 14 postgres: user user_db [local] idle
10173 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.82 6 postgres: user user_db [local] idle
10174 postgres 15 0 4328m 240m 232m S 0.1 1.5 0:03.98 14 postgres: user user_db [local] idle
10184 postgres 16 0 4328m 237m 228m S 0.1 1.5 0:02.85 14 postgres: user user_db [local] idle
10186 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:03.47 14 postgres: user user_db [local] idle
10191 postgres 15 0 4330m 243m 233m S 0.1 1.5 0:03.69 14 postgres: user user_db [local] idle
10195 postgres 16 0 4329m 240m 231m S 0.1 1.5 0:03.02 14 postgres: user user_db [local] idle
10199 postgres 15 0 4331m 234m 222m S 0.1 1.5 0:02.87 14 postgres: user user_db [local] idle
10203 postgres 15 0 4329m 234m 224m S 0.1 1.5 0:04.00 14 postgres: user user_db [local] idle
10207 postgres 16 0 4331m 236m 225m S 0.1 1.5 0:03.52 6 postgres: user user_db [local] idle
10210 postgres 15 0 4330m 237m 227m S 0.1 1.5 0:02.90 14 postgres: user user_db [local] idle
10211 postgres 15 0 4330m 244m 234m S 0.1 1.5 0:03.24 14 postgres: user user_db [local] idle
10225 postgres 16 0 4330m 237m 226m S 0.1 1.5 0:03.55 14 postgres: user user_db [local] idle
10226 postgres 16 0 4330m 235m 224m S 0.1 1.5 0:02.59 14 postgres: user user_db [local] idle
10227 postgres 15 0 4332m 247m 236m S 0.1 1.5 0:03.71 14 postgres: user user_db [local] idle
10229 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.38 14 postgres: user user_db [local] idle
7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 8 postgres: postgres user_db XXX.XXX.XXX.XXX(1032) idle
10097 postgres 16 0 4328m 235m 226m S 0.0 1.5 0:03.25 14 postgres: user user_db [local] idle
10114 postgres 16 0 4331m 245m 234m S 0.0 1.5 0:03.79 14 postgres: user user_db [local] idle
10118 postgres 15 0 4328m 235m 226m S 0.0 1.5 0:03.53 14 postgres: user user_db [local] idle
10152 postgres 15 0 4331m 241m 229m S 0.0 1.5 0:03.55 14 postgres: user user_db [local] idle
10170 postgres 16 0 4330m 240m 229m S 0.0 1.5 0:03.19 14 postgres: user user_db [local] idle
10185 postgres 15 0 4330m 235m 225m S 0.0 1.5 0:03.83 14 postgres: user user_db [local] idle
10187 postgres 16 0 4330m 237m 226m S 0.0 1.5 0:03.34 14 postgres: user user_db [local] idle
10202 postgres 16 0 4330m 234m 224m S 0.0 1.5 0:02.74 14 postgres: user user_db [local] idle
10220 postgres 16 0 4329m 258m 248m S 0.0 1.6 0:03.85 6 postgres: user user_db [local] idle
10223 postgres 16 0 4331m 243m 233m S 0.0 1.5 0:03.85 14 postgres: user user_db [local] idle
14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 4 postgres: postgres postgres XXX.XXX.XXX.XXX(1030) idle
14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.61 8 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 12 postgres: logger process
14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:49.23 15 postgres: writer process
14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.96 15 postgres: wal writer process
14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.30 2 postgres: autovacuum launcher process
14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.32 15 postgres: archiver process last was 000000010000021D000000D6
14999 postgres 15 0 115m 1840 808 S 0.0 0.0 30:32.88 1 postgres: stats collector process
15027 postgres 15 0 4319m 80m 78m S 0.0 0.5 32:10.90 11 postgres: monitor user_db XXX.XXX.XXX.XXX(55433) idle
15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 29:12.70 7 postgres: monitor user_db XXX.XXX.XXX.XXX(59360) idle
15808 postgres 16 0 4324m 15m 10m S 0.0 0.1 0:00.27 7 postgres: postgres user_db XXX.XXX.XXX.XXX(1031) idle
19598 postgres 16 0 4320m 7328 4932 S 0.0 0.0 0:00.00 15 postgres: postgres postgres XXX.XXX.XXX.XXX(59745) idle
19599 postgres 15 0 4321m 13m 10m S 0.0 0.1 0:00.10 4 postgres: postgres user_db XXX.XXX.XXX.XXX(59746) idle
19625 postgres 15 0 4320m 8844 6076 S 0.0 0.1 0:00.04 11 postgres: postgres user_db XXX.XXX.XXX.XXX(59768) idle
19633 postgres 15 0 4320m 7112 4880 S 0.0 0.0 0:00.00 11 postgres: postgres postgres XXX.XXX.XXX.XXX(3586) idle
19634 postgres 15 0 4327m 19m 9.9m S 0.0 0.1 0:00.15 11 postgres: postgres user_db XXX.XXX.XXX.XXX(3588) idle
19639 postgres 15 0 4321m 58m 55m S 0.0 0.4 0:00.15 4 postgres: postgres user_db XXX.XXX.XXX.XXX(3612) idle
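For reference, the thread never says which pooler was used; a minimal pgbouncer sketch for a setup like this, with the port, paths and pool sizes as assumptions:

    [databases]
    user_db = host=127.0.0.1 port=5432 dbname=user_db

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 20
    max_client_conn = 200

With transaction pooling, the 64 FastCGI workers share a small number of server connections instead of holding 64 mostly idle backends open.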
On Fri, Jan 4, 2013 at 6:38 PM, nobody nowhere <devnull@mail.ua> wrote:
>> That looks like pg has been pinned to CPU14. I don't think it's pg's
>> doing. All I can think of is: check scheduler tweaks, numa, and pg's
>> initscript. Just in case it's being pinned explicitly.
>
> Not pinned.
> Backends with TCP connections use other CPUs. I just added a connection pool and changed the socket to TCP.

How interesting. It must be a peculiarity of unix sockets. I know unix sockets have close to no buffering, task-switching to the consumer instead of buffering. Perhaps what you're experiencing here is this "optimization" effect. It's probably not harmful at all. The OS will switch to another CPU if the need arises.

Have you done any stress testing? Is there any actual performance impact?
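For reference, one way to answer the stress-testing question with the tools that ship with 9.1 is pgbench against a scratch database (the database name and scale factor below are assumptions):

    /usr/pgsql-9.1/bin/pgbench -i -s 10 bench_db          # one-time initialization
    /usr/pgsql-9.1/bin/pgbench -c 32 -j 8 -T 60 bench_db  # 32 clients, 8 threads, 60 seconds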
nobody nowhere <devnull@mail.ua> writes:
> [ all postgres processes seem to be pinned to CPU 14 ]

I wonder whether this is a "benefit" of sched_autogroup_enabled?

http://archives.postgresql.org/message-id/50E4AAB1.9040902@optionshouse.com

regards, tom lane
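For reference, on kernels that have autogrouping (2.6.38 and later, so not the 2.6.18 kernel in this thread) it can be inspected and disabled at runtime:

    cat /proc/sys/kernel/sched_autogroup_enabled      # 1 = enabled
    echo 0 > /proc/sys/kernel/sched_autogroup_enabled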
Friday, January 4, 2013, 18:53 -03:00 from Claudio Freire <klaussfreire@gmail.com>:
> [quoted exchange trimmed]
>
> How interesting. It must be a peculiarity of unix sockets. I know unix
> sockets have close to no buffering, task-switching to the consumer
> instead of buffering. Perhaps what you're experiencing here is this
> "optimization" effect. It's probably not harmful at all. The OS will
> switch to another CPU if the need arises.

It's not a socket problem. I get the same result when I change the php fast-cgi connection to TCP. The remote clients over TCP only do inserts and deletes, just data collection, nothing more. Locally, php calls a lot of PL data-processing functions. It's a PL problem!!

> Have you done any stress testing? Is there any actual performance impact?

In my experience, stress testing and real production performance are usually absolutely different. :) Application development doesn't keep pace with business growth; we just add functionality to the system step by step. Over the last couple of months we grew quickly, so I decided to check performance :(
Subject: Re[2]: [PERFORM] Re[4]: [PERFORM] Re[2]: [PERFORM] SMP on a heavy loaded database
From: nobody nowhere
> > [ all postgres processes seem to be pinned to CPU 14 ]
> >
> > I wonder whether this is a "benefit" of sched_autogroup_enabled?
> >
> > http://archives.postgresql.org/message-id/50E4AAB1.9040902@optionshouse.com
> >
> > regards, tom lane

Thanks, Tom.

RHEL 5.x :(
Fixed by synchronous_commit = off.

Saturday, January 5, 2013, 12:53 +04:00 from nobody nowhere <devnull@mail.ua>:
> [previous message quoted in full, trimmed]
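For reference, in 9.1 synchronous_commit is set in postgresql.conf and applied with a reload; it can also be set per session or per role (the role name below is an assumption). Turning it off trades a small window of recently committed transactions on a crash for much cheaper commits, with no risk of data corruption:

    # postgresql.conf
    synchronous_commit = off

    # apply without restarting
    /usr/pgsql-9.1/bin/pg_ctl -D /var/lib/pgsql/9.1/data reload

    -- or per session / per role, from psql
    SET synchronous_commit TO off;
    ALTER ROLE app_user SET synchronous_commit = off;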