Re: Huge number of INSERTs - Mailing list pgsql-general
From: Phoenix Kiula
Subject: Re: Huge number of INSERTs
Msg-id: CAFWfU=unJDJ-c8h7xh2DT744W=e8hkbfMwy9PhDhx2DWvJSpeg@mail.gmail.com
In response to: Re: Huge number of INSERTs (Tomas Vondra <tv@fuzzy.cz>)
List: pgsql-general
On Sun, Nov 20, 2011 at 2:11 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 18.11.2011 13:30, Phoenix Kiula wrote:
>> Full DB: 32GB
>> The big table referenced above: 28 GB
>>
>> It's inserts into this one that are taking time.
>
> Hm, in that case the shared_buffers is probably too low. It'd be nice to
> have at least the indexes on the table in the buffers, and I guess
> they're significantly over 256MB (your shared_buffers).
>
> But regarding the "vmstat 5 10" output you've posted, you probably don't
> have an issue with I/O, as the iowait is 0 most of the time.
>
> You do have a serious problem with CPU, though - most of the time the
> CPU is almost 100% utilized. Not sure which process is responsible for
> this, but this might be the main problem.
>
> I'm not saying adding a row to the table (and indexes) is extremely
> expensive, but you do have an insane number of processes (350
> connections, a lot of apache workers) and a lot of them are asking for
> CPU time.
>
> So once again: set the number of connections and workers to sane values,
> considering your current hardware. Those numbers are actually a handy
> throttle - you may increase the numbers until the CPU is reasonably
> utilized (don't use 100%, leave a reasonable margin - I wouldn't go
> higher than 90%).

Thanks Tomas. And others. Some observations and questions from my
ongoing saga.

I have disabled all adding of data (INSERT + UPDATE) and allowed only
SELECTs so far. The site is under maintenance.

For a moment, I opened the valve and allowed the INSERT functionality.
The log was immediately flooded with this:

LOG: duration: 6851.054 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6848.266 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6846.672 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6853.451 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6991.966 ms statement: select nextval('maintable_id_seq')
LOG: duration: 8244.315 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6991.071 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6990.043 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6988.483 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6986.793 ms statement: select nextval('maintable_id_seq')
LOG: duration: 6985.802 ms statement: select nextval('maintable_id_seq')
...

I hope it's just the sheer load that made even a simple query like this
take so long? The other queries that are taking too long are indexed
queries as well!
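For what it's worth, nextval() by itself is normally sub-millisecond, so
durations like these more likely mean the backends were waiting for CPU
than fighting over the sequence. If the sequence did turn out to be a
hot spot, one possible knob (a sketch only; CACHE 100 is an arbitrary
illustrative value for the maintable_id_seq from the log above) would be
to let each backend pre-allocate a block of values:

    -- Pre-allocate 100 values per backend, reducing per-call lock
    -- traffic on the sequence. Cached values are discarded when a
    -- backend exits, so this leaves gaps in the ID space.
    ALTER SEQUENCE maintable_id_seq CACHE 100;

    -- On 8.x/9.0-era servers the current setting can be read straight
    -- out of the sequence relation:
    SELECT cache_value FROM maintable_id_seq;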
Anyway, right now, with that valve closed and only SELECTs allowed,
here are the stats:

> vmstat 5 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff    cache   si   so    bi    bo    in    cs us sy id wa st
 7  1   1352  47596  26412  6189960    3    3  5228   243    17    10 51 19 26  4  0
16  1   1352  45520  26440  6191656    0    0  1230  3819  1368 65722 68 31  1  0  0
 9  0   1352  61048  26464  6174688    0    0  1000  4290  1370 65545 67 32  1  0  0
27  1   1352  51908  26508  6183852    0    0  1332  3916  1381 65684 68 32  1  0  0
29  0   1352  48380  26536  6185764    0    0   977  3983  1368 65684 67 32  1  0  0
24  1   1352  46436  26576  6189080    0    0   220  4135  1373 65743 66 33  1  0  0
25  1   1352  46204  26616  6191452    0    0     0  3963  1348 66867 67 32  1  0  0
13  1   1352  57444  26692  6193220    0    0    24  4038  1436 66891 66 32  2  0  0
22  1   1352  51300  26832  6196736    0    0   439  5088  1418 66995 66 31  2  0  0
26  1   1352  51940  26872  6198384    0    0     0  3354  1385 67122 67 31  2  0  0

> iostat -d -x 5 3
Linux 2.6.18-238.9.1.el5 (host.MYDB.com)        11/20/2011

Device: rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda      86.34  151.41 392.90  92.36 41796.00 1949.66    90.15     1.59   3.27   0.40  19.65
sda1      0.00    0.00   0.00   0.00     0.01    0.00    22.38     0.00   3.04   3.01   0.00
sda2      0.27    8.20   0.06   0.22     3.35   67.05   255.22     0.01  34.36   3.02   0.08
sda3      1.02   13.83   3.29   3.65   165.35  139.75    43.96     0.16  22.52   7.32   5.08
sda4      0.00    0.00   0.00   0.00     0.00    0.00     2.00     0.00   0.00   0.00   0.00
sda5      0.57    3.63   0.64   0.72    26.52   34.72    45.16     0.02  11.26   4.67   0.63
sda6      0.21    0.57   0.41   0.27    13.79    6.76    30.24     0.02  24.31  16.51   1.12
sda7      0.24    5.36   0.11   0.44     1.92   46.32    86.94     0.02  44.21   7.99   0.44
sda8      2.24    2.25   1.22   0.98    27.62   25.83    24.33     0.06  27.61  18.20   4.00
sda9     81.79  117.57 387.18  86.08 41557.45 1629.24    91.25     1.30   2.75   0.39  18.30

Device: rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda       0.00  775.20   0.00 143.40     0.00 7348.80    51.25     0.04   0.30   0.16   2.28
sda1      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda2      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda3      0.00    8.20   0.00   1.60     0.00   78.40    49.00     0.00   0.50   0.50   0.08
sda4      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda5      0.00    0.20   0.00   1.20     0.00   11.20     9.33     0.00   0.00   0.00   0.00
sda6      0.00    0.80   0.00   0.60     0.00   11.20    18.67     0.00   0.00   0.00   0.00
sda7      0.00    0.20   0.00   2.40     0.00   20.80     8.67     0.00   0.50   0.25   0.06
sda8      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda9      0.00  765.80   0.00 137.60     0.00 7227.20    52.52     0.04   0.30   0.16   2.20

Device: rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda       0.60  913.00   4.40 149.40   160.00 8499.20    56.30     0.07   0.46   0.25   3.88
sda1      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda2      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda3      0.60   10.00   3.20   1.60   145.60   92.80    49.67     0.02   3.29   2.58   1.24
sda4      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda5      0.00    1.00   0.80   1.40    11.20   19.20    13.82     0.01   3.09   3.00   0.66
sda6      0.00    0.80   0.00   0.60     0.00   11.20    18.67     0.00   0.00   0.00   0.00
sda7      0.00    0.20   0.00   2.40     0.00   20.80     8.67     0.00   0.67   0.33   0.08
sda8      0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda9      0.00  901.00   0.40 143.40     3.20 8355.20    58.13     0.05   0.32   0.15   2.14
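The vmstat numbers tell the same story Tomas pointed at: the run queue
(r) is often above 20, context switches (cs) sit around 65,000/s, and
user+system CPU is pinned near 99% while iowait stays at 0. To see how
many of the ~350 backends are actually working at any instant, something
like this should work (a sketch; on a pre-9.2 server like this one, idle
backends report current_query = '<IDLE>' in pg_stat_activity):

    -- Count total backends vs. genuinely idle ones; everything that
    -- is not idle is competing for the same saturated CPUs.
    SELECT count(*) AS total,
           sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle
      FROM pg_stat_activity;

If most of them turn out to be busy, lowering max_connections (and the
apache worker count) is exactly the throttle Tomas describes.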