Thread: Problems with pg_locks explosion
Hello, I think that during the checkpoint your system pauses all clients in order to flush all data from the controller's cache to the disks. If I were you I'd try to tune my checkpoint parameters better; if that doesn't work, show us some vmstat output please.
Vasilis Ventirozos
---------- Forwarded message ----------
From: "Armand du Plessis" <adp@bank.io>
Date: Apr 2, 2013 1:37 AM
Subject: [PERFORM] Problems with pg_locks explosion
To: "pgsql-performance" <pgsql-performance@postgresql.org>
Cc:

[Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]

Hi there,

I'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem.

What's happening is I had severe disk/io issues on our original Postgres cluster (9.0.8) and switched to a new instance with a RAID-0 volume array. The machine's CPU usage would hover around 30% and our database would run lightning fast with pg_locks hovering between 100-200.

Within a few seconds something would trigger a massive increase in pg_locks so that it suddenly shoots up to 4000-8000. At this point everything dies. Queries that usually take a few milliseconds take minutes and everything is unresponsive until I restart postgres.

The instance still idles at this point. The only clue I could find was that it usually starts a few minutes after the checkpoint entries appear in my logs.

Any suggestions would really be appreciated. It's killing our business at the moment. I can supply more info if required but pasted what I thought would be useful below. Not sure what else to change in the settings.

Kind regards,

Armand

It's on Amazon EC2 -
* cc2.8xlarge instance type
* 6 volumes in RAID-0 configuration. (1000 PIOPS)

60.5 GiB of memory
88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)
3370 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
EBS-Optimized Available: No**
API name: cc2.8xlarge

postgresql.conf

fsync = off
full_page_writes = off
default_statistics_target = 100
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 48GB
work_mem = 64MB
wal_buffers = -1
checkpoint_segments = 128
shared_buffers = 32GB
max_connections = 800
effective_io_concurrency = 3 # Down from 6

# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round

$ free
total used free shared buffers cached
Mem: 61368192 60988180 380012 0 784 44167172
-/+ buffers/cache: 16820224 44547968
Swap: 0 0 0

$ top -c
top - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10, 24.15
top - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94, 24.06
Tasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombie
Cpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si, 0.0%st
Mem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer

[ec2-user@ip-10-155-231-112 ~]$ sudo iostat
Linux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
21.00 0.00 1.10 0.26 0.00 77.63

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
xvda 0.21 5.00 2.22 5411830 2401368
xvdk 98.32 1774.67 969.86 1919359965 1048932113
xvdj 98.28 1773.68 969.14 1918288697 1048156776
xvdi 98.29 1773.69 969.61 1918300250 1048662470
xvdh 98.24 1773.92 967.54 1918544618 1046419936
xvdg 98.27 1774.15 968.85 1918790636 1047842846
xvdf 98.32 1775.56 968.69 1920316435 1047668172
md127 733.85 10645.68 5813.70 11513598393 6287682313

What bugs me on this is the throughput percentage on the volumes in Cloudwatch is 100% on all volumes.

The problems seem to overlap with checkpoints.

2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""
2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint complete: wrote 100635 buffers (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000 s",,,,,,,,,""
2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""
Thanks for the reply.

I've now updated the background writer settings to:

# - Background Writer -
bgwriter_delay = 200ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round
bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
checkpoint_segments = 128
checkpoint_timeout = 25min

It's still happening at the moment, this time without any checkpoint entries in the log :(

Below is the output from vmstat. I'm not sure what to look for in there?

Thanks again,

Armand

$ sudo vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 485800 4224 44781700 0 0 167 91 1 0 21 1 78 0 0
7 0 0 353920 4224 44836176 0 0 6320 54 21371 12921 11 2 87 0 0
32 0 0 352220 4232 44749544 0 0 1110 8 19414 9620 6 42 52 0 0
3 0 0 363044 4232 44615772 0 0 59 1943 11185 3774 0 81 18 0 0
48 0 0 360076 4240 44550744 0 0 0 34 9563 5210 0 74 26 0 0
33 0 0 413708 4240 44438248 0 0 92 962 11250 8169 0 61 39 0 0
109 0 0 418080 4240 44344596 0 0 605 3490 10098 6216 1 49 50 0 0
58 0 0 425388 4240 44286528 0 0 5 10 10794 2470 1 91 8 0 0
53 0 0 435864 4240 44243000 0 0 11 0 9755 2428 0 92 8 0 0
12 0 0 440792 4248 44213164 0 0 134 5 7883 3038 0 51 49 0 0
3 0 0 440360 4256 44158684 0 0 548 146 8450 3930 2 27 70 0 0
2 0 0 929236 4256 44248608 0 0 10466 845 22575 14196 20 5 74 0 0
4 0 0 859160 4256 44311828 0 0 7120 61 20890 12835 12 1 86 0 0
4 0 0 685308 4256 44369404 0 0 6110 24 20645 12545 13 1 85 0 0
4 0 0 695440 4256 44396304 0 0 5351 1208 19529 11781 11 1 88 0 0
4 0 0 628276 4256 44468116 0 0 9202 0 19875 12172 9 1 89 0 0
6 0 0 579716 4256 44503848 0 0 3799 22 19223 11772 10 1 88 0 0
3 1 0 502948 4256 44539784 0 0 3721 6700 20620 11939 13 1 85 0 0
4 0 0 414120 4256 44583456 0 0 3860 856 19801 12092 10 1 89 0 0
6 0 0 349240 4256 44642880 0 0 6122 48 19834 11933 11 2 87 0 0
3 0 0 400536 4256 44535872 0 0 6287 5 18945 11461 10 1 89 0 0
3 0 0 364256 4256 44592412 0 0 5487 2018 20145 12344 11 1 87 0 0
7 0 0 343732 4256 44598784 0 0 4209 24 19099 11482 10 1 88 0 0
6 0 0 339608 4236 44576768 0 0 6805 151 18821 11333 9 2 89 0 0
9 1 0 339364 4236 44556884 0 0 2597 4339 19205 11918 11 3 85 0 0
24 0 0 341596 4236 44480368 0 0 6165 5309 19353 11562 11 4 84 1 0
30 0 0 359044 4236 44416452 0 0 1364 6 12638 6138 5 28 67 0 0
4 0 0 436468 4224 44326500 0 0 3704 1264 11346 7545 4 27 68 0 0
3 1 0 459736 4224 44384788 0 0 6541 8 20159 12097 11 1 88 0 0
8 1 0 347812 4224 44462100 0 0 12292 2860 20851 12377 9 1 89 1 0
1 0 0 379752 4224 44402396 0 0 5849 147 20171 12253 11 1 88 0 0
4 0 0 453692 4216 44243480 0 0 6546 269 20689 13028 12 2 86 0 0
8 0 0 390160 4216 44259768 0 0 4243 0 20476 21238 6 16 78 0 0
6 0 0 344504 4216 44336264 0 0 7214 2 20919 12625 11 1 87 0 0
4 0 0 350128 4200 44324976 0 0 10726 2173 20417 12351 10 1 88 0 0
2 1 0 362300 4200 44282484 0 0 7148 714 22469 14468 12 2 86 0 0
3 0 0 366252 4184 44311680 0 0 7617 133 20487 12364 9 1 90 0 0
6 0 0 368904 4184 44248152 0 0 5162 6 22910 15221 14 7 80 0 0
2 0 0 383108 4184 44276780 0 0 5846 1120 21109 12563 11 1 88 0 0
7 0 0 338348 4184 44274472 0 0 9270 5 21243 12698 10 1 88 0 0
24 0 0 339676 4184 44213036 0 0 6639 18 22976 12700 13 12 74 0 0
12 0 0 371848 4184 44146500 0 0 657 133 18968 7445 5 53 43 0 0
37 0 0 374516 4184 44076212 0 0 16 2 9156 4472 1 48 52 0 0
16 0 0 398412 4184 43971060 0 0 127 0 9967 6018 0 48 52 0 0
4 0 0 417312 4184 44084392 0 0 17434 1072 23661 14268 16 6 78 1 0
4 0 0 407672 4184 44139896 0 0 5785 0 19779 11869 11 1 88 0 0
9 0 0 349544 4184 44051596 0 0 6899 8 20376 12774 10 3 88 0 0
5 0 0 424628 4184 44059628 0 0 9105 175 24546 15354 13 20 66 1 0
2 0 0 377164 4184 44070564 0 0 9363 3 21191 12608 11 2 87 0 0
5 0 0 353360 4184 44040804 0 0 6661 0 20931 12815 12 2 85 0 0
4 0 0 355144 4180 44034620 0 0 7061 8 21264 12379 11 3 86 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
21 0 0 358396 4180 43958420 0 0 7595 1749 23258 12299 10 27 63 0 0
6 1 0 437480 4160 43922152 0 0 17565 14 17059 14928 6 18 74 2 0
6 0 0 380304 4160 43993932 0 0 10120 168 21519 12798 11 2 87 0 0
8 0 0 337740 4160 44007432 0 0 6033 520 20872 12461 11 1 88 0 0
13 0 0 349712 4132 43927784 0 0 6777 6 20919 12568 11 2 86 0 0
6 1 0 351180 4112 43899756 0 0 8640 0 22543 12519 11 10 78 0 0
6 0 0 356392 4112 43921532 0 0 6206 48 20383 12050 12 1 86 0 0
6 0 0 355552 4108 43863448 0 0 6106 3 21244 11817 9 9 82 0 0
3 0 0 364992 7312 43856824 0 0 11283 199 21296 12638 13 2 85 0 0
4 1 0 371968 7120 43818552 0 0 6715 1534 22322 13305 11 7 81 0 0
debug2: channel 0: window 999365 sent adjust 492111
2 0 0 338540 7120 43822256 0 0 9142 3 21520 12194 13 5 82 0 0
8 0 0 386016 7112 43717136 0 0 2123 3 20465 11466 8 20 72 0 0
8 0 0 352388 7112 43715872 0 0 10366 51 25758 13879 16 19 65 0 0
20 0 0 351472 7112 43701060 0 0 13091 10 23766 12832 11 11 77 1 0
2 0 0 386820 7112 43587520 0 0 482 210 17187 6773 3 69 28 0 0
64 0 0 401956 7112 43473728 0 0 0 5 10796 9487 0 55 44 0 0

On Tue, Apr 2, 2013 at 12:56 AM, Vasilis Ventirozos <v.ventirozos@gmail.com> wrote:
Hello, I think that during the checkpoint your system pauses all clients in order to flush all data from the controller's cache to the disks. If I were you I'd try to tune my checkpoint parameters better; if that doesn't work, show us some vmstat output please.
Vasilis Ventirozos
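A minimal sketch (assuming the PostgreSQL 9.2 statistics views, which the new instance runs) for checking whether the checkpoints in the log are timer-driven or forced by filling checkpoint_segments, and how long their write and sync phases take:

SELECT checkpoints_timed, checkpoints_req,
       checkpoint_write_time, checkpoint_sync_time,
       buffers_checkpoint, buffers_backend
FROM pg_stat_bgwriter;

Sampling it before and after a problem window shows whether checkpoints_req keeps climbing (segment-driven checkpoints) or only checkpoints_timed moves.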
cast(date_trunc('second',query_start) AS timestamp) AS query_start,
substr(current_query,1,25) AS query
FROM pg_locks
LEFT OUTER JOIN pg_class ON (pg_locks.relation = pg_class.oid)
LEFT OUTER JOIN pg_namespace ON (pg_namespace.oid = pg_class.relnamespace), pg_stat_activity
WHERE
NOT pg_locks.pid=pg_backend_pid() AND pg_locks.pid=pg_stat_activity.procpid;
SELECT
locked.pid AS locked_pid, locker.pid AS locker_pid, locked_act.usename AS locked_user, locker_act.usename AS locker_user,
locked.virtualtransaction, locked.transactionid, locked.locktype
FROM
pg_locks locked, pg_locks locker, pg_stat_activity locked_act, pg_stat_activity locker_act
WHERE
locker.granted=true AND locked.granted=false AND locked.pid=locked_act.procpid AND
locker.pid=locker_act.procpid AND (locked.virtualtransaction=locker.virtualtransaction OR locked.transactionid=locker.transactionid);
SELECT
locked.pid AS locked_pid, locker.pid AS locker_pid, locked_act.usename AS locked_user, locker_act.usename AS locker_user,
locked.virtualtransaction, locked.transactionid, relname
FROM
pg_locks locked
LEFT OUTER JOIN pg_class ON (locked.relation = pg_class.oid), pg_locks locker,pg_stat_activity locked_act, pg_stat_activity locker_act
WHERE
locker.granted=true AND locked.granted=false AND locked.pid=locked_act.procpid AND locker.pid=locker_act.procpid AND locked.relation=locker.relation;
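These queries use pg_stat_activity.procpid and current_query, which exist on 9.0 but were renamed to pid and query in PostgreSQL 9.2, so they need minor adjustment on the new 9.2.3 instance. A minimal self-contained variant for 9.2 (a sketch only, not the exact query used in the thread) might look like:

SELECT blocked.pid AS blocked_pid,
       blocked_act.query AS blocked_query,
       blocker.pid AS blocker_pid,
       blocker_act.query AS blocker_query
FROM pg_locks blocked
JOIN pg_locks blocker
  ON blocked.pid <> blocker.pid
 AND (blocked.transactionid = blocker.transactionid
      OR (blocked.database = blocker.database AND blocked.relation = blocker.relation))
JOIN pg_stat_activity blocked_act ON blocked_act.pid = blocked.pid
JOIN pg_stat_activity blocker_act ON blocker_act.pid = blocker.pid
WHERE NOT blocked.granted AND blocker.granted;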
In addition to tuning the various Postgres config knobs you may need to look at how your AWS server is set up. If your load is causing an IO stall then *symptoms* of this will be lots of locks...
You have quite a lot of memory (60G), so look at tuning the vm.dirty_background_ratio, vm.dirty_ratio sysctls to avoid trying to *suddenly* write out many gigs of dirty buffers.
Your provisioned volumes are much better than the default AWS ones, but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth of Postgres 8k buffers). So you may need to look at adding more volumes into the array, or adding some separate ones and putting pg_xlog directory on 'em.
However before making changes I would recommend using iostat or sar to monitor how volumes are handling the load (I usually choose a 1 sec granularity and look for 100% util and high - several hundred ms - awaits). Also iotop could be enlightening.
Regards
Mark
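A sketch of the sampling Mark describes (sysstat's iostat and sar at 1-second granularity, run while the lock list is exploding; the device names are the ones from the iostat output earlier in the thread):

$ iostat -x 1
$ sar -d -p 1 60

Watch %util and await for md127 and the individual xvd* members, and compare a quiet period against a locked-up one.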
On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <adp@bank.io> wrote:
> [Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]
> Hi there,
> I'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem.
> What's happening is I had severe disk/io issues on our original Postgres cluster (9.0.8) and switched to a new instance with a RAID-0 volume array.

What was the old instance IO? Did you do IO benchmarking on both?

> The machine's CPU usage would hover around 30% and our database would run lightning fast with pg_locks hovering between 100-200. Within a few seconds something would trigger a massive increase in pg_locks so that it suddenly shoots up to 4000-8000. At this point everything dies. Queries that usually take a few milliseconds take minutes and everything is unresponsive until I restart postgres.

I think that pg_locks is pretty much a red herring. All it means is that you have a lot more active connections than you used to. All active connections are going to hold various locks, while most idle connections (other than 'idle in transaction') will not hold any.

Although I doubt it will solve this particular problem, you should probably use a connection pooler.

> shared_buffers = 32GB

That seems very high. There are reports that using >8 GB leads to precisely the type of problem you are seeing (checkpoint associated freezes). Although I've never seen those reports when fsync=off.

I thought you might be suffering from the problem solved in release 9.1 by item "Merge duplicate fsync requests (Robert Haas, Greg Smith)", but then I realized that with fsync=off it could not be that.

> max_connections = 800

That also is very high.

> The problems seem to overlap with checkpoints.
> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""
> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint complete: wrote 100635 buffers (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000 s",,,,,,,,,""
> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35 UTC,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""

I think you changed checkpoint_timeout from the default (5 min) to 10 minutes, without telling us. Anyway, this is where it would be nice to know how much of the 539.439 s in the write phase was spent blocking on writes, and how much was spent napping. But that info is not collected by pgsql.
If you could upgrade to 9.2 and capture some data with track_io_timing, that could be useful.

Your top output looked like it was from a time at which there were no problems, and it didn't include the top processes, so it wasn't very informative.

Cheers,

Jeff
Hi Jeff,

Sorry I should've mentioned the new instance is Postgres 9.2.3. The old instance IO maxed out the disk/io available on a single EBS volume on AWS. It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that front, and looking at the iostat (just included in previous email) it seems to be doing OK.

I actually had pg_pool running as a test but to avoid having too many moving parts in the change removed it from the equation. Need to look into the proper configuration so it doesn't saturate my cluster worse than I'm doing myself.

I've commented inline.

Regards,

Armand

PS. This is probably the most helpful mailing list I've ever come across. Starting to feel a little more that it can be solved.

On Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
> I think you changed checkpoint_timeout from the default (5 min) to 10 minutes, without telling us. Anyway, this is where it would be nice to know how much of the 539.439 s in the write phase was spent blocking on writes, and how much was spent napping. But that info is not collected by pgsql.

I did actually change it to 25 minutes. Apologies, it was probably lost in the text of a previous email. Here's the changed settings:

# - Background Writer -
bgwriter_delay = 200ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round
bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
checkpoint_segments = 128
checkpoint_timeout = 25min

It seems to be lasting longer with these settings.

> If you could upgrade to 9.2 and capture some data with track_io_timing, that could be useful.

I'm looking into track_io_timing.
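One minimal way to follow Jeff's pooler suggestion would be pgbouncer in transaction-pooling mode rather than pgpool; a sketch only, where the database name, file paths and pool sizes are placeholders rather than values from the thread:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 800
default_pool_size = 32

Transaction pooling keeps the number of simultaneously active server connections far below max_connections, at the cost of session-level features such as session prepared statements.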
I've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1). The output is below; I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong.

(1)

"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)"
" Output:"
" Buffers: shared hit=7163 read=137"
" I/O Timings: read=107.771"
" -> Sort (cost=2364.19..2365.56 rows=549 width=177) (actual time=6501.095..6503.216 rows=500 loops=1)"
" Output:"
" Sort Key: messages.created_at"
" Sort Method: quicksort Memory: 294kB"
" Buffers: shared hit=7163 read=137"
" I/O Timings: read=107.771"
" -> Nested Loop (cost=181.19..2339.21 rows=549 width=177) (actual time=6344.410..6495.377 rows=783 loops=1)"
" Output:"
" Buffers: shared hit=7160 read=137"
" I/O Timings: read=107.771"
" -> Nested Loop (cost=181.19..1568.99 rows=549 width=177) (actual time=6344.389..6470.549 rows=783 loops=1)"
" Output:"
" Buffers: shared hit=3931 read=137"
" I/O Timings: read=107.771"
" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=6344.342..6436.117 rows=783 loops=1)"
" Output:"
" Recheck Cond:"
" Buffers: shared hit=707 read=137"
" I/O Timings: read=107.771"
" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=6344.226..6344.226 rows=0 loops=1)"
" Buffers: shared hit=120 read=20"
" I/O Timings: read=37.085"
" -> Bitmap Index Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=6343.358..6343.358 rows=366 loops=1)"
" Index Cond:"
" Buffers: shared hit=26 read=15"
" I/O Timings: read=36.977"
" -> Bitmap Index Scan on messages_type_sender_recipient_created_at"
" Buffers: shared hit=94 read=5"
" I/O Timings: read=0.108"
" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.39 rows=1 width=4) (actual time=0.018..0.024 rows=1 loops=783)"
" Output: profiles.id"
" Index Cond: (profiles.id = messages.sender)"
" Heap Fetches: 661"
" Buffers: shared hit=3224"
" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.39 rows=1 width=4) (actual time=0.014..0.018 rows=1 loops=783)"
" Output: recipient_profiles_messages.id"
" Index Cond: (recipient_profiles_messages.id = messages.recipient)"
" Heap Fetches: 667"
" Buffers: shared hit=3229"
"Total runtime: 6509.328 ms"

(2)

"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)"
" Output: various columns"
" Buffers: shared hit=6738 read=562"
" I/O Timings: read=19.212"
" -> Sort (cost=2366.57..2367.94 rows=549 width=177) (actual time=73.276..74.300 rows=500 loops=1)"
" Output: various columns"
" Sort Key: messages.created_at"
" Sort Method: quicksort Memory: 294kB"
" Buffers: shared hit=6738 read=562"
" I/O Timings: read=19.212"
" -> Nested Loop (cost=181.19..2341.59 rows=549 width=177) (actual time=3.556..69.866 rows=783 loops=1)"
" Output: various columns"
" Buffers: shared hit=6735 read=562"
" I/O Timings: read=19.212"
" -> Nested Loop (cost=181.19..1570.19 rows=549 width=177) (actual time=3.497..53.820 rows=783 loops=1)"
" Output: various columns"
" Buffers: shared hit=3506 read=562"
" I/O Timings: read=19.212"
" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=3.408..32.906 rows=783 loops=1)"
" Output: various columns"
" Recheck Cond: ()"
" Buffers: shared hit=282 read=562"
" I/O Timings: read=19.212"
" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=3.279..3.279 rows=0 loops=1)"
" Buffers: shared hit=114 read=26"
" I/O Timings: read=1.755"
" -> Bitmap Index Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=1.882..1.882 rows=366 loops=1)"
" Index Cond:"
" Buffers: shared hit=25 read=16"
" I/O Timings: read=1.085"
" -> Bitmap Index Scan on"
" Buffers: shared hit=89 read=10"
" I/O Timings: read=0.670"
" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.40 rows=1 width=4) (actual time=0.012..0.015 rows=1 loops=783)"
" Output: profiles.id"
" Index Cond: (profiles.id = messages.sender)"
" Heap Fetches: 654"
" Buffers: shared hit=3224"
" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.40 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=783)"
" Output: recipient_profiles_messages.id"
" Index Cond: (recipient_profiles_messages.id = messages.recipient)"
" Heap Fetches: 647"
" Buffers: shared hit=3229"
"Total runtime: 77.528 ms"
Armand,

All of the symptoms you describe line up perfectly with a problem I had recently when upgrading DB hardware. Everything ran fine until we hit some threshold somewhere, at which point the locks would pile up in the thousands just as you describe, all while we were not I/O bound.

I was moving from a DELL 810 that used a flex memory bridge to a DELL 820 that used round robin on their quad core intels. (Interestingly we also found out that DELL is planning on rolling back to the flex memory bridge later this year.)

Any chance you could find out if your old processors might have been using flex while your new processors might be using round robin?

-s
Yeah, as I understand it you should have 6000 IOPS available for the md device (ideally). The iostats you display certainly look benign... but the key time to be sampling would be when you see the lock list explode - could look very different then.

Re vm.dirty* - I would crank the values down by a factor of 5:

vm.dirty_background_ratio = 1 (down from 5)
vm.dirty_ratio = 2 (down from 10)

Assuming of course that you actually are seeing an IO stall (which should be catchable via iostat or iotop)... and not some other issue. Otherwise leave 'em alone and keep looking :-)

Cheers

Mark

On 02/04/13 13:31, Armand du Plessis wrote:
> I had a look at the iostat output (on a 5s interval) and pasted it
> below. The utilization and waits seem low. Included a sample below, #1
> taken during normal operation and then when the locks happen it
> basically drops to 0 across the board. My (mis)understanding of the IOPS
> was that it would be 1000 IOPS per/volume and when in RAID0 should give
> me quite a bit higher throughput than in a single EBS volume setup. (My
> naive envelope calculation was #volumes * PIOPS = Effective IOPS :/)
>
> I'm looking into the vm.dirty_background_ratio, vm.dirty_ratio sysctls. Is
> there any guidance or links available that would be useful as a starting
> point?
>
> Thanks again for the help, I really appreciate it.
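A sketch of applying (and persisting) the values Mark suggests, assuming root access:

$ sudo sysctl -w vm.dirty_background_ratio=1
$ sudo sysctl -w vm.dirty_ratio=2

# /etc/sysctl.conf, to survive reboots
vm.dirty_background_ratio = 1
vm.dirty_ratio = 2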
On Monday, April 1, 2013, Armand du Plessis wrote:
> I've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1). The output below, I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong.
> "Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)"
> " Output:"
> " Buffers: shared hit=7163 read=137"
> " I/O Timings: read=107.771"
> ...
> "Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)"
> " Output: various columns"
> " Buffers: shared hit=6738 read=562"
> " I/O Timings: read=19.212"

You are correct that the difference in IO timing for reads is not nearly enough to explain the difference, but the ratio is still large enough to perhaps be suggestive. It could be that all the extra time is spent in IO writes (not reported here). If you turn on track_io_timing system-wide you could check the write times in pg_stat_database.

(Write time has an attribution problem. I need to make room for my data, so I write out someone else's. Is the time spent attributed to the one doing the writing, or the one who owns the data written?)

But it is perhaps looking like it might not be IO at all, but rather some kind of internal kernel problem, such as the "zone reclaim" and "huge pages" and memory interleaving, which have been discussed elsewhere in this list for high CPU high RAM machines. I would summarize it for you, but I don't understand it, and don't have ready access to machines with 64 CPUs and 128 GB of RAM in order to explore it for myself.

But if that is the case, then using a connection pooler to restrict the number of simultaneously active connections might actually be a big win (despite what I said previously).

Cheers,

Jeff
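For reference, track_io_timing is a 9.2 setting that only needs a configuration reload, and the per-database write times Jeff refers to show up in pg_stat_database; a minimal sketch:

# postgresql.conf (then reload)
track_io_timing = on

-- after a problem window:
SELECT datname, blk_read_time, blk_write_time FROM pg_stat_database;

blk_read_time and blk_write_time are cumulative milliseconds, so take a sample before and after an incident and diff them.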
Hi Jeff,

On 02/04/13 19:08, Jeff Janes wrote:
On Monday, April 1, 2013, Mark Kirkwood wrote:
Your provisioned volumes are much better than the default AWS ones,
but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth
of Postgres 8k buffers). So you may need to look at adding more
volumes into the array, or adding some separate ones and putting
pg_xlog directory on 'em.
However before making changes I would recommend using iostat or sar
to monitor how volumes are handling the load (I usually choose a 1
sec granularity and look for 100% util and high - several hundred ms
- awaits). Also iotop could be enlightening.
Hi Mark,
Do you have experience using these tools with AWS? When using non-DAS
in other contexts, I've noticed that these tools often give deranged
results, because the kernel doesn't correctly know what time to
attribute to "network" and what to attribute to "disk". But I haven't
looked into it on AWS EBS, maybe they do a better job there.
Thanks for any insight,
That is a very good point. I did notice a reasonable amount of network traffic on the graphs posted previously, along with a suspiciously low amount of IO for md127 (which I assume is the raid0 array)...and wondered if iostat was not seeing IO fully, however it slipped my mind (I am on leave with kittens - so claim that for the purrrfect excuse)!
However I don't recall there being a problem with the io tools for standard EBS volumes - but I haven't benchmarked AWS for over a year, so things could be different now - and I have no experience with these new provisioned volumes.
Armand - it might be instructive to do some benchmarking (with another host and volume set) where you do something like:
$ dd if=/dev/zero of=file bs=8k count=1000000
and see if iostat and friends actually show you doing IO as expected!
Also it is worth checking what your sysctl vm.zone_reclaim_mode is set to - if 1 then override to 0. As Jeff mentioned, this gotcha for larger cpu number machines has been discussed at length on this list - but still traps us now and again!
Cheers
Mark
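A quick sketch of the check and override Mark describes, assuming root access:

$ cat /proc/sys/vm/zone_reclaim_mode
$ sudo sysctl -w vm.zone_reclaim_mode=0

# /etc/sysctl.conf, to persist
vm.zone_reclaim_mode = 0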
On 02/04/13 19:33, Armand du Plessis wrote:
I had my reservations about my almost 0% IO usage on the raid0 array as
well. I'm looking at the numbers in atop and it doesn't seem to reflect
the aggregate of the volumes as one would expect. I'm just happy I am
seeing numbers on the volumes, they're not too bad.
One thing I was wondering, as a last possible IO resort. Provisioned EBS
volumes require that you maintain a wait queue of 1 for every 200
provisioned IOPS to get reliable IO. My wait queue hovers between 0-1
and with the 1000 IOPS it should be 5. Even thought about artificially
pushing more IO to the volumes but I think Jeff's right, there's some
internal kernel voodoo at play here. I have a feeling it'll be under
control with pg_pool (if I can just get the friggen setup there right)
and then I'll have more time to dig into it deeper.
Apologies to the kittens for interrupting your leave :)
Touch wood, but I think I found the problem thanks to these pointers. I checked the vm.zone_reclaim_mode and mine was set to 0. However just before the locking starts I can see many of my CPUs flashing red and jumping to a high percentage of sys usage. When I look at top it's the migration kernel tasks that seem to trigger it.

So it seems it was a bit trigger-happy with task migrations; setting kernel.sched_migration_cost to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks climb and it's been running stable for a bit. This post was invaluable in explaining the cause -> http://www.postgresql.org/message-id/50E4AAB1.9040902@optionshouse.com

# Postgres Kernel Tweaks
kernel.sched_migration_cost = 5000000
# kernel.sched_autogroup_enabled = 0

The second recommended setting 'sched_autogroup_enabled' is not available on the kernel I'm running but it doesn't seem to be a problem.

Thanks again for the help. It was seriously appreciated. Long night was long.

If things change and the problem pops up again I'll update you guys.

Cheers,

Armand

On Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <mark.kirkwood@catalyst.net.nz> wrote:
Also it is worth checking what your sysctl vm.zone_reclaim_mode is set to - if 1 then override to 0. As Jeff mentioned, this gotcha for larger cpu number machines has been discussed at length on this list - but still traps us now and again!
Cheers
Mark
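For reference, a sketch of applying the same scheduler setting at runtime and persisting it (on much newer kernels the parameter is named kernel.sched_migration_cost_ns instead):

$ sudo sysctl -w kernel.sched_migration_cost=5000000

# /etc/sysctl.conf
kernel.sched_migration_cost = 5000000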
Jumped the gun a bit, the problem still exists like before. But it's definitely on the right track; below is the output from top in the seconds before the cluster locks up. For some reason it's still insisting on moving tasks around despite bumping sched_migration_cost up to 100ms.

77 root RT 0 0 0 0 S 32.3 0.0 13:55.20 [migration/24]
26512 postgres 20 0 8601m 7388 4992 R 32.3 0.0 0:02.17 postgres: other_user xxxx(52944) INSERT
38 root RT 0 0 0 0 S 31.3 0.0 17:26.15 [migration/11]
65 root RT 0 0 0 0 S 30.0 0.0 13:18.66 [migration/20]
62 root RT 0 0 0 0 S 29.7 0.0 12:58.81 [migration/19]
47 root RT 0 0 0 0 S 29.0 0.0 18:16.43 [migration/14]
29 root RT 0 0 0 0 S 28.7 0.0 25:21.47 [migration/8]
71 root RT 0 0 0 0 S 28.4 0.0 13:20.31 [migration/22]
95 root RT 0 0 0 0 S 23.8 0.0 13:37.31 [migration/30]
26518 postgres 20 0 8601m 9684 5228 S 21.2 0.0 0:01.89 postgres: other_user xxxxx(52954) INSERT
6 root RT 0 0 0 0 S 20.5 0.0 39:17.72 [migration/0]
41 root RT 0 0 0 0 S 19.6 0.0 18:21.36 [migration/12]
68 root RT 0 0 0 0 S 19.6 0.0 13:04.62 [migration/21]
74 root RT 0 0 0 0 S 18.9 0.0 13:39.41 [migration/23]
305 root 20 0 0 0 0 S 18.3 0.0 11:34.52 [kworker/27:1]
44 root RT 0 0 0 0 S 17.0 0.0 18:30.71 [migration/13]
89 root RT 0 0 0 0 S 16.0 0.0 12:13.42 [migration/28]
7 root RT 0 0 0 0 S 15.3 0.0 21:58.56 [migration/1]
35 root RT 0 0 0 0 S 15.3 0.0 20:02.05 [migration/10]
53 root RT 0 0 0 0 S 14.0 0.0 12:51.46 [migration/16]
11254 root 0 -20 21848 7532 2788 S 11.7 0.0 22:35.66 atop
114 root RT 0 0 0 0 S 10.8 0.0 19:36.56 [migration/3]
26463 postgres 20 0 8601m 7492 5100 R 10.8 0.0 0:00.33 postgres: other_user xxxxx(32835) INSERT
32 root RT 0 0 0 0 S 10.1 0.0 20:46.18 [migration/9]
16793 root 20 0 0 0 0 S 6.5 0.0 1:12.72 [kworker/25:0]
20 root RT 0 0 0 0 S 5.5 0.0 18:51.81 [migration/5]
48 root 20 0 0 0 0 S 5.5 0.0 3:52.93 [kworker/14:0]