Thread: Autovacuum issues with truncate and create index ...
Hi everybody,

We are having issues with the autovacuum process. Our database is composed of two kinds of tables:
- the first kind are partitions,
- the second kind are classic tables.

Every five minutes we execute the following process:
1. we drop the constraint of the target partition
2. we drop the indexes (2) of the target partition
3. we truncate all data of the target partition
4. we copy new data (around 150,000 rows) into the target partition
5. we create the indexes (2) on the target partition
6. we create the constraint of the target partition
7. we aggregate data from the main table of the previous partition and insert it into the second kind of tables (max 200 rows)
8. finally we delete old data from the second kind of tables

Our problem is that we cannot perform the last action (deletion of old data) because it takes too much time (2-3 minutes for about 200 rows). Some tables have about 5,000,000,000 rows to delete, so we cannot skip this step.

So I looked in pg_stat_all_tables, and I saw that each partition of the first kind of tables is analyzed once a day, but the second kind of tables are analyzed only once every 5 or 6 days.

I have tried to decrease the autovacuum thresholds for these tables with the following command:

ALTER TABLE tableName SET (autovacuum_vacuum_threshold = 20, autovacuum_vacuum_scale_factor = 0.01, autovacuum_analyze_threshold = 10, autovacuum_analyze_scale_factor = 0.005, autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 1000);

Then the autovacuum process starts to work on the second kind of tables, but our process blocks at step 3 (truncate) or step 5 (create index). As soon as I reset the autovacuum thresholds for the second kind of tables, our process runs perfectly again.

Could our process create a deadlock with the autovacuum process? Are we doing something wrong in our autovacuum configuration?

Our PostgreSQL runs on a server with 16 CPUs, 16 GB RAM and 4 GB of swap.

name | current_setting
-----------------------------+-----------------
version | PostgreSQL 8.4.11 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
autovacuum_naptime | 30s
checkpoint_segments | 64
constraint_exclusion | partition
custom_variable_classes | alaloop
effective_cache_size | 14GB
external_pid_file | /var/run/postgresql/8.4-main.pid
lc_collate | en_US.UTF-8
lc_ctype | en_US.UTF-8
listen_addresses | *
log_autovacuum_min_duration | 0
log_line_prefix | %t
maintenance_work_mem | 1GB
max_connections | 100
max_locks_per_transaction | 256
max_prepared_transactions | 100
max_stack_depth | 5MB
port | 5432
server_encoding | UTF8
shared_buffers | 756MB
ssl | on
stats_temp_directory | /dev/shm
temp_buffers | 1536
TimeZone | localtime
unix_socket_directory | /var/run/postgresql
work_mem | 756MB

---
Baptiste LHOSTE
blhoste@alaloop.com

ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart
Téléphone : +33 (0) 5 59 41 51 10
www.alaloop.com
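Condensed into SQL, one iteration of the five-minute cycle above would look roughly like the sketch below. Every object, column, and file name here is an illustrative stand-in (the real partitions are named like flowpool_N_segment_M; the real columns and paths are not shown in the thread), so only the shape of the steps is taken from the description:

-- Steps 1-6: rebuild the target partition (all names hypothetical).
ALTER TABLE flowpool_1_segment_100 DROP CONSTRAINT flowpool_1_segment_100_ts_check;
DROP INDEX flowpool_1_segment_100_timestamp;
DROP INDEX flowpool_1_segment_100_flow;
TRUNCATE flowpool_1_segment_100;                          -- step 3: one of the steps that hangs
COPY flowpool_1_segment_100 FROM '/tmp/segment_100.copy'; -- step 4: ~150,000 new rows
CREATE INDEX flowpool_1_segment_100_timestamp
    ON flowpool_1_segment_100 USING btree ("timestamp");  -- step 5: the other step that hangs
CREATE INDEX flowpool_1_segment_100_flow
    ON flowpool_1_segment_100 USING btree (flow_id);
ALTER TABLE flowpool_1_segment_100 ADD CONSTRAINT flowpool_1_segment_100_ts_check
    CHECK ("timestamp" >= '2012-12-28 09:40:00' AND "timestamp" < '2012-12-28 09:45:00');

-- Steps 7-8: aggregate into a "second kind" table, then the purge that is too slow.
INSERT INTO agg_t1_incoming (minute, bytes)
    SELECT date_trunc('minute', "timestamp"), sum(bytes)
    FROM flowpool_1 GROUP BY 1;                           -- at most ~200 rows per cycle
DELETE FROM agg_t1_incoming WHERE minute < now() - interval '90 days';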
Baptiste LHOSTE wrote:

> - finally we delete old data from the second kind of tables
>
> Then the autovacuum process starts to work on the second kind of
> tables, but our process blocks at step 3 (truncate) or step 5
> (create index).
>
> As soon as I reset the autovacuum thresholds for the second kind
> of tables, our process runs perfectly again.
>
> Could our process create a deadlock with the autovacuum
> process?

Would it be possible for you to create such a situation and capture the contents of pg_stat_activity and pg_locks while it is going on? What messages related to autovacuum or deadlocks do you see in the server log while this is going on?

> PostgreSQL 8.4.11

Would it be possible to update your 8.4 installation to the latest bug fix (currently 8.4.15) to rule out the influence of any bugs which have already been fixed?

-Kevin
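For anyone trying to capture the same evidence: a query along the following lines, using only the standard 8.4 catalogs (note that in 8.4 the pg_stat_activity process-id column is still called procpid), lists any backend waiting on an ungranted lock together with the query it is running:

SELECT w.pid AS waiting_pid,
       w.locktype,
       w.relation::regclass AS relation,
       w.mode,
       a.current_query
FROM pg_locks w
JOIN pg_stat_activity a ON a.procpid = w.pid
WHERE NOT w.granted;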
> Would it be possible for you to create such a situation and capture
> the contents of pg_stat_activity and pg_locks while it is going on?
> What messages related to autovacuum or deadlocks do you see in the
> server log while this is going on?

Before the change we could only see automatic analyze logs for the first kind of tables. After the change I can see automatic analyze logs for both kinds of tables.

Here's the pg_stat_activity during the issue :

datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
42539 | flows | 13792 | 16384 | asa | TRUNCATE flowpool_24_segment_221 | f | 2012-12-20 16:55:01.346986+01 | 2012-12-20 16:55:01.347051+01 | 2012-12-20 16:50:22.073586+01 | 10.48.168.160 | 33550
42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-20 15:51:32.787359+01 | 2012-12-20 13:00:09.968633+01 | 10.48.168.163 | 58799
42539 | flows | 13804 | 10 | postgres | autovacuum: ANALYZE public.agg_t1406_outgoing_a3_src_net_and_dst_net_f5 | f | 2012-12-20 16:56:00.919196+01 | 2012-12-20 16:56:00.919196+01 | 2012-12-20 16:51:27.079612+01 | |
42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-20 16:09:53.123326+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800
42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01 | 10.48.168.163 | 58801
42539 | flows | 13629 | 16384 | asa | TRUNCATE flowpool_15_segment_216 | f | 2012-12-20 16:55:01.783653+01 | 2012-12-20 16:55:01.783725+01 | 2012-12-20 16:30:22.331191+01 | 10.48.168.160 | 33495
42539 | flows | 13793 | 16384 | asa | TRUNCATE flowpool_19_segment_215 | f | 2012-12-20 16:55:01.296588+01 | 2012-12-20 16:55:01.297449+01 | 2012-12-20 16:50:22.095245+01 | 10.48.168.160 | 33551
42539 | flows | 13822 | 10 | postgres | autovacuum: ANALYZE public.agg_t449_incoming_a3_src_net_and_dst_net_f5 | f | 2012-12-20 16:53:34.720815+01 | 2012-12-20 16:53:34.720815+01 | 2012-12-20 16:51:57.097049+01 | |
42539 | flows | 13658 | 16384 | asa | TRUNCATE flowpool_2_segment_218 | f | 2012-12-20 16:55:01.606198+01 | 2012-12-20 16:55:01.606266+01 | 2012-12-20 16:35:30.327792+01 | 10.48.168.160 | 33505
42539 | flows | 13680 | 16384 | asa | TRUNCATE flowpool_16_segment_228 | f | 2012-12-20 16:55:01.710645+01 | 2012-12-20 16:55:01.713793+01 | 2012-12-20 16:40:21.689958+01 | 10.48.168.160 | 33512
42539 | flows | 13870 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-20 16:56:05.58373+01 | 2012-12-20 16:56:05.58373+01 | 2012-12-20 16:56:03.432322+01 | | -1
42539 | flows | 13770 | 16384 | asa | TRUNCATE flowpool_31_segment_208 | f | 2012-12-20 16:55:01.782363+01 | 2012-12-20 16:55:01.791481+01 | 2012-12-20 16:45:24.082223+01 | 10.48.168.160 | 33544
42539 | flows | 13771 | 16384 | asa | TRUNCATE flowpool_17_segment_211 | f | 2012-12-20 16:55:01.729515+01 | 2012-12-20 16:55:01.736037+01 | 2012-12-20 16:45:24.147856+01 | 10.48.168.160 | 33545
42539 | flows | 13849 | 10 | postgres | autovacuum: ANALYZE public.agg_t1251_incoming_a7_src_port_and_proto_f5 | f | 2012-12-20 16:56:01.00984+01 | 2012-12-20 16:56:01.00984+01 | 2012-12-20 16:52:27.111586+01 | |
42539 | flows | 13795 | 16384 | asa | TRUNCATE flowpool_22_segment_217 | f | 2012-12-20 16:55:01.342442+01 | 2012-12-20 16:55:01.345095+01 | 2012-12-20 16:50:22.159256+01 | 10.48.168.160 | 33553

Here's the pg_locks during the issue :

locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted
transactionid | | | | | | 27720953 | | | | 12/0 | 13770 | ExclusiveLock | t
virtualxid | | | | | 11/3140 | | | | | 11/3140 | 13870 | ExclusiveLock | t
relation | 42539 | 106752 | | | | | | | | 13/0 | 13771 | ShareLock | t
relation | 42539 | 106752 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
relation | 42539 | 88542 | | | | | | | | 12/0 | 13770 | ShareLock | t
relation | 42539 | 88542 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
transactionid | | | | | | 27720938 | | | | 1/0 | 13792 | ExclusiveLock | t
transactionid | | | | | | 27720969 | | | | 7/2476 | 13793 | ExclusiveLock | t
virtualxid | | | | | 13/2821 | | | | | 13/0 | 13771 | ExclusiveLock | t
relation | 42539 | 106755 | | | | | | | | 13/0 | 13771 | ShareLock | t
relation | 42539 | 106755 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
relation | 42539 | 112041 | | | | | | | | 1/0 | 13792 | ShareLock | t
relation | 42539 | 112041 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
virtualxid | | | | | 1/15720 | | | | | 1/0 | 13792 | ExclusiveLock | t
virtualxid | | | | | 7/2476 | | | | | 7/2476 | 13793 | ExclusiveLock | t
relation | 42539 | 88544 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
transactionid | | | | | | 27720971 | | | | 9/2935 | 13658 | ExclusiveLock | t
relation | 42539 | 10969 | | | | | | | | 11/3140 | 13870 | AccessShareLock | t
relation | 42539 | 6697688 | | | | | | | | 14/2459 | 13849 | AccessShareLock | t
relation | 42539 | 49499 | | | | | | | | 8/15669 | 13822 | AccessShareLock | t
relation | 42539 | 88539 | | | | | | | | 12/0 | 13770 | ShareLock | t
relation | 42539 | 88539 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
relation | 42539 | 112038 | | | | | | | | 1/0 | 13792 | ShareLock | t
relation | 42539 | 112038 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
relation | 42539 | 106757 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
virtualxid | | | | | 6/2974 | | | | | 6/2974 | 13629 | ExclusiveLock | t
relation | 42539 | 50023 | | | | | | | | 3/3137 | 13804 | ShareUpdateExclusiveLock | t
virtualxid | | | | | 3/3137 | | | | | 3/3137 | 13804 | ExclusiveLock | t
relation | 42539 | 80808 | | | | | | | | 7/2476 | 13793 | RowExclusiveLock | t
transactionid | | | | | | 27720951 | | | | 13/0 | 13771 | ExclusiveLock | t
virtualxid | | | | | 9/2935 | | | | | 9/2935 | 13658 | ExclusiveLock | t
relation | 42539 | 49492 | | | | | | | | 8/15669 | 13822 | ShareUpdateExclusiveLock | t
virtualxid | | | | | 15/2206 | | | | | 15/2206 | 13795 | ExclusiveLock | t
relation | 42539 | 91227 | | | | | | | | 9/2935 | 13658 | RowExclusiveLock | t
relation | 42539 | 49498 | | | | | | | | 8/15669 | 13822 | AccessShareLock | t
relation | 42539 | 112043 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
relation | 42539 | 6697685 | | | | | | | | 14/2459 | 13849 | ShareUpdateExclusiveLock | t
virtualxid | | | | | 14/2459 | | | | | 14/2459 | 13849 | ExclusiveLock | t
relation | 42539 | 50026 | | | | | | | | 3/3137 | 13804 | RowExclusiveLock | t
relation | 42539 | 6697689 | | | | | | | | 14/2459 | 13849 | AccessShareLock | t
relation | 42539 | 86013 | | | | | | | | 6/2974 | 13629 | RowExclusiveLock | t
virtualxid | | | | | 8/15669 | | | | | 8/15669 | 13822 | ExclusiveLock | t
virtualxid | | | | | 12/14901 | | | | | 12/0 | 13770 | ExclusiveLock | t
transactionid | | | | | | 27720970 | | | | 6/2974 | 13629 | ExclusiveLock | t
relation | 42539 | 50027 | | | | | | | | 3/3137 | 13804 | RowExclusiveLock | t
(45 rows)

> Would it be possible to update your 8.4 installation to the latest
> bug fix (currently 8.4.15) to rule out the influence of any bugs
> which have already been fixed?

Is there a way to upgrade without having to dump all data and restore them after the upgrade?

Best regards,

Baptiste.

---
Baptiste LHOSTE
blhoste@alaloop.com

ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart
Téléphone : +33 (0) 5 59 41 51 10
www.alaloop.com
>> Would it be possible to update your 8.4 installation to the latest
>> bug fix (currently 8.4.15) to rule out the influence of any bugs
>> which have already been fixed?

> Is there a way to upgrade without having to dump all data and restore them after the upgrade?

I have checked, but Debian only offers an upgrade to 8.4.13.

Best regards,

Baptiste.

---
Baptiste LHOSTE
blhoste@alaloop.com

ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart
Téléphone : +33 (0) 5 59 41 51 10
www.alaloop.com
Baptiste LHOSTE wrote:

> Here's the pg_stat_activity during the issue :
> [no processes waiting]
>
> Here's the pg_locks during the issue :
> [all locks granted]

Was the blocking you described occurring at the time you captured this? It doesn't seem to be showing any problem.

> Is there a way to upgrade without having to dump all data and
> restore them after the upgrade?

A minor release (where the version number doesn't change before the second dot) never requires a dump and restore. There is sometimes a need to do some cleanup work; for example, if a bug is fixed which could corrupt a particular type of index, the release notes may recommend rebuilding all indexes of that type to repair any damage which may have occurred before the bug was fixed.

http://www.postgresql.org/support/versioning/

-Kevin
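Since this is a Debian package install, the minor upgrade Kevin describes is normally just a package operation. Assuming the package is Debian's postgresql-8.4 (as the paths shown elsewhere in the thread suggest), something like the following replaces the binaries in place and restarts the cluster without touching the data directory:

apt-get update
apt-cache policy postgresql-8.4   # shows which 8.4.x minor release the configured repositories offer
apt-get install postgresql-8.4    # installs that minor release and restarts the server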
> Was the blocking you described occurring at the time you captured
> this? It doesn't seem to be showing any problem.

Yes indeed. We have noticed that no process seems to be in a waiting state, but:
- before the autovacuum process starts to work on both kinds of tables, truncate and index creation take less than 2 seconds
- after the autovacuum process starts to work on both kinds of tables, truncate and index creation never end

We have to stop our process, then reset the autovacuum thresholds for the second kind of tables, then restart our process.

Is it possible that out-of-date PostgreSQL statistics could explain this behavior? Is it possible that the autovacuum process does not cancel itself when we perform a truncate or a create index?

> A minor release (where the version number doesn't change before the
> second dot) never requires a dump and restore. There is sometimes a
> need to do some cleanup work; for example, if a bug is fixed which
> could corrupt a particular type of index, the release notes may
> recommend rebuilding all indexes of that type to repair any damage
> which may have occurred before the bug was fixed.

As we mentioned in the previous mail, we can only upgrade to 8.4.13 with Debian.
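One quick way to test the "does autovacuum cancel itself?" question is to look for PostgreSQL's standard cancellation message in the server log while a TRUNCATE is blocked: when a conflicting lock request cancels an autovacuum worker, the worker logs "canceling autovacuum task". The log path below is an assumption for a default Debian 8.4 install:

grep -c "canceling autovacuum task" /var/log/postgresql/postgresql-8.4-main.log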
Baptiste LHOSTE wrote:

>> Was the blocking you described occurring at the time you
>> captured this? It doesn't seem to be showing any problem.
>
> Yes indeed. We have noticed that no process seems to be in a
> waiting state, but:
> - before the autovacuum process starts to work on both kinds
>   of tables, truncate and index creation take less than 2
>   seconds
> - after the autovacuum process starts to work on both kinds
>   of tables, truncate and index creation never end

Just so we know how to interpret that, how many minutes, hours, or days did you wait to see whether it would ever end?

> We have to stop our process, then reset the autovacuum thresholds
> for the second kind of tables, then restart our process.
>
> Is it possible that out-of-date PostgreSQL statistics could
> explain this behavior?
> Is it possible that the autovacuum process does not cancel itself
> when we perform a truncate or a create index?

At the time you captured data from pg_stat_activity and pg_locks, there were three autovacuum processes active, all running ANALYZE, and eight TRUNCATE commands active on "normal" connections. All the TRUNCATE statements started in the same second. One of the ANALYZE tasks started about a minute and a half before that; the other two started about a minute after the TRUNCATE statements. All are on different tables, and no heavyweight locking is showing up.

I've heard enough reports of behavior like this to believe that there is some sort of bug here, but the nature of it is not apparent. We could really use more clues.

If it doesn't cause too much pain to let it get into this state for a few minutes, it might help diagnose the issue if you could start `vmstat 1` before you let it get into this state, and capture `ps aux | grep postgres`, pg_stat_activity, and pg_locks at intervals while it is in this state. Looking at all of the above might suggest a cause. If we can find the cause, we can almost certainly fix it.

-Kevin
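A small shell loop along these lines would collect everything Kevin asks for in one pass (the file names and the 30-second interval are arbitrary choices; the connection parameters match the psql invocation shown later in the thread):

vmstat 1 > vmstat.log &
while true
do
    date
    ps aux | grep postgres
    psql -U asa -d flows -c "SELECT * FROM pg_stat_activity;"
    psql -U asa -d flows -c "SELECT * FROM pg_locks;"
    sleep 30
done >> capture.log 2>&1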
Hi,

Sorry for the late response, but I was on holiday for Christmas.

> Just so we know how to interpret that, how many minutes, hours, or
> days did you wait to see whether it would ever end?

I waited for 15 minutes in this state. I cannot wait longer without losing some data for our client.

> At the time you captured data from pg_stat_activity and pg_locks,
> there were three autovacuum processes active, all running ANALYZE,
> and eight TRUNCATE commands active on "normal" connections. All the
> TRUNCATE statements started in the same second. One of the ANALYZE
> tasks started about a minute and a half before that; the other two
> started about a minute after the TRUNCATE statements. All are on
> different tables, and no heavyweight locking is showing up.
>
> I've heard enough reports of behavior like this to believe that
> there is some sort of bug here, but the nature of it is not
> apparent. We could really use more clues.
>
> If it doesn't cause too much pain to let it get into this state for
> a few minutes, it might help diagnose the issue if you could start
> `vmstat 1` before you let it get into this state, and capture
> `ps aux | grep postgres`, pg_stat_activity, and pg_locks at
> intervals while it is in this state. Looking at all of the above
> might suggest a cause. If we can find the cause, we can almost
> certainly fix it.

#######################################################################################
#######################################################################################
#######################################################################################

Here is the state before changes of autovacuum thresholds :

LOAD 1 minute : 0.9

vmstat 1 :

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
1 12 833844 353632 174740 9998572 0 0 6916 124 1985 1014 0 0 80 20
0 10 833844 347848 174748 10004896 0 0 6280 28 2165 1079 0 0 89 10
1 0 833844 343244 174768 10009772 0 0 4892 264 2213 1337 1 0 86 14
0 0 833844 337244 174776 10015832 0 0 5116 256 1872 1851 2 0 97 1
0 14 833844 331728 174780 10021824 0 0 5892 8072 2017 1287 0 0 79 21
0 13 833844 325056 174792 10028556 0 0 6920 124 1902 1035 0 0 75 25
0 3 833844 317872 174800 10035104 0 0 6404 264 2043 1642 0 0 84 15
0 0 833844 310928 174816 10043196 0 0 7176 92 1861 1859 0 0 98 2
0 1 833844 304240 174832 10049404 0 0 6664 16 1811 1728 0 0 98 2
0 12 833844 297156 174836 10056628 0 0 6788 8868 2043 1329 0 0 94 6
0 0 833844 309184 174844 10057936 0 0 904 480 1260 1727 1 0 93 6
0 0 833844 308960 174844 10058236 0 0 0 0 702 1834 0 0 100 0
0 0 833844 308780 174848 10058664 0 0 0 76 786 1935 0 0 100 0
0 0 833844 308532 174848 10059052 0 0 0 0 747 1978 0 0 100 0
0 0 833844 308152 174856 10059836 0 0 0 9384 1024 2213 0 0 84 16
0 0 833844 308168 174856 10059780 0 0 0 0 718 1753 0 0 100 0
0 0 833844 307796 174864 10060124 0 0 0 16 717 1755 0 0 99 0
0 1 833844 307472 174868 10061052 0 0 0 20 793 1746 0 0 100 0
0 0 833844 307728 174872 10061004 0 0 0 4 734 1675 0 0 99 0
0 0 833844 307336 174888 10061736 0 0 0 9680 970 2041 0 0 91 9
0 0 833844 307688 174888 10061732 0 0 0 0 632 1502 0 0 100 0

ps aux | grep postgres :

postgres 9189 2.5 4.7 991916 774368 ? Ss 08:45 1:11 postgres: asa flows 10.48.168.160(35031) idle
postgres 9363 3.0 2.8 979456 470404 ? Ss 09:15 0:29 postgres: asa flows 10.48.168.160(35091) idle
postgres 9364 2.6 3.2 979872 527124 ?
Ss 09:15 0:26 postgres: asa flows 10.48.168.160(35092) idle root 9439 0.0 0.0 69512 3240 pts/3 S+ 09:19 0:00 /usr/lib/postgresql/8.4/bin/psql -U asa flows postgres 9440 0.0 0.0 887536 6152 ? Ss 09:19 0:00 postgres: asa flows [local] idle postgres 9450 2.5 2.2 980604 370920 ? Ss 09:20 0:17 postgres: asa flows 10.48.168.160(35104) idle postgres 9452 2.5 2.7 981196 459996 ? Ss 09:20 0:17 postgres: asa flows 10.48.168.160(35106) idle postgres 9505 6.0 1.3 966604 223428 ? Ss 09:30 0:05 postgres: asa flows 10.48.168.160(35121) idle postgres 9507 4.8 1.2 965212 198444 ? Ss 09:30 0:04 postgres: asa flows 10.48.168.160(35123) idle postgres 9508 4.8 1.1 975860 190392 ? Ss 09:30 0:04 postgres: asa flows 10.48.168.160(35124) idle root 9516 0.0 0.0 7548 728 pts/2 D+ 09:31 0:00 grep postgres postgres 11433 0.0 0.1 885744 26056 ? S Dec20 0:17 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main-c config_file=/etc/postgresql/8.4/main/postgresql.conf postgres 11435 0.0 4.8 889276 790004 ? Ss Dec20 10:44 postgres: writer process postgres 11436 0.0 0.0 886124 704 ? Ss Dec20 1:47 postgres: wal writer process postgres 11437 0.0 0.0 894472 5104 ? Ss Dec20 4:42 postgres: autovacuum launcher process postgres 11438 0.1 0.0 81956 4768 ? Ss Dec20 13:52 postgres: stats collector process postgres 12443 0.0 4.9 903820 810512 ? Ss Dec20 0:58 postgres: asa flows 10.48.168.163(58799) idle postgres 12444 0.0 5.1 945064 844512 ? Ss Dec20 6:09 postgres: asa flows 10.48.168.163(58800) idle postgres 12453 0.0 0.0 887384 6860 ? Ss Dec20 0:00 postgres: asa flows 10.48.168.163(58801) idle postgres 26141 0.3 5.3 976928 885872 ? Ss Dec22 33:47 postgres: asa flows 10.48.168.160(40884) idle select * from pg_stat_activity : datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+---------+---------------------------------+---------+------------------------------+-------------------------------+-------------------------------+---------------+------------- 42539 | flows | 9189 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:30.2453+01 | 2012-12-28 08:45:15.923245+01 | 10.48.168.160 | 35031 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-27 21:30:35.767577+01 | 2012-12-20 13:00:09.968633+01 | 10.48.168.163 | 58799 42539 | flows | 9505 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:26.735293+01 | 2012-12-28 09:30:17.273639+01 | 10.48.168.160 | 35121 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-27 15:15:22.785422+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01 | 10.48.168.163 | 58801 42539 | flows | 26141 | 16384 | asa | <IDLE> | f | | 2012-12-28 06:19:58.348495+01 | 2012-12-22 06:00:55.453306+01 | 10.48.168.160 | 40884 42539 | flows | 9363 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:26.016735+01 | 2012-12-28 09:15:23.006751+01 | 10.48.168.160 | 35091 42539 | flows | 9364 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:35.51892+01 | 2012-12-28 09:15:23.179672+01 | 10.48.168.160 | 35092 42539 | flows | 9440 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-28 09:32:12.41096+01| 2012-12-28 09:32:12.41096+01 | 2012-12-28 09:19:19.854963+01 | | -1 42539 | flows | 9450 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:18.432941+01 | 2012-12-28 09:20:20.321349+01 | 10.48.168.160 | 35104 42539 | flows | 9507 | 16384 | asa | <IDLE> | f | | 
2012-12-28 09:30:26.265634+01 | 2012-12-28 09:30:17.444456+01 | 10.48.168.160 | 35123 42539 | flows | 9508 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:26.763076+01 | 2012-12-28 09:30:17.487318+01 | 10.48.168.160 | 35124 42539 | flows | 9452 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:30:26.093083+01 | 2012-12-28 09:20:20.385564+01 | 10.48.168.160 | 35106 (13 rows) select * from pg_locks : locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction| pid | mode | granted ------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+--------- virtualxid | | | | | 9/122683 | | | | | 9/122683 | 9440 | ExclusiveLock | t relation | 42539 | 10969 | | | | | | | | 9/122683 | 9440 | AccessShareLock | t (2 rows) #######################################################################################" #######################################################################################" #######################################################################################" Here is the state 1 minute after changes of autovacuum thresholds : LOAD 1 minute : 10.0 vmstat 1 : 1 0 834028 290856 174980 10081904 0 0 0 0 688 1620 0 0 100 0 0 0 834028 290484 174984 10082368 0 0 0 0 748 1682 0 0 100 0 0 0 834028 289748 174992 10082764 0 0 0 24 765 1647 0 0 100 0 0 0 834028 289732 175000 10083084 0 0 0 2376 783 1818 0 0 97 3 0 0 834028 289128 175000 10083488 0 0 0 0 719 1668 0 0 100 0 0 0 834028 288392 175016 10083812 0 0 0 0 772 1607 0 0 99 0 0 0 834028 288740 175016 10084172 0 0 0 0 731 1662 0 0 100 0 1 0 834028 288128 175016 10084088 0 0 0 1508 771 1627 0 0 100 0 0 0 834028 287988 175024 10084868 0 0 0 824 814 1775 0 0 98 2 0 0 834028 287492 175024 10084840 0 0 0 0 692 1605 0 0 100 0 3 0 834028 275216 175064 10091948 64 0 2364 4352 1227 3882 1 0 97 1 0 0 833564 289228 175064 10076076 280 0 344 312 3777 2424 39 0 60 1 0 0 833564 289196 175064 10077140 0 0 0 56 733 1750 0 0 100 0 1 0 833564 292132 175080 10077124 0 0 4 2176 813 1888 0 0 95 5 0 0 833564 295636 175080 10077532 0 0 0 0 783 1676 0 0 100 0 0 0 833564 295360 175088 10078324 0 0 0 20 719 1685 0 0 99 0 0 0 833564 294864 175096 10078692 0 0 0 52 756 1778 0 0 100 0 1 0 833564 294864 175096 10079096 0 0 0 0 769 1802 0 0 100 0 0 0 833564 294924 175104 10079096 0 0 0 2420 889 1957 0 0 96 4 0 0 833564 295056 175112 10079132 0 0 0 48 704 1722 0 0 100 0 0 0 833564 294936 175112 10079492 0 0 0 0 737 1737 0 0 100 0 3 0 833564 294564 175112 10079832 0 0 0 0 750 1698 0 0 100 0 0 0 833564 286876 175124 10082320 0 0 2036 88 1258 1716 1 0 99 0 0 13 833564 279544 175132 10089292 0 0 6928 1888 2303 1134 0 0 68 32 0 9 833564 270596 175136 10096112 0 0 6916 32 2539 1176 0 0 68 32 0 6 833564 262924 175176 10102960 0 0 6952 44 2527 1287 0 0 84 15 0 1 833564 255472 175184 10110188 0 0 7048 20 2118 1579 0 0 92 7 0 1 833564 248296 175196 10117928 0 0 7044 244 2107 1992 0 0 95 5 0 11 833564 241104 175216 10125004 0 0 7176 716 2475 1284 0 0 78 21 0 7 833564 234160 175224 10131872 0 0 7044 44 2113 1282 0 0 78 22 0 0 833564 228340 175252 10138096 0 0 5524 284 2062 1849 1 0 91 7 1 0 833564 220792 175272 10145728 0 0 7244 0 1887 2029 0 0 99 1 0 0 833564 213072 175276 10152340 0 0 6532 40 2672 1997 0 0 98 1 0 9 833564 207104 175284 10158476 0 0 5768 1492 2368 1140 0 0 84 16 1 7 833564 201748 175288 10164284 0 0 5764 32 2377 1150 0 0 90 10 0 7 833564 195664 175292 10169732 0 0 5508 68 2599 1407 0 0 90 10 0 7 833564 189968 
175300 10175568 0 0 5768 36 2560 1377 0 0 79 21 0 9 833564 184732 175336 10180860 0 0 4876 1004 2380 1925 1 0 90 9 1 8 833564 177296 175340 10187792 0 0 6916 24 2206 1200 0 0 65 35 0 0 833564 170592 175356 10194744 0 0 6792 216 2091 1546 0 0 79 21 0 0 833564 162904 175364 10202792 0 0 7560 0 1886 1956 0 0 99 0 1 0 833564 155464 175372 10210116 0 0 7048 0 2290 2053 0 0 99 1 1 11 833564 147776 175380 10216412 0 0 5872 1592 2250 2072 1 0 98 1 0 0 833564 142928 175408 10221808 0 0 5120 400 2040 2277 1 0 92 7 0 0 833564 135696 175412 10229700 0 0 7300 0 2024 2025 0 0 99 0 0 0 833564 128504 175420 10237300 0 0 7176 0 2062 1951 0 0 99 0 0 0 833564 120940 175428 10244916 0 0 7432 0 2062 1917 0 0 99 0 0 10 833564 113848 175436 10252160 0 0 7176 2184 1952 1843 0 0 99 1 1 11 833564 106640 175440 10259608 0 0 7172 48 1982 1126 0 0 52 48 ps aux | grep postgres : postgres 9189 2.4 4.7 991916 774368 ? Ss 08:45 1:12 postgres: asa flows 10.48.168.160(35031) idle postgres 9363 2.6 2.8 979456 470404 ? Ss 09:15 0:30 postgres: asa flows 10.48.168.160(35091) idle postgres 9364 2.3 3.2 979872 527124 ? Ss 09:15 0:26 postgres: asa flows 10.48.168.160(35092) idle postgres 9450 2.1 2.2 980604 370920 ? Ss 09:20 0:18 postgres: asa flows 10.48.168.160(35104) idle postgres 9452 2.1 2.7 981196 459996 ? Ss 09:20 0:18 postgres: asa flows 10.48.168.160(35106) idle postgres 9505 2.3 1.3 966604 223488 ? Ss 09:30 0:05 postgres: asa flows 10.48.168.160(35121) idle postgres 9507 1.9 1.2 965212 198452 ? Ss 09:30 0:04 postgres: asa flows 10.48.168.160(35123) idle postgres 9508 1.9 1.1 975860 190428 ? Ss 09:30 0:05 postgres: asa flows 10.48.168.160(35124) idle postgres 9530 2.9 0.1 896732 29008 ? Ss 09:33 0:01 postgres: autovacuum worker process flows postgres 9565 1.2 0.1 900472 26768 ? Ss 09:34 0:00 postgres: autovacuum worker process flows root 9568 0.0 0.0 7548 864 pts/2 S+ 09:34 0:00 grep postgres postgres 11433 0.0 0.1 885744 26056 ? S Dec20 0:17 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main-c config_file=/etc/postgresql/8.4/main/postgresql.conf postgres 11435 0.0 4.8 889276 790004 ? Ss Dec20 10:44 postgres: writer process postgres 11436 0.0 0.0 886124 704 ? Ss Dec20 1:47 postgres: wal writer process postgres 11437 0.0 0.0 894472 5104 ? Ss Dec20 4:42 postgres: autovacuum launcher process postgres 11438 0.1 0.0 81956 4768 ? Ss Dec20 13:53 postgres: stats collector process postgres 12443 0.0 4.9 903820 810512 ? Ss Dec20 0:58 postgres: asa flows 10.48.168.163(58799) idle postgres 12444 0.0 5.1 945064 844596 ? Ss Dec20 6:09 postgres: asa flows 10.48.168.163(58800) idle postgres 12453 0.0 0.0 887384 6860 ? Ss Dec20 0:00 postgres: asa flows 10.48.168.163(58801) idle postgres 26141 0.3 5.3 976928 885892 ? 
Ss Dec22 33:47 postgres: asa flows 10.48.168.160(40884) idle select * from pg_stat_activity : datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+----------+------------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+---------------+------------- 42539 | flows | 9189 | 16384 | asa | TRUNCATE flowpool_31_segment_121 | f | 2012-12-28 09:41:38.438799+01 | 2012-12-28 09:41:38.441412+01 | 2012-12-28 08:45:15.923245+01 | 10.48.168.160| 35031 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-27 21:30:35.767577+01 | 2012-12-20 13:00:09.968633+01 | 10.48.168.163| 58799 42539 | flows | 9505 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:39:38.19554+01 | 2012-12-28 09:30:17.273639+01 | 10.48.168.160| 35121 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-27 15:15:22.785422+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163| 58800 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01 | 10.48.168.163| 58801 42539 | flows | 26141 | 16384 | asa | <IDLE> | f | | 2012-12-28 06:19:58.348495+01 | 2012-12-22 06:00:55.453306+01 | 10.48.168.160| 40884 42539 | flows | 9364 | 16384 | asa | TRUNCATE flowpool_15_segment_129 | f | 2012-12-28 09:41:38.864158+01 | 2012-12-28 09:41:38.866918+01 | 2012-12-28 09:15:23.179672+01 | 10.48.168.160| 35092 42539 | flows | 9530 | 10 | postgres | autovacuum: ANALYZE public.agg_t187_incoming_a3_src_net_and_dst_net_f5 | f | 2012-12-28 09:42:15.902363+01 | 2012-12-28 09:42:15.902363+01| 2012-12-28 09:33:37.591954+01 | | 42539 | flows | 9565 | 10 | postgres | autovacuum: VACUUM ANALYZE public.agg_t482_outgoing_a7_src_port_and_proto_f5| f | 2012-12-28 09:41:47.314486+01 | 2012-12-28 09:41:47.314486+01| 2012-12-28 09:34:07.609676+01 | | 42539 | flows | 9569 | 10 | postgres | autovacuum: ANALYZE public.agg_t1258_outgoing_a3_src_net_and_dst_net_f5 | f | 2012-12-28 09:42:01.611648+01 | 2012-12-28 09:42:01.611648+01| 2012-12-28 09:34:37.627542+01 | | 42539 | flows | 9571 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-28 09:43:08.646964+01 | 2012-12-28 09:43:08.646964+01 | 2012-12-28 09:34:44.817718+01 | | -1 42539 | flows | 9590 | 16384 | asa | TRUNCATE flowpool_29_segment_132 | f | 2012-12-28 09:41:38.618347+01 | 2012-12-28 09:41:38.621873+01 | 2012-12-28 09:39:20.604855+01 | 10.48.168.160| 35128 42539 | flows | 9508 | 16384 | asa | TRUNCATE flowpool_26_segment_139 | f | 2012-12-28 09:41:38.696041+01 | 2012-12-28 09:41:38.70312+01 | 2012-12-28 09:30:17.487318+01 | 10.48.168.160| 35124 42539 | flows | 9452 | 16384 | asa | TRUNCATE flowpool_21_segment_136 | f | 2012-12-28 09:41:38.884658+01 | 2012-12-28 09:41:38.888602+01 | 2012-12-28 09:20:20.385564+01 | 10.48.168.160| 35106 42539 | flows | 9592 | 16384 | asa | TRUNCATE flowpool_17_segment_124 | f | 2012-12-28 09:41:38.598054+01 | 2012-12-28 09:41:38.601024+01 | 2012-12-28 09:39:20.649273+01 | 10.48.168.160| 35130 42539 | flows | 9593 | 16384 | asa | TRUNCATE flowpool_2_segment_131 | f | 2012-12-28 09:41:37.902248+01 | 2012-12-28 09:41:37.902334+01 | 2012-12-28 09:39:20.671335+01 | 10.48.168.160| 35131 (16 rows) select * from pg_locks : locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction| pid | mode | granted 
---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+--------------------------+--------- virtualxid | | | | | 20/33 | | | | | 20/33 | 9593 | ExclusiveLock | t relation | 42539 | 2168418 | | | | | | | | 12/128804 | 9569 | AccessShareLock | t relation | 42539 | 93056 | | | | | | | | 15/111991 | 9590 | AccessExclusiveLock | t virtualxid | | | | | 8/140596 | | | | | 8/140596 | 9364 | ExclusiveLock | t relation | 42539 | 103506 | | | | | | | | 16/99223 | 9508 | ShareLock | t relation | 42539 | 103506 | | | | | | | | 16/99223 | 9508 | AccessExclusiveLock | t relation | 42539 | 105972 | | | | | | | | 19/303 | 9592 | ShareLock | t relation | 42539 | 105972 | | | | | | | | 19/303 | 9592 | AccessExclusiveLock | t relation | 42539 | 103511 | | | | | | | | 16/99223 | 9508 | AccessExclusiveLock | t relation | 42539 | 105969 | | | | | | | | 19/303 | 9592 | ShareLock | t relation | 42539 | 105969 | | | | | | | | 19/303 | 9592 | AccessExclusiveLock | t relation | 42539 | 85230 | | | | | | | | 8/140596 | 9364 | ShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140596 | 9364 | AccessExclusiveLock | t transactionid | | | | | | 28615213 | | | | 1/170439 | 9189 | ExclusiveLock | t virtualxid | | | | | 19/303 | | | | | 19/303 | 9592 | ExclusiveLock | t virtualxid | | | | | 12/128804 | | | | | 12/128804 | 9569 | ExclusiveLock | t relation | 42539 | 87756 | | | | | | | | 1/170439 | 9189 | ShareLock | t relation | 42539 | 87756 | | | | | | | | 1/170439 | 9189 | AccessExclusiveLock | t relation | 42539 | 87761 | | | | | | | | 1/170439 | 9189 | AccessExclusiveLock | t relation | 42539 | 82695 | | | | | | | | 17/43658 | 9452 | ShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43658 | 9452 | AccessExclusiveLock | t transactionid | | | | | | 28615204 | | | | 20/33 | 9593 | ExclusiveLock | t transactionid | | | | | | 28615214 | | | | 19/303 | 9592 | ExclusiveLock | t relation | 42539 | 105974 | | | | | | | | 19/303 | 9592 | AccessExclusiveLock | t relation | 42539 | 50023 | | | | | | | | 10/120119 | 9565 | ShareUpdateExclusiveLock | t virtualxid | | | | | 1/170439 | | | | | 1/170439 | 9189 | ExclusiveLock | t virtualxid | | | | | 15/111991 | | | | | 15/111991 | 9590 | ExclusiveLock | t relation | 42539 | 45409 | | | | | | | | 9/122742 | 9530 | ShareUpdateExclusiveLock | t relation | 42539 | 10969 | | | | | | | | 13/114750 | 9571 | AccessShareLock | t relation | 42539 | 90449 | | | | | | | | 20/33 | 9593 | AccessExclusiveLock | t relation | 42539 | 82698 | | | | | | | | 17/43658 | 9452 | ShareLock | t relation | 42539 | 82698 | | | | | | | | 17/43658 | 9452 | AccessExclusiveLock | t transactionid | | | | | | 28615218 | | | | 17/43658 | 9452 | ExclusiveLock | t virtualxid | | | | | 13/114750 | | | | | 13/114750 | 9571 | ExclusiveLock | t virtualxid | | | | | 10/120119 | | | | | 10/120119 | 9565 | ExclusiveLock | t relation | 42539 | 90444 | | | | | | | | 20/33 | 9593 | ShareLock | t relation | 42539 | 90444 | | | | | | | | 20/33 | 9593 | AccessExclusiveLock | t relation | 42539 | 93051 | | | | | | | | 15/111991 | 9590 | ShareLock | t relation | 42539 | 93051 | | | | | | | | 15/111991 | 9590 | AccessExclusiveLock | t relation | 42539 | 93054 | | | | | | | | 15/111991 | 9590 | ShareLock | t relation | 42539 | 93054 | | | | | | | | 15/111991 | 9590 | AccessExclusiveLock | t relation | 42539 | 103509 | | | | | | | | 16/99223 | 9508 | ShareLock | t relation | 42539 | 103509 | | | | | | | | 16/99223 | 9508 | AccessExclusiveLock 
| t relation | 42539 | 45416 | | | | | | | | 9/122742 | 9530 | AccessShareLock | t virtualxid | | | | | 16/99223 | | | | | 16/99223 | 9508 | ExclusiveLock | t relation | 42539 | 45415 | | | | | | | | 9/122742 | 9530 | AccessShareLock | t virtualxid | | | | | 9/122742 | | | | | 9/122742 | 9530 | ExclusiveLock | t relation | 42539 | 82700 | | | | | | | | 17/43658 | 9452 | AccessExclusiveLock | t relation | 42539 | 90447 | | | | | | | | 20/33 | 9593 | ShareLock | t relation | 42539 | 90447 | | | | | | | | 20/33 | 9593 | AccessExclusiveLock | t relation | 42539 | 50027 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t transactionid | | | | | | 28615216 | | | | 16/99223 | 9508 | ExclusiveLock | t relation | 42539 | 2168412 | | | | | | | | 12/128804 | 9569 | ShareUpdateExclusiveLock | t virtualxid | | | | | 17/43658 | | | | | 17/43658 | 9452 | ExclusiveLock | t relation | 42539 | 87759 | | | | | | | | 1/170439 | 9189 | ShareLock | t relation | 42539 | 87759 | | | | | | | | 1/170439 | 9189 | AccessExclusiveLock | t transactionid | | | | | | 28615215 | | | | 15/111991 | 9590 | ExclusiveLock | t relation | 42539 | 50026 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t transactionid | | | | | | 28615217 | | | | 8/140596 | 9364 | ExclusiveLock | t relation | 42539 | 2168419 | | | | | | | | 12/128804 | 9569 | AccessShareLock | t relation | 42539 | 85233 | | | | | | | | 8/140596 | 9364 | ShareLock | t relation | 42539 | 85233 | | | | | | | | 8/140596 | 9364 | AccessExclusiveLock | t relation | 42539 | 85235 | | | | | | | | 8/140596 | 9364 | AccessExclusiveLock | t (63 rows) #######################################################################################" #######################################################################################" #######################################################################################" Here is the state 5 minutes after changes of autovacuum thresholds : LOAD 1 minute : 16.4 vmstat 1 : procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 14 835088 58252 161736 10309624 0 0 5580 52 1381 1427 0 0 49 51 0 10 835088 57012 161724 10310472 0 0 6304 56 1403 1823 0 0 65 35 0 7 835088 56328 161700 10312044 0 0 5892 40 1367 2482 0 0 86 14 0 5 835088 55584 161692 10311892 0 0 6232 232 1454 2675 0 0 92 7 0 6 835088 59444 161616 10308496 0 0 7240 3108 1508 2407 0 0 93 6 0 13 835088 61024 161620 10306632 0 0 5040 20 1266 1514 0 0 81 19 1 15 835088 59412 161636 10307920 0 0 6184 1012 1511 1503 1 0 65 34 0 13 835088 55320 161644 10311396 0 0 4908 4548 1460 1466 1 0 68 31 0 10 835088 55676 161624 10311552 0 0 5460 1236 1322 1595 0 0 64 35 0 8 835088 55676 161616 10311452 0 0 5412 488 1386 1709 0 0 75 25 0 7 835088 55740 161624 10311924 0 0 5264 204 1302 2145 0 0 79 20 0 7 835088 59068 161624 10308456 0 0 5948 568 1363 2454 0 0 79 20 0 9 835088 57580 161640 10309812 0 0 7012 1548 1443 2664 0 0 80 19 0 7 835088 56648 161644 10311844 0 0 6912 56 1444 2452 0 0 81 18 0 7 835088 55176 161648 10313636 0 0 7200 716 1397 2503 0 0 87 13 0 7 835088 63104 161572 10305228 0 0 4960 12 1324 2452 0 0 91 9 1 5 835088 58144 161580 10309428 0 0 3960 2160 1334 2551 1 0 87 12 0 5 835088 57880 161520 10309432 0 0 5432 4184 1517 2342 1 0 86 13 2 6 835088 58652 161348 10308900 0 0 4800 0 1409 2222 0 0 87 13 0 6 835088 58496 161320 10308456 0 0 5144 15740 1426 2231 0 0 87 13 0 6 835088 57396 161340 10309544 0 0 5956 16 1510 2359 0 0 87 13 0 7 835088 57928 161344 10308664 0 0 4996 12800 1510 2286 0 0 
87 13 0 5 835088 59524 161360 10307244 0 0 3892 15032 1490 2224 1 0 90 9 0 13 835088 58364 161372 10307468 0 0 5272 120 1527 2096 0 0 94 6 1 11 835088 57504 161364 10308012 0 0 4968 11188 1496 1830 1 0 87 12 0 12 835088 59280 161376 10307464 0 0 4732 348 1448 1287 1 0 87 12 0 11 835088 57548 161152 10308528 0 0 6096 632 1388 1421 0 0 87 13 0 12 835088 57380 160996 10309848 0 0 5892 0 1426 1431 0 0 87 13 0 13 835088 57184 160936 10310072 0 0 5184 1032 1451 1332 0 0 86 14 0 12 835088 59812 160744 10308436 0 0 4180 0 1363 1349 0 0 87 13 0 11 835088 59892 160756 10308440 0 0 5664 4 1465 1280 0 0 87 13 0 12 835088 55840 160764 10312860 0 0 4408 2196 1343 1280 0 0 91 9 0 11 835088 59840 160564 10309784 0 0 5908 4 1515 1279 0 0 87 13 0 10 835088 56016 160572 10313336 0 0 4088 0 1288 1272 0 0 87 13 0 11 835088 56868 160512 10314216 0 0 5856 368 1333 1730 0 0 86 13 0 11 835088 57364 160500 10314208 0 0 5388 12 1736 1659 0 0 86 14 0 12 835088 57240 160488 10314340 0 0 5320 0 1749 1259 0 0 86 14 0 9 835088 56868 160472 10314544 0 0 5588 16 1756 1405 0 0 87 13 0 10 835088 56752 160464 10315480 0 0 5472 40 1684 1571 0 0 86 13 0 6 835088 56908 160448 10314812 0 0 5008 44 1707 2104 0 0 91 9 0 15 835080 74532 160464 10320344 0 0 5244 2280 1656 1497 0 0 87 12 0 14 835080 68704 160476 10326004 0 0 5732 20 1700 1306 0 0 78 21 0 11 835080 63992 160476 10330436 0 0 4400 32 1735 1399 0 0 76 24 0 11 835080 58836 160496 10335780 0 0 5416 420 1883 1636 1 0 77 22 1 9 835080 57512 160464 10325176 0 0 4892 1220 2289 2061 4 0 73 23 0 6 835080 59156 160348 10321712 0 0 4908 11312 1911 1892 3 0 69 28 0 7 835080 59048 160340 10323272 0 0 5740 0 1848 1529 0 0 73 27 2 17 835080 57544 160348 10324648 0 0 6196 1340 1824 1538 0 0 82 18 0 13 835080 85816 160136 10297068 0 0 5928 8 1997 1372 0 0 80 19 0 16 835080 77104 160144 10305592 0 0 8336 156 2282 1360 1 0 74 25 ps aux | grep postgres : postgres 9189 2.2 4.7 991916 777892 ? Ss 08:45 1:13 postgres: asa flows 10.48.168.160(35031) idle in transaction postgres 9363 2.3 3.0 973312 493688 ? Ss 09:15 0:33 postgres: asa flows 10.48.168.160(35091) idle in transaction postgres 9364 2.2 3.4 979872 573856 ? Ss 09:15 0:32 postgres: asa flows 10.48.168.160(35092) idle in transaction postgres 9450 1.6 2.3 980604 386856 ? Ss 09:20 0:19 postgres: asa flows 10.48.168.160(35104) idle in transaction postgres 9452 1.8 2.8 981196 476572 ? Ss 09:20 0:21 postgres: asa flows 10.48.168.160(35106) idle postgres 9505 1.4 1.4 959420 245852 ? Ss 09:30 0:08 postgres: asa flows 10.48.168.160(35121) idle in transaction postgres 9507 1.1 1.2 955996 204500 ? Ss 09:30 0:06 postgres: asa flows 10.48.168.160(35123) idle in transaction postgres 9508 1.3 1.3 975348 223700 ? Ss 09:30 0:07 postgres: asa flows 10.48.168.160(35124) idle postgres 9530 1.6 0.2 896732 33000 ? Ss 09:33 0:05 postgres: autovacuum worker process flows postgres 9565 0.3 0.1 900472 26828 ? Ds 09:34 0:01 postgres: autovacuum worker process flows postgres 9569 1.0 0.1 896732 30896 ? Ds 09:34 0:02 postgres: autovacuum worker process flows root 9570 0.0 0.0 69516 3236 pts/3 S+ 09:34 0:00 /usr/lib/postgresql/8.4/bin/psql -U asa flows postgres 9571 0.0 0.0 887536 6160 ? Ss 09:34 0:00 postgres: asa flows [local] idle postgres 9590 19.0 0.6 927648 99308 ? Ss 09:39 0:01 postgres: asa flows 10.48.168.160(35128) idle in transaction postgres 9591 18.8 0.6 931956 101248 ? Ss 09:39 0:01 postgres: asa flows 10.48.168.160(35129) idle in transaction postgres 9592 10.6 0.4 927432 81084 ? 
Ss 09:39 0:00 postgres: asa flows 10.48.168.160(35130) idle postgres 9593 13.3 0.5 927836 87920 ? Ss 09:39 0:00 postgres: asa flows 10.48.168.160(35131) idle root 9595 0.0 0.0 7552 876 pts/2 S+ 09:39 0:00 grep postgres postgres 11433 0.0 0.1 885744 26056 ? S Dec20 0:17 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main-c config_file=/etc/postgresql/8.4/main/postgresql.conf postgres 11435 0.0 4.8 889276 790012 ? Ss Dec20 10:44 postgres: writer process postgres 11436 0.0 0.0 886124 704 ? Ss Dec20 1:47 postgres: wal writer process postgres 11437 0.0 0.0 894472 5104 ? Ss Dec20 4:42 postgres: autovacuum launcher process postgres 11438 0.1 0.0 81956 4768 ? Ss Dec20 13:53 postgres: stats collector process postgres 12443 0.0 4.9 903820 810512 ? Ss Dec20 0:58 postgres: asa flows 10.48.168.163(58799) idle postgres 12444 0.0 5.1 945064 844596 ? Ss Dec20 6:09 postgres: asa flows 10.48.168.163(58800) idle postgres 12453 0.0 0.0 887384 6864 ? Ss Dec20 0:00 postgres: asa flows 10.48.168.163(58801) idle postgres 26141 0.3 5.3 976928 885892 ? Ss Dec22 33:47 postgres: asa flows 10.48.168.160(40884) idle select * from pg_stat_activity : datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+----------+-------------------------------------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+---------------+------------- 42539 | flows | 9189 | 16384 | asa | CREATE INDEX "flowpool_31_segment_121_timestamp" ON "flowpool_31_segment_121"USING btree (timestamp) | f | 2012-12-28 09:45:07.277654+01 | 2012-12-28 09:45:07.277704+01| 2012-12-28 08:45:15.923245+01 | 10.48.168.160 | 35031 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-27 21:30:35.767577+01 | 2012-12-20 13:00:09.968633+01| 10.48.168.163 | 58799 42539 | flows | 9505 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:39:38.19554+01 | 2012-12-28 09:30:17.273639+01| 10.48.168.160 | 35121 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-27 15:15:22.785422+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01| 10.48.168.163 | 58801 42539 | flows | 26141 | 16384 | asa | <IDLE> | f | | 2012-12-28 06:19:58.348495+01 | 2012-12-22 06:00:55.453306+01| 10.48.168.160 | 40884 42539 | flows | 9364 | 16384 | asa | CREATE INDEX "flowpool_15_segment_129_timestamp" ON "flowpool_15_segment_129"USING btree (timestamp) | f | 2012-12-28 09:45:04.464606+01 | 2012-12-28 09:45:04.464668+01| 2012-12-28 09:15:23.179672+01 | 10.48.168.160 | 35092 42539 | flows | 9530 | 10 | postgres | autovacuum: ANALYZE public.agg_t93_incoming_a7_src_port_and_proto_f5 | f | 2012-12-28 09:47:25.338923+01 | 2012-12-28 09:47:25.338923+01 | 2012-12-28 09:33:37.591954+01| | 42539 | flows | 9565 | 10 | postgres | autovacuum: VACUUM ANALYZE public.agg_t482_outgoing_a7_src_port_and_proto_f5 | f | 2012-12-28 09:41:47.314486+01 | 2012-12-2809:41:47.314486+01 | 2012-12-28 09:34:07.609676+01 | | 42539 | flows | 9569 | 10 | postgres | autovacuum: ANALYZE public.agg_t1257_incoming_a7_src_port_and_proto_f5 | f | 2012-12-28 09:47:50.63221+01 | 2012-12-28 09:47:50.63221+01 | 2012-12-28 09:34:37.627542+01 | | 42539 | flows | 9571 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-28 09:48:02.711778+01 | 
2012-12-28 09:48:02.711778+01 | 2012-12-28 09:34:44.817718+01| | -1 42539 | flows | 9590 | 16384 | asa | CREATE INDEX "flowpool_29_segment_132_timestamp" ON "flowpool_29_segment_132"USING btree (timestamp) | f | 2012-12-28 09:45:05.258448+01 | 2012-12-28 09:45:05.258499+01| 2012-12-28 09:39:20.604855+01 | 10.48.168.160 | 35128 42539 | flows | 9508 | 16384 | asa | CREATE INDEX "flowpool_26_segment_139_timestamp" ON "flowpool_26_segment_139"USING btree (timestamp) | f | 2012-12-28 09:45:04.296674+01 | 2012-12-28 09:45:04.296732+01| 2012-12-28 09:30:17.487318+01 | 10.48.168.160 | 35124 42539 | flows | 9452 | 16384 | asa | CREATE INDEX "flowpool_21_segment_136_timestamp" ON "flowpool_21_segment_136"USING btree (timestamp) | f | 2012-12-28 09:45:06.741216+01 | 2012-12-28 09:45:06.741271+01| 2012-12-28 09:20:20.385564+01 | 10.48.168.160 | 35106 42539 | flows | 9592 | 16384 | asa | CREATE INDEX "flowpool_17_segment_124_timestamp" ON "flowpool_17_segment_124"USING btree (timestamp) | f | 2012-12-28 09:45:05.966266+01 | 2012-12-28 09:45:05.966332+01| 2012-12-28 09:39:20.649273+01 | 10.48.168.160 | 35130 42539 | flows | 9593 | 16384 | asa | CREATE INDEX "flowpool_2_segment_131_timestamp" ON "flowpool_2_segment_131"USING btree (timestamp) | f | 2012-12-28 09:45:03.358737+01 | 2012-12-28 09:45:03.358818+01| 2012-12-28 09:39:20.671335+01 | 10.48.168.160 | 35131 (16 rows) select * from pg_locks : locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction| pid | mode | granted ---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+--------------------------+--------- relation | 42539 | 6691584 | | | | | | | | 9/122762 | 9530 | ShareUpdateExclusiveLock | t relation | 42539 | 2168206 | | | | | | | | 12/128806 | 9569 | AccessShareLock | t transactionid | | | | | | 28615251 | | | | 8/140603 | 9364 | ExclusiveLock | t relation | 42539 | 6691588 | | | | | | | | 9/122762 | 9530 | AccessShareLock | t transactionid | | | | | | 28615278 | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | AccessShareLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | ShareLock | t relation | 42539 | 7252363 | | | | | | | | 1/170450 | 9189 | AccessExclusiveLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | AccessShareLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | ShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | AccessShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | ShareLock | t relation | 42539 | 7252359 | | | | | | | | 8/140603 | 9364 | AccessExclusiveLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | AccessShareLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | ShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | AccessShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | ShareLock | t transactionid | | | | | | 28615238 | | | | 20/36 | 9593 | ExclusiveLock | t transactionid | | | | | | 28615271 | | | | 17/43667 | 9452 | ExclusiveLock | t virtualxid | | | | | 17/43667 | | | | | 17/43667 | 9452 | ExclusiveLock | t relation | 42539 | 50023 | | | | | | | | 10/120119 | 9565 | ShareUpdateExclusiveLock | t relation | 42539 | 7252361 | | | | | | | | 19/311 | 9592 | AccessExclusiveLock | t virtualxid | | | | | 9/122762 | | | | | 9/122762 | 9530 | ExclusiveLock | t 
transactionid | | | | | | 28615262 | | | | 15/112000 | 9590 | ExclusiveLock | t virtualxid | | | | | 13/114752 | | | | | 13/114752 | 9571 | ExclusiveLock | t relation | 42539 | 6691587 | | | | | | | | 9/122762 | 9530 | AccessShareLock | t transactionid | | | | | | 28615266 | | | | 19/311 | 9592 | ExclusiveLock | t virtualxid | | | | | 1/170450 | | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 10969 | | | | | | | | 13/114752 | 9571 | AccessShareLock | t virtualxid | | | | | 19/311 | | | | | 19/311 | 9592 | ExclusiveLock | t relation | 42539 | 7252362 | | | | | | | | 17/43667 | 9452 | AccessExclusiveLock | t virtualxid | | | | | 10/120119 | | | | | 10/120119 | 9565 | ExclusiveLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | AccessShareLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | ShareLock | t relation | 42539 | 7252360 | | | | | | | | 15/112000 | 9590 | AccessExclusiveLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | AccessShareLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | ShareLock | t transactionid | | | | | | 28615245 | | | | 16/99231 | 9508 | ExclusiveLock | t relation | 42539 | 2168202 | | | | | | | | 12/128806 | 9569 | ShareUpdateExclusiveLock | t virtualxid | | | | | 12/128806 | | | | | 12/128806 | 9569 | ExclusiveLock | t virtualxid | | | | | 20/36 | | | | | 20/36 | 9593 | ExclusiveLock | t relation | 42539 | 7252358 | | | | | | | | 16/99231 | 9508 | AccessExclusiveLock | t relation | 42539 | 7252357 | | | | | | | | 20/36 | 9593 | AccessExclusiveLock | t relation | 42539 | 2168205 | | | | | | | | 12/128806 | 9569 | AccessShareLock | t relation | 42539 | 50027 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t virtualxid | | | | | 15/112000 | | | | | 15/112000 | 9590 | ExclusiveLock | t virtualxid | | | | | 16/99231 | | | | | 16/99231 | 9508 | ExclusiveLock | t relation | 42539 | 50026 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t virtualxid | | | | | 8/140603 | | | | | 8/140603 | 9364 | ExclusiveLock | t (49 rows) #######################################################################################" #######################################################################################" #######################################################################################" Here is the state 10 minutes after changes of autovacuum thresholds : LOAD 1 minute : 15.6 vmstat 1 : procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 10 834996 55696 118652 10404024 0 0 4820 328 1421 1617 0 0 56 44 0 8 834996 56680 118660 10402664 0 0 3992 8 1396 1769 0 0 66 33 0 7 834996 59024 118664 10400968 0 0 3204 348 1397 1710 0 0 73 27 0 7 834996 60140 118672 10399404 0 0 3360 72 1558 2342 0 0 75 24 0 6 834996 55692 118684 10403244 0 0 3808 308 1617 1977 0 0 76 24 0 6 834996 57304 118692 10401984 0 0 3912 0 1498 1592 0 0 81 19 0 12 834996 59408 118700 10399908 0 0 3000 748 1652 1805 0 0 77 23 0 13 834996 56432 118704 10403092 0 0 3276 284 1654 1578 0 0 69 31 0 13 834996 58292 118712 10401204 0 0 3496 328 1483 1552 0 0 69 31 0 12 834996 55120 118724 10404192 0 0 2892 432 1473 1562 0 0 69 31 0 12 834996 78920 118652 10380460 0 0 3496 380 1428 1588 0 0 69 31 0 11 834996 75588 118660 10383892 0 0 3136 556 1351 1581 0 0 69 31 0 7 834996 71356 118672 10388272 0 0 4276 396 1588 1885 0 0 69 31 0 6 834996 67484 118680 10391660 0 0 3264 468 1545 2624 0 0 81 19 0 6 834996 63636 118688 10395932 0 0 3668 320 1578 2570 0 0 81 
19 0 5 834996 59668 118696 10399928 0 0 3592 464 1457 2379 0 0 81 19 0 14 834996 55696 118708 10403812 0 0 3776 8 1292 1688 0 0 68 31 0 14 834996 57264 118716 10401760 0 0 4256 320 1352 1548 0 0 50 50 0 15 834996 59096 118728 10400508 0 0 3668 464 1381 1549 0 0 50 50 0 15 834996 59696 118732 10399452 0 0 5320 288 1422 1434 0 0 51 49 0 14 834996 56720 118736 10402704 0 0 3252 700 1348 1526 0 0 50 50 0 14 834996 58604 118740 10401152 0 0 3636 208 1432 1560 0 0 51 49 0 13 834996 55488 118748 10404420 0 0 3648 264 1371 1638 0 0 56 44 1 13 834996 56532 118756 10403112 0 0 4168 292 1383 1646 0 0 60 40 0 10 834996 58512 118760 10400596 0 0 3520 376 1334 1694 0 0 73 27 0 10 834996 60372 118764 10398720 0 0 3952 344 1370 1827 0 0 69 31 0 8 834996 55284 118772 10403200 0 0 4216 24 1318 2107 0 0 71 29 0 6 834996 56772 118780 10402160 0 0 3584 352 1188 2098 0 0 80 19 0 7 834996 59004 118776 10399448 0 0 3112 504 1194 1742 0 0 79 21 0 10 834996 55176 118784 10403092 0 0 3864 400 1315 1794 0 0 75 25 1 12 834996 55796 118772 10402840 0 0 4328 612 1355 1811 0 0 75 25 0 11 834996 58012 118760 10400376 0 0 2860 844 1205 1595 0 0 75 25 0 11 834996 59888 118772 10399060 0 0 3620 328 1285 1562 0 0 75 25 0 10 834996 55656 118780 10402796 0 0 4024 464 1372 1925 0 0 75 25 0 6 834996 57472 118788 10400696 0 0 3856 380 1362 2005 0 0 77 23 0 7 834996 58588 118792 10399660 0 0 4192 328 1410 2415 0 0 80 20 0 5 834996 55116 118796 10402884 0 0 3056 8 1354 2426 0 0 81 19 1 6 834996 57364 118808 10400528 0 0 2764 304 1225 2465 0 0 81 19 0 6 834996 58620 118812 10399676 0 0 3920 0 1243 2197 0 0 80 19 0 17 834996 55268 118820 10402988 0 0 3004 480 1202 1796 0 0 79 21 0 16 834996 58368 118824 10400072 0 0 3240 220 1097 1543 0 0 75 25 0 16 834996 59856 118836 10398692 0 0 4188 0 1214 1492 0 0 75 25 0 16 834996 56488 118840 10401720 0 0 3228 628 1206 1455 0 0 75 25 0 16 834996 59216 118848 10399460 0 0 3792 428 1264 1397 0 0 76 24 ps aux | grep postgres : postgres 9189 1.9 4.7 991916 782084 ? Ds 08:45 1:17 postgres: asa flows 10.48.168.160(35031) CREATE INDEX postgres 9364 1.5 3.6 979872 598028 ? Ds 09:15 0:36 postgres: asa flows 10.48.168.160(35092) CREATE INDEX postgres 9452 1.2 3.0 981196 499464 ? Ds 09:20 0:25 postgres: asa flows 10.48.168.160(35106) CREATE INDEX postgres 9505 0.5 1.5 965564 249164 ? Ss 09:30 0:08 postgres: asa flows 10.48.168.160(35121) idle postgres 9508 0.8 1.5 955892 255448 ? Ds 09:30 0:12 postgres: asa flows 10.48.168.160(35124) CREATE INDEX postgres 9530 1.1 0.2 896732 38028 ? Ds 09:33 0:13 postgres: autovacuum worker process flows postgres 9565 0.5 0.2 1093852 34268 ? Ss 09:34 0:06 postgres: autovacuum worker process flows postgres 9569 0.5 0.1 896732 31948 ? Ds 09:34 0:06 postgres: autovacuum worker process flows root 9570 0.0 0.0 69516 3264 pts/3 S+ 09:34 0:00 /usr/lib/postgresql/8.4/bin/psql -U asa flows postgres 9571 0.0 0.0 887600 6768 ? Ss 09:34 0:00 postgres: asa flows [local] idle postgres 9590 0.6 1.2 965716 211248 ? Ds 09:39 0:05 postgres: asa flows 10.48.168.160(35128) CREATE INDEX postgres 9592 0.6 1.2 963228 199132 ? Ds 09:39 0:05 postgres: asa flows 10.48.168.160(35130) CREATE INDEX postgres 9593 1.7 1.9 962992 327904 ? Ds 09:39 0:14 postgres: asa flows 10.48.168.160(35131) CREATE INDEX root 9623 0.0 0.0 7552 872 pts/2 S+ 09:53 0:00 grep postgres postgres 11433 0.0 0.1 885744 26056 ? S Dec20 0:17 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main-c config_file=/etc/postgresql/8.4/main/postgresql.conf postgres 11435 0.0 4.8 889276 790016 ? 
Ds Dec20 10:44 postgres: writer process postgres 11436 0.0 0.0 886124 704 ? Ss Dec20 1:47 postgres: wal writer process postgres 11437 0.0 0.0 894472 5104 ? Ss Dec20 4:42 postgres: autovacuum launcher process postgres 11438 0.1 0.0 81956 4768 ? Ss Dec20 13:53 postgres: stats collector process postgres 12443 0.0 4.9 903820 810512 ? Ss Dec20 0:58 postgres: asa flows 10.48.168.163(58799) idle postgres 12444 0.0 5.1 945064 844596 ? Ss Dec20 6:09 postgres: asa flows 10.48.168.163(58800) idle postgres 12453 0.0 0.0 887384 6864 ? Ss Dec20 0:00 postgres: asa flows 10.48.168.163(58801) idle postgres 26141 0.3 5.3 976928 885892 ? Ss Dec22 33:47 postgres: asa flows 10.48.168.160(40884) idle select * from pg_stat_activity : datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+----------+-------------------------------------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+---------------+------------- 42539 | flows | 9189 | 16384 | asa | CREATE INDEX "flowpool_31_segment_121_timestamp" ON "flowpool_31_segment_121"USING btree (timestamp) | f | 2012-12-28 09:45:07.277654+01 | 2012-12-28 09:45:07.277704+01| 2012-12-28 08:45:15.923245+01 | 10.48.168.160 | 35031 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-27 21:30:35.767577+01 | 2012-12-20 13:00:09.968633+01| 10.48.168.163 | 58799 42539 | flows | 9505 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:39:38.19554+01 | 2012-12-28 09:30:17.273639+01| 10.48.168.160 | 35121 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-27 15:15:22.785422+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01| 10.48.168.163 | 58801 42539 | flows | 26141 | 16384 | asa | <IDLE> | f | | 2012-12-28 06:19:58.348495+01 | 2012-12-22 06:00:55.453306+01| 10.48.168.160 | 40884 42539 | flows | 9364 | 16384 | asa | CREATE INDEX "flowpool_15_segment_129_timestamp" ON "flowpool_15_segment_129"USING btree (timestamp) | f | 2012-12-28 09:45:04.464606+01 | 2012-12-28 09:45:04.464668+01| 2012-12-28 09:15:23.179672+01 | 10.48.168.160 | 35092 42539 | flows | 9530 | 10 | postgres | autovacuum: ANALYZE public.agg_t253_incoming_a6_dst_port_and_proto_f5 | f | 2012-12-28 09:50:19.161336+01| 2012-12-28 09:50:19.161336+01 | 2012-12-28 09:33:37.591954+01 | | 42539 | flows | 9565 | 10 | postgres | autovacuum: VACUUM ANALYZE public.agg_t482_outgoing_a7_src_port_and_proto_f5 | f | 2012-12-28 09:41:47.314486+01 | 2012-12-2809:41:47.314486+01 | 2012-12-28 09:34:07.609676+01 | | 42539 | flows | 9569 | 10 | postgres | autovacuum: ANALYZE public.agg_t1244_outgoing_a6_dst_port_and_proto_f5 | f | 2012-12-28 09:53:34.831039+01| 2012-12-28 09:53:34.831039+01 | 2012-12-28 09:34:37.627542+01 | | 42539 | flows | 9571 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-28 09:53:40.35896+01 | 2012-12-28 09:53:40.35896+01 | 2012-12-28 09:34:44.817718+01| | -1 42539 | flows | 9590 | 16384 | asa | CREATE INDEX "flowpool_29_segment_132_timestamp" ON "flowpool_29_segment_132"USING btree (timestamp) | f | 2012-12-28 09:45:05.258448+01 | 2012-12-28 09:45:05.258499+01| 2012-12-28 09:39:20.604855+01 | 10.48.168.160 | 35128 42539 | flows | 9508 | 16384 | asa | CREATE INDEX "flowpool_26_segment_139_timestamp" ON 
"flowpool_26_segment_139"USING btree (timestamp) | f | 2012-12-28 09:45:04.296674+01 | 2012-12-28 09:45:04.296732+01| 2012-12-28 09:30:17.487318+01 | 10.48.168.160 | 35124 42539 | flows | 9452 | 16384 | asa | CREATE INDEX "flowpool_21_segment_136_timestamp" ON "flowpool_21_segment_136"USING btree (timestamp) | f | 2012-12-28 09:45:06.741216+01 | 2012-12-28 09:45:06.741271+01| 2012-12-28 09:20:20.385564+01 | 10.48.168.160 | 35106 42539 | flows | 9592 | 16384 | asa | CREATE INDEX "flowpool_17_segment_124_timestamp" ON "flowpool_17_segment_124"USING btree (timestamp) | f | 2012-12-28 09:45:05.966266+01 | 2012-12-28 09:45:05.966332+01| 2012-12-28 09:39:20.649273+01 | 10.48.168.160 | 35130 42539 | flows | 9593 | 16384 | asa | CREATE INDEX "flowpool_2_segment_131_timestamp" ON "flowpool_2_segment_131"USING btree (timestamp) | f | 2012-12-28 09:45:03.358737+01 | 2012-12-28 09:45:03.358818+01| 2012-12-28 09:39:20.671335+01 | 10.48.168.160 | 35131 (16 rows) select * from pg_locks : locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction| pid | mode | granted ---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+--------------------------+--------- transactionid | | | | | | 28615251 | | | | 8/140603 | 9364 | ExclusiveLock | t relation | 42539 | 6690376 | | | | | | | | 12/128814 | 9569 | AccessShareLock | t transactionid | | | | | | 28615278 | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | AccessShareLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | ShareLock | t relation | 42539 | 6690372 | | | | | | | | 12/128814 | 9569 | ShareUpdateExclusiveLock | t relation | 42539 | 7252363 | | | | | | | | 1/170450 | 9189 | AccessExclusiveLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | AccessShareLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | ShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | AccessShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | ShareLock | t relation | 42539 | 7252359 | | | | | | | | 8/140603 | 9364 | AccessExclusiveLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | AccessShareLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | ShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | AccessShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | ShareLock | t transactionid | | | | | | 28615238 | | | | 20/36 | 9593 | ExclusiveLock | t transactionid | | | | | | 28615271 | | | | 17/43667 | 9452 | ExclusiveLock | t relation | 42539 | 46103 | | | | | | | | 9/122774 | 9530 | ShareUpdateExclusiveLock | t relation | 42539 | 6690375 | | | | | | | | 12/128814 | 9569 | AccessShareLock | t virtualxid | | | | | 17/43667 | | | | | 17/43667 | 9452 | ExclusiveLock | t relation | 42539 | 50023 | | | | | | | | 10/120119 | 9565 | ShareUpdateExclusiveLock | t relation | 42539 | 7252361 | | | | | | | | 19/311 | 9592 | AccessExclusiveLock | t relation | 42539 | 46107 | | | | | | | | 9/122774 | 9530 | AccessShareLock | t transactionid | | | | | | 28615262 | | | | 15/112000 | 9590 | ExclusiveLock | t transactionid | | | | | | 28615266 | | | | 19/311 | 9592 | ExclusiveLock | t virtualxid | | | | | 12/128814 | | | | | 12/128814 | 9569 | ExclusiveLock | t virtualxid | | | | | 1/170450 | | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 
10969 | | | | | | | | 13/114754 | 9571 | AccessShareLock | t virtualxid | | | | | 19/311 | | | | | 19/311 | 9592 | ExclusiveLock | t relation | 42539 | 7252362 | | | | | | | | 17/43667 | 9452 | AccessExclusiveLock | t virtualxid | | | | | 10/120119 | | | | | 10/120119 | 9565 | ExclusiveLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | AccessShareLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | ShareLock | t relation | 42539 | 7252360 | | | | | | | | 15/112000 | 9590 | AccessExclusiveLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | AccessShareLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | ShareLock | t transactionid | | | | | | 28615245 | | | | 16/99231 | 9508 | ExclusiveLock | t virtualxid | | | | | 13/114754 | | | | | 13/114754 | 9571 | ExclusiveLock | t virtualxid | | | | | 20/36 | | | | | 20/36 | 9593 | ExclusiveLock | t relation | 42539 | 7252358 | | | | | | | | 16/99231 | 9508 | AccessExclusiveLock | t relation | 42539 | 7252357 | | | | | | | | 20/36 | 9593 | AccessExclusiveLock | t relation | 42539 | 50027 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t virtualxid | | | | | 15/112000 | | | | | 15/112000 | 9590 | ExclusiveLock | t virtualxid | | | | | 16/99231 | | | | | 16/99231 | 9508 | ExclusiveLock | t relation | 42539 | 50026 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t virtualxid | | | | | 9/122774 | | | | | 9/122774 | 9530 | ExclusiveLock | t relation | 42539 | 46106 | | | | | | | | 9/122774 | 9530 | AccessShareLock | t virtualxid | | | | | 8/140603 | | | | | 8/140603 | 9364 | ExclusiveLock | t (49 rows) #######################################################################################" #######################################################################################" #######################################################################################" Here is the state 15 minutes after changes of autovacuum thresholds : LOAD 1 minute : 16.8 vmstat 1 : procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 13 834996 73132 118760 10381432 0 0 4532 504 1758 1599 0 0 59 41 0 15 834996 68932 118772 10385756 0 0 4180 0 1748 1518 0 0 55 45 0 13 834996 64240 118780 10390472 0 0 4856 316 1884 1547 0 0 64 36 0 13 834996 59032 118788 10395604 0 0 5088 4 2011 1491 0 0 59 40 0 13 834996 61124 118632 10392852 0 0 4768 164 2012 1449 0 0 62 37 0 12 834996 56164 118640 10397920 0 0 5344 224 1986 1644 0 0 67 33 0 10 834996 56288 118644 10397492 0 0 4436 8 1941 1639 0 0 57 43 0 11 834996 57156 118632 10395980 0 0 4668 188 2102 1629 0 0 64 36 0 11 834996 58616 118604 10394936 0 0 4020 8 1922 1625 0 0 62 38 0 11 834996 59236 118604 10394448 0 0 4272 216 1894 1631 0 0 62 37 0 7 834996 59608 118600 10394336 0 0 4536 8 1829 1627 0 0 66 34 1 5 834996 60104 118620 10393908 0 0 4372 240 1868 1658 0 0 83 17 0 6 834996 55888 118620 10398288 0 0 4368 216 1805 1618 0 0 88 12 0 7 834996 56756 118628 10397168 0 0 4036 180 1873 1710 0 0 86 13 0 14 834996 56228 118636 10397536 0 0 4832 264 2032 1480 0 0 68 31 0 13 834996 56384 118644 10397312 0 0 4864 392 1983 1513 0 0 60 40 0 12 834996 57004 118656 10396336 0 0 4244 248 1889 1528 0 0 67 32 0 11 834996 58244 118648 10395476 0 0 4408 12 2005 1569 0 0 70 30 0 9 834996 59104 118660 10395136 0 0 4276 208 1977 1770 0 0 73 26 0 9 834996 59476 118660 10394616 0 0 4124 12 1957 2033 0 0 76 24 0 7 834996 55384 118668 10398184 0 0 3920 300 1971 2170 0 0 77 23 0 6 834996 55384 118680 
10398520 0 0 4324 4 1919 2664 0 0 81 18 0 7 834996 55616 118684 10397876 0 0 4852 0 1782 2527 0 0 82 18 0 6 834996 55736 118688 10398392 0 0 5108 296 1704 2549 0 0 83 16 0 5 834996 60324 118684 10393980 0 0 5164 0 1836 2658 0 0 88 12 0 5 834996 55856 118696 10398880 0 0 4476 336 1785 2426 0 0 88 11 0 6 834996 55464 118700 10398700 0 0 4388 484 1787 2649 0 0 87 12 0 13 834996 56472 118708 10398832 0 0 4004 80 1758 1631 0 0 73 26 0 13 834996 57336 118680 10396928 0 0 4328 0 1830 1410 0 0 61 39 0 13 834996 57956 118648 10397092 0 0 4264 336 1726 1368 0 0 60 40 0 13 834996 58684 118616 10396608 0 0 4308 384 1783 1448 0 0 60 40 0 13 834996 59924 118600 10394840 0 0 4048 312 1790 1399 0 0 60 40 0 12 834996 55972 118612 10399244 0 0 3924 768 1676 1405 0 0 60 40 0 13 834996 57072 118620 10398020 0 0 4916 0 1758 1406 0 0 60 40 0 12 834996 56452 118628 10398848 0 0 6432 0 1525 1587 0 0 60 40 0 13 834996 57580 118632 10398060 0 0 4108 344 1575 1486 0 0 73 27 0 13 834996 57980 118640 10398132 0 0 4968 0 1684 1474 0 0 61 38 0 13 834996 59096 118652 10397300 0 0 4012 348 1680 1438 0 0 60 40 0 12 834996 59964 118664 10395604 0 0 4212 824 1681 1448 0 0 60 40 0 11 834996 59948 118668 10395780 0 0 5332 432 1712 1524 0 0 64 36 0 12 834996 59948 118668 10395812 0 0 4996 336 1738 1513 0 0 66 34 0 10 834996 56336 118688 10399036 0 0 2992 320 1338 1791 0 0 73 27 0 12 834996 58444 118700 10396076 0 0 3260 1004 1373 1762 0 0 80 20 0 11 834996 58460 118708 10396224 0 0 4104 1064 1365 1809 0 0 87 13 0 8 834996 56476 118720 10397336 0 0 6052 1696 1583 2360 0 0 81 19 0 11 834996 56352 118736 10397312 0 0 5368 964 1524 1898 0 0 83 17 0 7 834996 55840 118744 10397308 0 0 5076 272 1442 2331 0 0 82 18 0 6 834996 58088 118744 10395432 0 0 6560 1020 1534 2994 0 0 84 16 0 4 834996 57592 118744 10396616 0 0 5848 256 1363 2800 0 0 89 10 2 6 834996 59576 118760 10394408 0 0 6936 204 1505 2983 0 0 89 10 ps aux | grep postgres : postgres 9189 1.7 4.7 991916 782084 ? Ds 08:45 1:17 postgres: asa flows 10.48.168.160(35031) CREATE INDEX postgres 9364 1.4 3.6 979872 598028 ? Ds 09:15 0:36 postgres: asa flows 10.48.168.160(35092) CREATE INDEX postgres 9452 1.0 3.0 981196 499464 ? Ds 09:20 0:25 postgres: asa flows 10.48.168.160(35106) CREATE INDEX postgres 9505 0.4 1.5 965564 249164 ? Ss 09:30 0:08 postgres: asa flows 10.48.168.160(35121) idle postgres 9508 0.7 1.5 955892 255448 ? Ds 09:30 0:12 postgres: asa flows 10.48.168.160(35124) CREATE INDEX postgres 9530 0.9 0.2 896732 38412 ? Ds 09:33 0:14 postgres: autovacuum worker process flows postgres 9565 0.6 0.2 1093852 44832 ? Ds 09:34 0:09 postgres: autovacuum worker process flows postgres 9569 0.7 0.2 896732 34680 ? Ds 09:34 0:10 postgres: autovacuum worker process flows root 9570 0.0 0.0 69512 3260 pts/3 S+ 09:34 0:00 /usr/lib/postgresql/8.4/bin/psql -U asa flows postgres 9571 0.0 0.0 887600 6780 ? Ss 09:34 0:00 postgres: asa flows [local] idle postgres 9590 0.4 1.2 965716 211248 ? Ds 09:39 0:05 postgres: asa flows 10.48.168.160(35128) CREATE INDEX postgres 9592 0.4 1.2 963228 199132 ? Ds 09:39 0:05 postgres: asa flows 10.48.168.160(35130) CREATE INDEX postgres 9593 1.2 1.9 962992 327904 ? Ds 09:39 0:14 postgres: asa flows 10.48.168.160(35131) CREATE INDEX root 9634 0.0 0.0 2452 152 pts/2 D+ 09:58 0:00 grep postgres postgres 11433 0.0 0.1 885744 26056 ? S Dec20 0:17 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main-c config_file=/etc/postgresql/8.4/main/postgresql.conf postgres 11435 0.0 4.8 889276 790016 ? 
Ds Dec20 10:44 postgres: writer process postgres 11436 0.0 0.0 886124 704 ? Ss Dec20 1:47 postgres: wal writer process postgres 11437 0.0 0.0 894472 5104 ? Ss Dec20 4:42 postgres: autovacuum launcher process postgres 11438 0.1 0.0 81956 4768 ? Ss Dec20 13:54 postgres: stats collector process postgres 12443 0.0 4.9 903820 810512 ? Ss Dec20 0:58 postgres: asa flows 10.48.168.163(58799) idle postgres 12444 0.0 5.1 945064 844596 ? Ss Dec20 6:09 postgres: asa flows 10.48.168.163(58800) idle postgres 12453 0.0 0.0 887384 6864 ? Ss Dec20 0:00 postgres: asa flows 10.48.168.163(58801) idle postgres 26141 0.3 5.3 976928 885892 ? Ss Dec22 33:47 postgres: asa flows 10.48.168.160(40884) idle select * from pg_stat_activity : datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+----------+-------------------------------------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+---------------+------------- 42539 | flows | 9189 | 16384 | asa | CREATE INDEX "flowpool_31_segment_121_timestamp" ON "flowpool_31_segment_121"USING btree (timestamp) | f | 2012-12-28 09:45:07.277654+01 | 2012-12-28 09:45:07.277704+01| 2012-12-28 08:45:15.923245+01 | 10.48.168.160 | 35031 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-27 21:30:35.767577+01 | 2012-12-20 13:00:09.968633+01| 10.48.168.163 | 58799 42539 | flows | 9505 | 16384 | asa | <IDLE> | f | | 2012-12-28 09:39:38.19554+01 | 2012-12-28 09:30:17.273639+01| 10.48.168.160 | 35121 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-27 15:15:22.785422+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01| 10.48.168.163 | 58801 42539 | flows | 26141 | 16384 | asa | <IDLE> | f | | 2012-12-28 06:19:58.348495+01 | 2012-12-22 06:00:55.453306+01| 10.48.168.160 | 40884 42539 | flows | 9364 | 16384 | asa | CREATE INDEX "flowpool_15_segment_129_timestamp" ON "flowpool_15_segment_129"USING btree (timestamp) | f | 2012-12-28 09:45:04.464606+01 | 2012-12-28 09:45:04.464668+01| 2012-12-28 09:15:23.179672+01 | 10.48.168.160 | 35092 42539 | flows | 9530 | 10 | postgres | autovacuum: ANALYZE public.agg_t254_outgoing_a6_dst_port_and_proto_f5 | f | 2012-12-28 09:56:07.444296+01| 2012-12-28 09:56:07.444296+01 | 2012-12-28 09:33:37.591954+01 | | 42539 | flows | 9565 | 10 | postgres | autovacuum: VACUUM ANALYZE public.agg_t482_outgoing_a7_src_port_and_proto_f5 | f | 2012-12-28 09:41:47.314486+01 | 2012-12-2809:41:47.314486+01 | 2012-12-28 09:34:07.609676+01 | | 42539 | flows | 9569 | 10 | postgres | autovacuum: ANALYZE public.agg_t467_incoming_a7_src_port_and_proto_f5 | f | 2012-12-28 09:57:24.226429+01| 2012-12-28 09:57:24.226429+01 | 2012-12-28 09:34:37.627542+01 | | 42539 | flows | 9571 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-28 09:58:39.762966+01 | 2012-12-28 09:58:39.762966+01 | 2012-12-28 09:34:44.817718+01| | -1 42539 | flows | 9590 | 16384 | asa | CREATE INDEX "flowpool_29_segment_132_timestamp" ON "flowpool_29_segment_132"USING btree (timestamp) | f | 2012-12-28 09:45:05.258448+01 | 2012-12-28 09:45:05.258499+01| 2012-12-28 09:39:20.604855+01 | 10.48.168.160 | 35128 42539 | flows | 9508 | 16384 | asa | CREATE INDEX "flowpool_26_segment_139_timestamp" ON 
"flowpool_26_segment_139"USING btree (timestamp) | f | 2012-12-28 09:45:04.296674+01 | 2012-12-28 09:45:04.296732+01| 2012-12-28 09:30:17.487318+01 | 10.48.168.160 | 35124 42539 | flows | 9452 | 16384 | asa | CREATE INDEX "flowpool_21_segment_136_timestamp" ON "flowpool_21_segment_136"USING btree (timestamp) | f | 2012-12-28 09:45:06.741216+01 | 2012-12-28 09:45:06.741271+01| 2012-12-28 09:20:20.385564+01 | 10.48.168.160 | 35106 42539 | flows | 9592 | 16384 | asa | CREATE INDEX "flowpool_17_segment_124_timestamp" ON "flowpool_17_segment_124"USING btree (timestamp) | f | 2012-12-28 09:45:05.966266+01 | 2012-12-28 09:45:05.966332+01| 2012-12-28 09:39:20.649273+01 | 10.48.168.160 | 35130 42539 | flows | 9593 | 16384 | asa | CREATE INDEX "flowpool_2_segment_131_timestamp" ON "flowpool_2_segment_131"USING btree (timestamp) | f | 2012-12-28 09:45:03.358737+01 | 2012-12-28 09:45:03.358818+01| 2012-12-28 09:39:20.671335+01 | 10.48.168.160 | 35131 (16 rows) select * from pg_locks : locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction| pid | mode | granted ---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+--------------------------+--------- relation | 42539 | 46020 | | | | | | | | 9/122778 | 9530 | AccessShareLock | t transactionid | | | | | | 28615251 | | | | 8/140603 | 9364 | ExclusiveLock | t transactionid | | | | | | 28615278 | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | AccessShareLock | t relation | 42539 | 103506 | | | | | | | | 16/99231 | 9508 | ShareLock | t relation | 42539 | 7252363 | | | | | | | | 1/170450 | 9189 | AccessExclusiveLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | AccessShareLock | t relation | 42539 | 105969 | | | | | | | | 19/311 | 9592 | ShareLock | t relation | 42539 | 46019 | | | | | | | | 9/122778 | 9530 | AccessShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | AccessShareLock | t relation | 42539 | 85230 | | | | | | | | 8/140603 | 9364 | ShareLock | t relation | 42539 | 7252359 | | | | | | | | 8/140603 | 9364 | AccessExclusiveLock | t relation | 42539 | 49754 | | | | | | | | 12/128834 | 9569 | AccessShareLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | AccessShareLock | t relation | 42539 | 87756 | | | | | | | | 1/170450 | 9189 | ShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | AccessShareLock | t relation | 42539 | 82695 | | | | | | | | 17/43667 | 9452 | ShareLock | t transactionid | | | | | | 28615238 | | | | 20/36 | 9593 | ExclusiveLock | t transactionid | | | | | | 28615271 | | | | 17/43667 | 9452 | ExclusiveLock | t relation | 42539 | 46016 | | | | | | | | 9/122778 | 9530 | ShareUpdateExclusiveLock | t virtualxid | | | | | 17/43667 | | | | | 17/43667 | 9452 | ExclusiveLock | t relation | 42539 | 50023 | | | | | | | | 10/120119 | 9565 | ShareUpdateExclusiveLock | t relation | 42539 | 7252361 | | | | | | | | 19/311 | 9592 | AccessExclusiveLock | t relation | 42539 | 49750 | | | | | | | | 12/128834 | 9569 | ShareUpdateExclusiveLock | t transactionid | | | | | | 28615262 | | | | 15/112000 | 9590 | ExclusiveLock | t transactionid | | | | | | 28615266 | | | | 19/311 | 9592 | ExclusiveLock | t virtualxid | | | | | 12/128834 | | | | | 12/128834 | 9569 | ExclusiveLock | t virtualxid | | | | | 1/170450 | | | | | 1/170450 | 9189 | ExclusiveLock | t relation | 42539 | 10969 | 
| | | | | | | 13/114756 | 9571 | AccessShareLock | t virtualxid | | | | | 19/311 | | | | | 19/311 | 9592 | ExclusiveLock | t relation | 42539 | 7252362 | | | | | | | | 17/43667 | 9452 | AccessExclusiveLock | t virtualxid | | | | | 13/114756 | | | | | 13/114756 | 9571 | ExclusiveLock | t virtualxid | | | | | 10/120119 | | | | | 10/120119 | 9565 | ExclusiveLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | AccessShareLock | t relation | 42539 | 90444 | | | | | | | | 20/36 | 9593 | ShareLock | t relation | 42539 | 7252360 | | | | | | | | 15/112000 | 9590 | AccessExclusiveLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | AccessShareLock | t relation | 42539 | 93051 | | | | | | | | 15/112000 | 9590 | ShareLock | t transactionid | | | | | | 28615245 | | | | 16/99231 | 9508 | ExclusiveLock | t virtualxid | | | | | 20/36 | | | | | 20/36 | 9593 | ExclusiveLock | t relation | 42539 | 7252358 | | | | | | | | 16/99231 | 9508 | AccessExclusiveLock | t relation | 42539 | 7252357 | | | | | | | | 20/36 | 9593 | AccessExclusiveLock | t virtualxid | | | | | 9/122778 | | | | | 9/122778 | 9530 | ExclusiveLock | t relation | 42539 | 50027 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t virtualxid | | | | | 15/112000 | | | | | 15/112000 | 9590 | ExclusiveLock | t virtualxid | | | | | 16/99231 | | | | | 16/99231 | 9508 | ExclusiveLock | t relation | 42539 | 50026 | | | | | | | | 10/120119 | 9565 | RowExclusiveLock | t relation | 42539 | 49753 | | | | | | | | 12/128834 | 9569 | AccessShareLock | t virtualxid | | | | | 8/140603 | | | | | 8/140603 | 9364 | ExclusiveLock | t (49 rows) Best regards, Baptiste. --- Baptiste LHOSTE blhoste@alaloop.com ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart Téléphone : +33 (0) 5 59 41 51 10 www.alaloop.com
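For repeated captures like the ones above, a compact snapshot of just the active backends can be easier to take at intervals. A minimal sketch for 8.4 (where pg_stat_activity still exposes procpid and current_query):

-- one line per non-idle backend, with a capture timestamp
SELECT now() AS captured_at,
       procpid,
       waiting,
       xact_start,
       current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY xact_start;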
Baptiste LHOSTE wrote:

>> Just so we know how to interpret that, how many minutes, hours,
>> or days did you wait to see whether it would ever end?
>
> I have waited for 15 minutes in this state. I can not wait more
> time without losing some data for our client.

Thanks. I wasn't suggesting you increase the duration; I just wanted perspective on whether it could be the result of unusually long run times rather than blocking, and how severe that increase was known to be.

>> If it doesn't cause too much pain to let it get into this state
>> for a few minutes, it might help diagnose the issue if you could
>> start `vmstat 1` before you let it get into this state, and
>> capture `ps aux | grep postgres`, pg_stat_activity, and pg_locks
>> at intervals while it is in this state. Looking at all of the
>> above might suggest a cause. If we can find the cause, we can
>> almost certainly fix it.
>
> [information captured as requested]

Thank you very much. With that much information we should be much better able to get a sense of the nature of the problem. It will take a while to sift through it and properly analyze it. But even on a first pass I think there is a lot of valuable information that jumps out:

(1) There is no sign of unusual pressure on OS memory; OS cache usage remains stable from before the incident to the end of the monitored period.

(2) Disk wait time climbed early in the incident and remained high to the end.

(3) Disk read volume *before* the incident shows a peak of about the same size as during the incident, with somewhat lower disk wait time. (Do we know what was causing that? It ended before the other results were captured.)

(4) Not a single incident of blocking was captured in any of the lists of pg_stat_activity or pg_locks.

(5) The TRUNCATE statements completed and moved on to CREATE INDEX, which continued to accrue CPU time during the episode.

(6) Of the three autovacuum worker processes, two were running just an ANALYZE on every sample, and were moving on from table to table.

(7) One autovacuum process was running VACUUM ANALYZE against a single table for the entire duration of the incident. It was slowly accumulating CPU time during the incident.

On the face of it, it appears that with your normal production settings your storage system is right at the edge of what it can handle, and making autovacuum more aggressive to try to keep the statistics on the second type of table more up-to-date is pushing the load past its capacity. You might be able to change the autovacuum thresholds and scale factors without changing autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit (or making smaller changes to them). You could probably throw hardware at it to fix the problem. Even with settings which work fine when everything is up-to-date, you may experience some impact on production when you first turn it on and autovacuum is attempting to "catch up".

I'm not actually clear, when I look back, on what the problem is that you are trying to solve -- you say that a particular type of query is running for 2 to 3 minutes, and note that statistics on a particular type of table are only being re-sampled once every 5 to 6 days. It's not clear that more frequent statistical sampling of the tables would change the plans. Perhaps you should post one such query to the performance list, with supporting data, and see whether someone can suggest a way to speed that query.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

-Kevin
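As a concrete illustration of the threshold suggestion above -- a minimal sketch, using tableName as a placeholder for one of the aggregate tables; the numbers are the ones from earlier in the thread:

-- Keep the per-table trigger thresholds aggressive so the aggregate
-- tables are analyzed often...
ALTER TABLE tableName SET (
    autovacuum_vacuum_threshold = 20,
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_threshold = 10,
    autovacuum_analyze_scale_factor = 0.005
);

-- ...but remove the per-table cost settings, so the workers fall back
-- to the server-wide cost-based throttling and compete less with the
-- TRUNCATE / CREATE INDEX cycle for disk I/O.
ALTER TABLE tableName RESET (
    autovacuum_vacuum_cost_delay,
    autovacuum_vacuum_cost_limit
);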
Hi,

> Thanks. I wasn't suggesting you increase the duration; I just
> wanted perspective on whether it could be the result of unusually
> long run times rather than blocking, and how severe that increase
> was known to be.

> Thank you very much. With that much information we should be much
> better able to get a sense of the nature of the problem. It will
> take a while to sift through it and properly analyze it. But even
> on a first pass I think there is a lot of valuable information that
> jumps out:

> (1) There is no sign of unusual pressure on OS memory; OS cache
> usage remains stable from before the incident to the end of the
> monitored period.

> (2) Disk wait time climbed early in the incident and remained high
> to the end.

Just for information, our server has 4 SAS disks in RAID 0. I do not remember if I have already mentioned it.

> (3) Disk read volume *before* the incident shows a peak of about
> the same size as during the incident, with somewhat lower disk wait
> time. (Do we know what was causing that? It ended before the other
> results were captured.)

Maybe it is the consequence of step 4 of our process: "we copy new data (around 150 000 rows) in the target partition". These data are read from files and then copied into the database, which could explain the peak in disk read volume.

> (4) Not a single incident of blocking was captured in any of the
> lists of pg_stat_activity or pg_locks.

> (5) The TRUNCATE statements completed and moved on to CREATE INDEX,
> which continued to accrue CPU time during the episode.

> (6) Of the three autovacuum worker processes, two were running just
> an ANALYZE on every sample, and were moving on from table to table.

> (7) One autovacuum process was running VACUUM ANALYZE against a
> single table for the entire duration of the incident. It was slowly
> accumulating CPU time during the incident.

> On the face of it, it appears that with your normal production
> settings your storage system is right at the edge of what it can
> handle, and making autovacuum more aggressive to try to keep the
> statistics on the second type of table more up-to-date is pushing
> the load past its capacity. You might be able to change the
> autovacuum thresholds and scale factors without changing
> autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit (or
> making smaller changes to them). You could probably throw hardware
> at it to fix the problem. Even with settings which work fine when
> everything is up-to-date, you may experience some impact on
> production when you first turn it on and autovacuum is attempting
> to "catch up".

We will try to change the autovacuum thresholds and scale factors without changing autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit.

> I'm not actually clear, when I look back, on what the problem is
> that you are trying to solve -- you say that a particular type of
> query is running for 2 to 3 minutes, and note that statistics on a
> particular type of table are only being re-sampled once every 5 to
> 6 days. It's not clear that more frequent statistical sampling of
> the tables would change the plans. Perhaps you should post one such
> query to the performance list, with supporting data, and see
> whether someone can suggest a way to speed that query.
> http://wiki.postgresql.org/wiki/SlowQueryQuestions

These queries are very simple: delete from table where start_date < availableTimestamp. We performed an EXPLAIN to try to understand what could be the problem.
The query planner said that the index on start_date could not be used because it was not up-to-date. That is why we decided to increase the frequency of the autovacuum process.

How a server (8 CPUs) with a 0.56 load average over the last 15 minutes could not handle 3 autovacuum processes is very confusing to me.

Thanks anyway for your time and help. Happy New Year's Eve.

Best regards,

Baptiste

---
Baptiste LHOSTE blhoste@alaloop.com ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart Téléphone : +33 (0) 5 59 41 51 10 www.alaloop.com
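A quick way to test that hypothesis is to refresh the statistics by hand and re-run the EXPLAIN, since a manual ANALYZE has the same effect on the planner as an autovacuum ANALYZE. A sketch, with agg_example standing in for one of the aggregate tables and an illustrative cutoff value:

-- refresh planner statistics for this one table, outside autovacuum
ANALYZE agg_example;

-- then see whether the planner now chooses the index on start_date
EXPLAIN DELETE FROM agg_example WHERE start_date < 1353317127200;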
Baptiste LHOSTE wrote:

> These queries are very simple: delete from table where
> start_date < availableTimestamp. We performed an EXPLAIN to try
> to understand what could be the problem. The query planner said
> that the index on start_date could not be used because it was not
> up-to-date.

Could you show the output you base that on?

> How a server (8 CPUs) with a 0.56 load average over the last 15
> minutes could not handle 3 autovacuum processes is very confusing
> to me.

When the bottleneck is disk I/O, the CPU count is not going to help. Threads which have not been context-switched out, but are sitting waiting for the electric motors to drag the disk arm to the right cylinder, probably don't count against the load average.

Note that while three autovacuum processes normally don't cause any grief, you seem to be near the tipping point anyway, so it may be a case of "the straw that broke the camel's back", especially since you made autovacuum many times more resource-hungry than it is by default.

-Kevin
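For what it's worth, actual lock blocking would show up with a query along these lines -- a sketch written for 8.4, where pg_stat_activity still uses procpid and current_query:

-- pairs each ungranted lock with a granted lock on the same object
SELECT bl.pid           AS blocked_pid,
       a.current_query  AS blocked_query,
       kl.pid           AS blocking_pid,
       ka.current_query AS blocking_query
FROM pg_locks bl
JOIN pg_stat_activity a ON a.procpid = bl.pid
JOIN pg_locks kl
  ON  kl.locktype = bl.locktype
  AND kl.database      IS NOT DISTINCT FROM bl.database
  AND kl.relation      IS NOT DISTINCT FROM bl.relation
  AND kl.transactionid IS NOT DISTINCT FROM bl.transactionid
  AND kl.virtualxid    IS NOT DISTINCT FROM bl.virtualxid
  AND kl.pid <> bl.pid
JOIN pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted
  AND kl.granted;

An empty result, matching the captures earlier in this thread, points away from lock waits and toward I/O saturation.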
Hi,

> Could you show the output you base that on?

EXPLAIN on a table which was recently analyzed by the autovacuum process:

explain delete from agg_t1343_incoming_a3_src_net_and_dst_net_f5 where start_date < 1353317127200;
                                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using agg_t1343_incoming_a3_src_net_and_dst_net_f5_start_date on agg_t1343_incoming_a3_src_net_and_dst_net_f5  (cost=0.00..9.01 rows=41 width=6)
   Index Cond: (start_date < 1353317127200::bigint)
(2 rows)

Here is the duration and the number of deleted rows for the previous query:

Query duration: 4s 538ms for 15960 rows deleted

EXPLAIN on a table which was analyzed four days ago by the autovacuum process:

explain delete from agg_t1187_incoming_a6_dst_port_and_proto_f5 where start_date < 1353317127200;
                                            QUERY PLAN
--------------------------------------------------------------------------------------------------
 Seq Scan on agg_t1187_incoming_a6_dst_port_and_proto_f5  (cost=0.00..58063.86 rows=3269 width=6)
   Filter: (start_date < 1353317127200::bigint)
(2 rows)

Here is the duration and the number of deleted rows for the previous query:

Query duration: 52s 368ms for 21130 rows deleted

The first EXPLAIN shows that an index scan is used, but the second one shows a sequential scan. That is why we concluded that we had to keep our index up-to-date.

> When the bottleneck is disk I/O, the CPU count is not going to
> help. Threads which have not been context-switched out, but are
> sitting waiting for the electric motors to drag the disk arm to
> the right cylinder, probably don't count against the load average.

My mistake: when I said that the server had a 0.56 load over the last 15 minutes, it was before the change of the autovacuum thresholds, because after that the load just blew up.

> Note that while three autovacuum processes normally don't cause any
> grief, you seem to be near the tipping point anyway, so it may be a
> case of "the straw that broke the camel's back", especially since
> you made autovacuum many times more resource-hungry than it is by
> default.

We tried to change the autovacuum thresholds without changing autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit. The server managed to handle the work for 25 minutes, then it just started to take too much time on CREATE INDEX statements.

I have also noticed that when we perform the full process (with deletion of old data), CREATE INDEX and TRUNCATE take too much time. At the same time the autovacuum process seems to perform more tasks on the second kind of tables.

Best regards,

Baptiste

---
Baptiste LHOSTE blhoste@alaloop.com ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart Téléphone : +33 (0) 5 59 41 51 10 www.alaloop.com
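As a cross-check on how stale the statistics really are, the cumulative statistics views can show the tuple counts and the last (auto)analyze times side by side. A sketch using the two table names from the EXPLAINs above:

SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('agg_t1343_incoming_a3_src_net_and_dst_net_f5',
                  'agg_t1187_incoming_a6_dst_port_and_proto_f5');

A large n_dead_tup together with an old last_autoanalyze on the second table would support the stale-statistics explanation for the sequential scan.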
Hi,

We are still trying to fix our issue and we found the following logs:

2013-01-17 09:55:01 CET LOG: automatic vacuum of table "flows.public.agg_t1213_incoming_a6_dst_port_and_proto_f5": index scans: 1
	pages: 0 removed, 136547 remain
	tuples: 0 removed, 4044679 remain
	system usage: CPU 3.21s/5.21u sec elapsed 2005.15 sec

2013-01-17 10:12:50 CET LOG: automatic vacuum of table "flows.public.agg_t1214_outgoing_a1_src_net_f5": index scans: 1
	pages: 59886 removed, 37828 remain
	tuples: 1645338 removed, 3750874 remain
	system usage: CPU 3.01s/4.08u sec elapsed 1015.75 sec

How is it possible that the first task took twice the time of the second without any tuples to remove?

Baptiste
Baptiste LHOSTE wrote:

> We are still trying to fix our issue and we found the following logs:
>
> 2013-01-17 09:55:01 CET LOG: automatic vacuum of table
> "flows.public.agg_t1213_incoming_a6_dst_port_and_proto_f5": index scans: 1
> pages: 0 removed, 136547 remain
> tuples: 0 removed, 4044679 remain
> system usage: CPU 3.21s/5.21u sec elapsed 2005.15 sec
>
> 2013-01-17 10:12:50 CET LOG: automatic vacuum of table
> "flows.public.agg_t1214_outgoing_a1_src_net_f5": index scans: 1
> pages: 59886 removed, 37828 remain
> tuples: 1645338 removed, 3750874 remain
> system usage: CPU 3.01s/4.08u sec elapsed 1015.75 sec
>
> How is it possible that the first task took twice the time of the second without any tuples to remove?

Maybe less of the first table was cached. Maybe contention on the table or the I/O system in general varied between the two VACUUMs.

Yours,
Laurenz Albe
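For perspective on those two log entries, the effective scan rates can be computed from the page counts -- a back-of-the-envelope sketch, assuming the default 8 kB block size:

-- first vacuum: 136547 pages in 2005.15 s  => about 0.53 MB/s
SELECT round(136547::numeric * 8192 / 2005.15 / (1024 * 1024), 2) AS first_mb_per_sec;

-- second vacuum: (59886 + 37828) pages in 1015.75 s  => about 0.75 MB/s
SELECT round((59886 + 37828)::numeric * 8192 / 1015.75 / (1024 * 1024), 2) AS second_mb_per_sec;

Both rates are far below what the disks can deliver sequentially, which is what one would expect when cost-based vacuum delay (on by default for autovacuum) plus the caching and contention effects mentioned above, rather than the number of tuples removed, dominate the elapsed time.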