Re: Autovacuum issues with truncate and create index ... - Mailing list pgsql-admin
From: Baptiste LHOSTE
Subject: Re: Autovacuum issues with truncate and create index ...
Msg-id: 1100168005.2688396.1356019183074.JavaMail.root@alaloop.com
In response to: Re: Autovacuum issues with truncate and create index ... ("Kevin Grittner" <kgrittn@mail.com>)
Responses: Re: Autovacuum issues with truncate and create index ...
List: pgsql-admin
> Would it be possible for you to create such a situation and capture
> the contents of pg_stat_activity and pg_locks while it is going on?
> What messages related to autovacuum or deadlocks do you see in the
> server log while this is going on?

Before the change we could see automatic ANALYZE logs only for the first kind of tables. After the change I can see automatic ANALYZE logs for both kinds of tables.

Here is the content of pg_stat_activity during the issue:

 datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
-------+---------+---------+----------+---------+---------------+---------+------------+-------------+---------------+-------------+-------------
 42539 | flows | 13792 | 16384 | asa | TRUNCATE flowpool_24_segment_221 | f | 2012-12-20 16:55:01.346986+01 | 2012-12-20 16:55:01.347051+01 | 2012-12-20 16:50:22.073586+01 | 10.48.168.160 | 33550
 42539 | flows | 12443 | 16384 | asa | <IDLE> | f | | 2012-12-20 15:51:32.787359+01 | 2012-12-20 13:00:09.968633+01 | 10.48.168.163 | 58799
 42539 | flows | 13804 | 10 | postgres | autovacuum: ANALYZE public.agg_t1406_outgoing_a3_src_net_and_dst_net_f5 | f | 2012-12-20 16:56:00.919196+01 | 2012-12-20 16:56:00.919196+01 | 2012-12-20 16:51:27.079612+01 | |
 42539 | flows | 12444 | 16384 | asa | <IDLE> | f | | 2012-12-20 16:09:53.123326+01 | 2012-12-20 13:00:16.82323+01 | 10.48.168.163 | 58800
 42539 | flows | 12453 | 16384 | asa | <IDLE> | f | | 2012-12-20 13:01:15.019182+01 | 2012-12-20 13:01:15.015847+01 | 10.48.168.163 | 58801
 42539 | flows | 13629 | 16384 | asa | TRUNCATE flowpool_15_segment_216 | f | 2012-12-20 16:55:01.783653+01 | 2012-12-20 16:55:01.783725+01 | 2012-12-20 16:30:22.331191+01 | 10.48.168.160 | 33495
 42539 | flows | 13793 | 16384 | asa | TRUNCATE flowpool_19_segment_215 | f | 2012-12-20 16:55:01.296588+01 | 2012-12-20 16:55:01.297449+01 | 2012-12-20 16:50:22.095245+01 | 10.48.168.160 | 33551
 42539 | flows | 13822 | 10 | postgres | autovacuum: ANALYZE public.agg_t449_incoming_a3_src_net_and_dst_net_f5 | f | 2012-12-20 16:53:34.720815+01 | 2012-12-20 16:53:34.720815+01 | 2012-12-20 16:51:57.097049+01 | |
 42539 | flows | 13658 | 16384 | asa | TRUNCATE flowpool_2_segment_218 | f | 2012-12-20 16:55:01.606198+01 | 2012-12-20 16:55:01.606266+01 | 2012-12-20 16:35:30.327792+01 | 10.48.168.160 | 33505
 42539 | flows | 13680 | 16384 | asa | TRUNCATE flowpool_16_segment_228 | f | 2012-12-20 16:55:01.710645+01 | 2012-12-20 16:55:01.713793+01 | 2012-12-20 16:40:21.689958+01 | 10.48.168.160 | 33512
 42539 | flows | 13870 | 16384 | asa | select * from pg_stat_activity; | f | 2012-12-20 16:56:05.58373+01 | 2012-12-20 16:56:05.58373+01 | 2012-12-20 16:56:03.432322+01 | | -1
 42539 | flows | 13770 | 16384 | asa | TRUNCATE flowpool_31_segment_208 | f | 2012-12-20 16:55:01.782363+01 | 2012-12-20 16:55:01.791481+01 | 2012-12-20 16:45:24.082223+01 | 10.48.168.160 | 33544
 42539 | flows | 13771 | 16384 | asa | TRUNCATE flowpool_17_segment_211 | f | 2012-12-20 16:55:01.729515+01 | 2012-12-20 16:55:01.736037+01 | 2012-12-20 16:45:24.147856+01 | 10.48.168.160 | 33545
 42539 | flows | 13849 | 10 | postgres | autovacuum: ANALYZE public.agg_t1251_incoming_a7_src_port_and_proto_f5 | f | 2012-12-20 16:56:01.00984+01 | 2012-12-20 16:56:01.00984+01 | 2012-12-20 16:52:27.111586+01 | |
 42539 | flows | 13795 | 16384 | asa | TRUNCATE flowpool_22_segment_217 | f | 2012-12-20 16:55:01.342442+01 | 2012-12-20 16:55:01.345095+01 | 2012-12-20 16:50:22.159256+01 | 10.48.168.160 | 33553

Here is the content of pg_locks during the issue:

 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+--------
 transactionid | | | | | | 27720953 | | | | 12/0 | 13770 | ExclusiveLock | t
 virtualxid | | | | | 11/3140 | | | | | 11/3140 | 13870 | ExclusiveLock | t
 relation | 42539 | 106752 | | | | | | | | 13/0 | 13771 | ShareLock | t
 relation | 42539 | 106752 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
 relation | 42539 | 88542 | | | | | | | | 12/0 | 13770 | ShareLock | t
 relation | 42539 | 88542 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
 transactionid | | | | | | 27720938 | | | | 1/0 | 13792 | ExclusiveLock | t
 transactionid | | | | | | 27720969 | | | | 7/2476 | 13793 | ExclusiveLock | t
 virtualxid | | | | | 13/2821 | | | | | 13/0 | 13771 | ExclusiveLock | t
 relation | 42539 | 106755 | | | | | | | | 13/0 | 13771 | ShareLock | t
 relation | 42539 | 106755 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
 relation | 42539 | 112041 | | | | | | | | 1/0 | 13792 | ShareLock | t
 relation | 42539 | 112041 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
 virtualxid | | | | | 1/15720 | | | | | 1/0 | 13792 | ExclusiveLock | t
 virtualxid | | | | | 7/2476 | | | | | 7/2476 | 13793 | ExclusiveLock | t
 relation | 42539 | 88544 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
 transactionid | | | | | | 27720971 | | | | 9/2935 | 13658 | ExclusiveLock | t
 relation | 42539 | 10969 | | | | | | | | 11/3140 | 13870 | AccessShareLock | t
 relation | 42539 | 6697688 | | | | | | | | 14/2459 | 13849 | AccessShareLock | t
 relation | 42539 | 49499 | | | | | | | | 8/15669 | 13822 | AccessShareLock | t
 relation | 42539 | 88539 | | | | | | | | 12/0 | 13770 | ShareLock | t
 relation | 42539 | 88539 | | | | | | | | 12/0 | 13770 | AccessExclusiveLock | t
 relation | 42539 | 112038 | | | | | | | | 1/0 | 13792 | ShareLock | t
 relation | 42539 | 112038 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
 relation | 42539 | 106757 | | | | | | | | 13/0 | 13771 | AccessExclusiveLock | t
 virtualxid | | | | | 6/2974 | | | | | 6/2974 | 13629 | ExclusiveLock | t
 relation | 42539 | 50023 | | | | | | | | 3/3137 | 13804 | ShareUpdateExclusiveLock | t
 virtualxid | | | | | 3/3137 | | | | | 3/3137 | 13804 | ExclusiveLock | t
 relation | 42539 | 80808 | | | | | | | | 7/2476 | 13793 | RowExclusiveLock | t
 transactionid | | | | | | 27720951 | | | | 13/0 | 13771 | ExclusiveLock | t
 virtualxid | | | | | 9/2935 | | | | | 9/2935 | 13658 | ExclusiveLock | t
 relation | 42539 | 49492 | | | | | | | | 8/15669 | 13822 | ShareUpdateExclusiveLock | t
 virtualxid | | | | | 15/2206 | | | | | 15/2206 | 13795 | ExclusiveLock | t
 relation | 42539 | 91227 | | | | | | | | 9/2935 | 13658 | RowExclusiveLock | t
 relation | 42539 | 49498 | | | | | | | | 8/15669 | 13822 | AccessShareLock | t
 relation | 42539 | 112043 | | | | | | | | 1/0 | 13792 | AccessExclusiveLock | t
 relation | 42539 | 6697685 | | | | | | | | 14/2459 | 13849 | ShareUpdateExclusiveLock | t
 virtualxid | | | | | 14/2459 | | | | | 14/2459 | 13849 | ExclusiveLock | t
 relation | 42539 | 50026 | | | | | | | | 3/3137 | 13804 | RowExclusiveLock | t
 relation | 42539 | 6697689 | | | | | | | | 14/2459 | 13849 | AccessShareLock | t
 relation | 42539 | 86013 | | | | | | | | 6/2974 | 13629 | RowExclusiveLock | t
 virtualxid | | | | | 8/15669 | | | | | 8/15669 | 13822 | ExclusiveLock | t
 virtualxid | | | | | 12/14901 | | | | | 12/0 | 13770 | ExclusiveLock | t
 transactionid | | | | | | 27720970 | | | | 6/2974 | 13629 | ExclusiveLock | t
 relation | 42539 | 50027 | | | | | | | | 3/3137 | 13804 | RowExclusiveLock | t
(45 rows)

> Would it be possible to update your 8.4 installation to the latest
> bug fix (currently 8.4.15) to rule out the influence of any bugs
> which have already been fixed?

Is there a way to upgrade without having to dump all the data and restore it after the upgrade?

Best regards,

Baptiste.

---
Baptiste LHOSTE
blhoste@alaloop.com

ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart
Phone: +33 (0) 5 59 41 51 10
www.alaloop.com
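[Editor's note: for future captures of this kind, a query along the following lines (not from the thread, and the join is an assumption about what is wanted) can narrow the snapshot to locks that have not been granted, pairing each waiting backend with the statement it is running. It uses the 8.4-era catalog columns, where pg_stat_activity exposes procpid and current_query rather than the pid and query columns of later releases.]

```sql
-- Sketch: show only ungranted locks together with the waiting
-- backend's current statement (PostgreSQL 8.4 catalog columns).
SELECT l.pid,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       a.current_query,
       a.waiting
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted
ORDER BY l.pid;
```

In the snapshot above every lock shows granted = t and no backend shows waiting = t, so such a query would return no rows at this instant; it becomes useful the moment a TRUNCATE and an autovacuum worker actually collide.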
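[Editor's note on the upgrade question: minor releases within the same major version (8.4.x) share the on-disk data format, so moving to 8.4.15 only requires installing the new binaries and restarting the server; no dump/restore is needed. A rough sketch for a Debian-style system follows; the package name and init script path are assumptions to adapt to the actual platform.]

```shell
# Sketch (Debian/Ubuntu-style packaging assumed; adjust for your system).
# Minor releases are binary-compatible with the existing data directory.
sudo apt-get update
sudo apt-get install postgresql-8.4    # pulls the latest 8.4.x point release
sudo /etc/init.d/postgresql restart    # new binaries run against the same data
```

A dump/restore (or pg_upgrade) is only required when crossing major versions, e.g. 8.4 to 9.x.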