Thread: How to avoid Force Autovacuum

How to avoid Force Autovacuum

From
Vishalakshi Navaneethakrishnan
Date:
Hi All,

We have one production database server with 6 databases, running Postgres 9.2.1.

These are my vacuum settings in the production database:

#autovacuum = on                        # Enable autovacuum subprocess?  'on'
                                        # requires track_counts to also be on.
log_autovacuum_min_duration = 0
autovacuum_vacuum_threshold = 50000     # min number of row updates before
                                        # vacuum
autovacuum_analyze_threshold = 50000  
maintenance_work_mem = 2GB              # min 1MB

#################

Once a day we execute "VACUUM FREEZE ANALYZE" to prevent transaction ID wraparound,
using this command:
vacuumdb -F -z -h localhost -U postgres dbname

Even so, autovacuum sometimes runs on the databases and increases the load very much (above 200), and the server becomes unresponsive.

I have seen the autovacuum worker process in top.

While querying pg_stat_activity as the postgres user, I saw the PID of the autovacuum process in the result, but the query field was empty.

When I checked pg_stat_all_tables, I found that last_autoanalyze is not null for one of the tables.

So I am guessing this was the auto-analyze query.

But why did it increase the load so much?

How can I avoid the autovacuum process? Autovacuum also ran in the template0 database, but I cannot connect to that database, since it has datallowconn = f.

If I update that value to true and then execute "VACUUM FREEZE ANALYZE", will that cause any problems?

Since template0 has no activity, why does age(datfrozenxid) increase so quickly and reach the threshold value?

Do I need to disable autovacuum for particular tables to avoid a forced autovacuum?

Can you please advise in this case?
--
Best Regards,
Vishalakshi.N

Re: How to avoid Force Autovacuum

From
Sergey Konoplev
Date:
On Wed, Aug 7, 2013 at 2:46 AM, Vishalakshi Navaneethakrishnan
<nvishalakshi@sirahu.com> wrote:
> Daily once we are executing "Vacuum Freeze analyze" -- To prevent
> transaction id wraparound
> using this command
> vacuumdb -F -z -h localhost -U postgres dbname

It is not necessary to do that. Autovacuum does it itself where and when needed.

> Even sometimes autovacuum running on the databases and increase the load
> (Above 200) very much and the server was unresponsive
>
> I have seen the autovacum worker process in top command,
> While i executing pg_stat_activity as postgres user, i have seen the pid of
> autovacuum process in the result  but the query filed is "Empty"

Was autovacuum the only process that you saw in pg_stat_activity?

What OS do you use?

Do you use huge pages?

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Re: How to avoid Force Autovacuum

From
Vishalakshi Navaneethakrishnan
Date:
Hi,

> vacuumdb -F -z -h localhost -U postgres dbname
>
> It is not necessary to do that. Autovacuum does it itself where and when needed.

If we do not do this, autovacuum eventually kicks in anyway; the load was very high at that time and the server became unresponsive. To avoid this, we execute VACUUM FREEZE ANALYZE every day.

We are using Centos

cat /etc/issue
CentOS release 6.3 (Final)

cat /proc/meminfo |grep Hugepagesize
Hugepagesize:       2048 kB







--
Best Regards,
Vishalakshi.N

Re: How to avoid Force Autovacuum

From
Sergey Konoplev
Date:
On Wed, Aug 7, 2013 at 9:32 PM, Vishalakshi Navaneethakrishnan
<nvishalakshi@sirahu.com> wrote:
> cat /etc/issue
> CentOS release 6.3 (Final)
>
> cat /proc/meminfo |grep Hugepagesize
> Hugepagesize:       2048 kB

Please show what the commands below print.

cat /proc/meminfo | grep -i huge
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Re: How to avoid Force Autovacuum

From
Vishalakshi Navaneethakrishnan
Date:
cat /proc/meminfo | grep -i huge
AnonHugePages:  31576064 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled 
[always] never

 cat /sys/kernel/mm/redhat_transparent_hugepage/defrag 
[always] never
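
The [always] settings above mean transparent huge pages and THP defrag are both active, a known source of latency stalls under database workloads. A minimal sketch for turning them off on this RHEL 6-style kernel, assuming THP is implicated (paths taken from the output above; requires root):

```shell
# Sketch only, assuming THP is the culprit: disable THP and its defrag
# on a RHEL 6-style kernel (requires root; paths from the output above).
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
```

These commands do not survive a reboot; they would need to be repeated from a boot script.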







--
Best Regards,
Vishalakshi.N

Re: How to avoid Force Autovacuum

From
Kevin Grittner
Date:
Vishalakshi Navaneethakrishnan <nvishalakshi@sirahu.com> wrote:

> We have one production database server , having 6 DBs, Postgres
> 9.2.1 version.

There were some fixes for autovacuum problems in 9.2.3.  Some other
fixes will be coming when 9.2.5 is released.  Many of your problems
are likely to go away by staying up-to-date on minor releases.

http://www.postgresql.org/support/versioning/

> autovacuum_vacuum_threshold = 50000

By setting this so high, you are increasing the amount of work
autovacuum will need to do when it does work on a table.  A smaller
value tends to give less "bursty" performance.  Also, any small,
frequently-updated tables may bloat quite a bit in 50000
transactions.

> maintenance_work_mem = 2GB

Each autovacuum worker will allocate this much RAM.  If all of your
autovacuum workers wake up at once, would losing 2GB for each one
from your cache cause a significant performance hit?  (Since you
didn't say how much RAM the machine has, it's impossible to tell.)

> How can i avoid the autovacuum process ?

Vacuuming is a necessary part of PostgreSQL operations, and
autovacuum is almost always part of a good vacuum plan.  The bug
fixes in 9.2.3 will help avoid some of the most extreme problems,
but you might also want to reduce the threshold so that it has less
work to do each time it wakes up, reducing the impact.
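
As a hedged sketch of that per-table tuning (the table name is hypothetical; the storage parameters are standard per-table autovacuum options):

```sql
-- Hypothetical small, frequently-updated table: give it lower
-- per-table thresholds so each autovacuum pass has less to do.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_threshold = 50,
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_threshold = 50
);
```

Per-table settings override the global values only for that table, so hot tables can be vacuumed often and cheaply while the global configuration stays untouched.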

> And also autovacuum executed in the template0 database also.

What does running this in psql show?:

\x on
select * from pg_database where datname = 'template0';
select * from pg_stat_database where datname = 'template0';
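
To see how close each database (including template0) is to the autovacuum_freeze_max_age limit that forces an anti-wraparound autovacuum, a query along these lines may also help:

```sql
-- Databases nearest to a forced anti-wraparound vacuum come first.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```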

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How to avoid Force Autovacuum

From
Sergey Konoplev
Date:
On Thu, Aug 8, 2013 at 11:18 AM, Kevin Grittner <kgrittn@ymail.com> wrote:
> There were some fixes for autovacuum problems in 9.2.3.  Some other
> fixes will be coming when 9.2.5 is released.  Many of your problems
> are likely to go away by staying up-to-date on minor releases.
>
> By setting this so high, you are increasing the amount of work
> autovacuum will need to do when it does work on a table.  A smaller
> value tends to give less "bursty" performance.  Also, any small,
> frequently-updated tables may bloat quite a bit in 50000
> transactions.
>
> Each autovacuum worker will allocate this much RAM.  If all of your
> autovacuum workers wake up at once, would losing 2GB for each one
> from your cache cause a significant performance hit?  (Since you
> didn't say how much RAM the machine has, it's impossible to tell.)
>
> What does running this in psql show?:
>
> \x on
> select * from pg_database where datname = 'template0';
> select * from pg_stat_database where datname = 'template0';

In addition to Kevin's notes, I think it is also worth looking at the
result of the query below.

select name, setting from pg_settings
where name ~ 'vacuum' and setting <> reset_val;

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Re: How to avoid Force Autovacuum

From
Vishalakshi Navaneethakrishnan
Date:
Hi All,

select * from pg_database where datname = 'template0';
-[ RECORD 1 ]-+------------------------------------
datname       | template0
datdba        | 10
encoding      | 6
datcollate    | en_US.UTF-8
datctype      | en_US.UTF-8
datistemplate | t
datallowconn  | f
datconnlimit  | -1
datlastsysoid | 12865
datfrozenxid  | 2025732249
dattablespace | 1663
datacl        | {=c/postgres,postgres=CTc/postgres}

select * from pg_stat_database where datname = 'template0';
-[ RECORD 1 ]--+------------------------------
datid          | 12865
datname        | template0
numbackends    | 0
xact_commit    | 320390
xact_rollback  | 7
blks_read      | 3797
blks_hit       | 9458783
tup_returned   | 105872028
tup_fetched    | 1771782
tup_inserted   | 10
tup_updated    | 457
tup_deleted    | 10
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 0
blk_write_time | 0
stats_reset    | 2013-04-19 19:22:39.013056-07


select name, setting from pg_settings where name ~ 'vacuum';
              name               |  setting  
---------------------------------+-----------
 autovacuum                      | on
 autovacuum_analyze_scale_factor | 0.1
 autovacuum_analyze_threshold    | 50000
 autovacuum_freeze_max_age       | 200000000
 autovacuum_max_workers          | 3
 autovacuum_naptime              | 60
 autovacuum_vacuum_cost_delay    | 20
 autovacuum_vacuum_cost_limit    | -1
 autovacuum_vacuum_scale_factor  | 0.2
 autovacuum_vacuum_threshold     | 50000
 log_autovacuum_min_duration     | 0
 vacuum_cost_delay               | 0
 vacuum_cost_limit               | 200
 vacuum_cost_page_dirty          | 20
 vacuum_cost_page_hit            | 1
 vacuum_cost_page_miss           | 10
 vacuum_defer_cleanup_age        | 0
 vacuum_freeze_min_age           | 50000000
 vacuum_freeze_table_age         | 150000000
(19 rows)


Our physical RAM size is 256 GB.

Please note: we execute a standard manual vacuum daily (VACUUM FREEZE ANALYZE).

During this manual vacuum the load is normal for all databases (between 1 and 2), but the load increased to 200 during the autovacuum process.

[Previously I had set maintenance_work_mem to 256MB, and at that time the manual vacuum increased the load to 300. Only then did I increase maintenance_work_mem to 2GB. Now the manual vacuum is fine and the load stays normal while it runs, so our application is also fine during the vacuum process.]

Now the problem is autovacuum: why was it invoked, and why did it increase the load? How can we avoid this?









--
Best Regards,
Vishalakshi.N

Re: How to avoid Force Autovacuum

From
Sergey Konoplev
Date:
On Thu, Aug 8, 2013 at 10:59 PM, Vishalakshi Navaneethakrishnan
<nvishalakshi@sirahu.com> wrote:
> Now the problem is autovacuum: why was it invoked, and why did it
> increase the load? How can we avoid this?

Upgrade to the latest minor version 9.2.4 first.

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@gmail.com


Recovery.conf and PITR

From
"ascot.moss@gmail.com"
Date:
Hi,

I am trying PITR on a test machine (same PG version 9.2.4, same O/S Ubuntu 12.04 64-bit).  All archived WAL files are shipped and saved in /var/pgsql/data/archive; the latest transaction txid among them is 75666.  I want to recover PG to a point in time where XIDs are equal to or smaller than '75634', so I have the following recovery.conf (only two lines):

restore_command = 'cp /var/pgsql/data/archive/%f %p'
recovery_target_xid = '75634'


After the restart of PG, the recovery.conf is processed and renamed to recovery.done.  However, it restored everything (up to 75666) instead of stopping at '75634'.

postgres=# select txid_current();
 txid_current
--------------
        75666
(1 row)


Can you please advise?

regards




Re: Recovery.conf and PITR

From
Gabriele Bartolini
Date:
 Hello,

 On Fri, 9 Aug 2013 16:09:49 +0800, "ascot.moss@gmail.com"
 <ascot.moss@gmail.com> wrote:
> postgres=# select txid_current();
>  txid_current
> --------------
>         75666
> (1 row)
>
>
> Can you please advise?

 WAL contains REDO log information, which means only COMMITTED
 transactions will be recovered. Sequentiality of TXID refers to the
 start of the transaction. Transactions can therefore be committed in a
 different order (meaning that 75666 could for example be committed
 before 75634 in your case).

 Remember that if you do not specify "recovery_target_inclusive =
 false", your txid target will be included in the recovered server.

 For further information, I suggest you look at:
 http://www.postgresql.org/docs/9.2/static/recovery-target-settings.html

 Cheers,
 Gabriele
--
  Gabriele Bartolini - 2ndQuadrant Italia
  PostgreSQL Training, Services and Support
  Gabriele.Bartolini@2ndQuadrant.it - www.2ndQuadrant.it


Re: Recovery.conf and PITR

From
Luca Ferrari
Date:
On Fri, Aug 9, 2013 at 10:09 AM, ascot.moss@gmail.com
<ascot.moss@gmail.com> wrote:
> Hi,
>
> I am trying PITR in a test machine (same PG version 9.2.4 same O/S Ubuntu 12.04 64 bit).  All archived WAL files
> are shipped and saved in /var/pgsql/data/archive, the latest transaction txid of them is 75666.  I want to recover
> PG at a point of time that if XIDs are equal or smaller than '75634' so I have the following recovery.conf (only two lines):
>
> restore_command = 'cp /var/pgsql/data/archive/%f %p'
> recovery_target_xid = '75634'
>
>
> After the restart of PG, the recovery.conf is processed and it is renamed to recovery.done.  However it restored
> all (75666) instead of '75634'.


Any chance that 75666 committed before the one you specified as the target?
From the docs (http://www.postgresql.org/docs/9.1/static/recovery-target-settings.html):

The transactions that will be recovered are those that committed
before (and optionally including) the specified one.

Luca


archive folder housekeeping

From
"ascot.moss@gmail.com"
Date:
Hi,

I have enabled archiving in PG (v 9.2.4):

archive_mode = on
archive_command = 'test ! -f /usr/local/pgsql/data/archive/%f && cp %p /usr/local/pgsql/data/archive/%f'

I know that the pg_xlog folder is maintained by PostgreSQL automatically: when the pg_xlog folder hits a certain limit (pg_xlog is full), it will automatically archive old log files into the archive folder.  Is the parameter wal_keep_segments used to control this limit?

On the other hand, will PostgreSQL also automatically maintain the archive folder by itself, or do I need to do some housekeeping job to maintain it from time to time?  Is there any PG command available to remove archived files by (archive) date/time?

Can you  please advise?

regards




Re: archive folder housekeeping

From
Albe Laurenz
Date:
ascot.moss@gmail.com wrote:
> I have enabled archive in PG (v 9.2.4):
> 
> archive_mode = on
> archive_command = 'test ! -f /usr/local/pgsql/data/archive/%f && cp %p
> /usr/local/pgsql/data/archive/%f'
> 
> I know that pg_xlog folder is maintained by PostgreSQL automatically,  when the pg_xlog folder hits to
> certain limit  (pg_xlog is full), it will automatically archive old log files into the archive folder.
> Is the parameter wal_keep_segments used to control this limit?

WAL files will be archived immediately after the switch to
the next WAL file.

pg_xlog is cleaned up, but that has nothing to do with how
full it is; it is controlled by wal_keep_segments.
WAL files are not always cleaned up immediately when they
qualify; cleanup may lag behind a little.

> On the other hand, will PostgreSQL also automatically maintain the archive folder by itself or I need
> to do some housekeeping job to maintain it from time to time?  Is there any PG manual command
> available to remove archived files by (archive) date/time?
> 
> Can you  please advise?

PostgreSQL will not maintain the archives.
You have to do housekeeping yourself.
PostgreSQL does not know about your backup strategy
(archived WAL files should not just be deleted, but backed up).
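
One hedged sketch of such housekeeping (the retention period and path are assumptions; make sure files are backed up before deleting anything):

```shell
#!/bin/sh
# Sketch: remove archived WAL files older than 7 days.
# Assumes anything that old is already covered by a newer base backup.
ARCHIVE="${ARCHIVE:-/usr/local/pgsql/data/archive}"
if [ -d "$ARCHIVE" ]; then
    find "$ARCHIVE" -type f -mtime +7 -print -delete
fi
```

Run from cron, this keeps the archive bounded while leaving the retention policy (here, 7 days) up to your backup strategy.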

Yours,
Laurenz Albe

Re: Recovery.conf and PITR

From
wd
Date:
Try adding these settings:

pause_at_recovery_target = true
recovery_target_inclusive = false



Re: Recovery.conf and PITR

From
Luca Ferrari
Date:
On Fri, Aug 9, 2013 at 12:40 PM, wd <wd@wdicc.com> wrote:
> Try add these settings,
>
> pause_at_recovery_target=true

Be warned that this would require manually completing the recovery,
and it requires hot_standby, which is not specified in the original post.

> recovery_target_inclusive=false
>

Uhm... I guess the problem is not about the txid being included or not:
the recovery target was 75634 and transaction 75666 appeared, so
the problem seems to be an out-of-order commit of the transactions. In
that case, setting inclusive to false will not prevent 75666 from
appearing if it committed before the target xid, or am I wrong?

Luca


Re: archive folder housekeeping

From
Ian Lawrence Barwick
Date:
2013/8/9 ascot.moss@gmail.com <ascot.moss@gmail.com>:
> Is there any PG manual command available to remove archived files by (archive) date/time?

pg_archivecleanup might be of use to you:

  http://www.postgresql.org/docs/current/static/pgarchivecleanup.html
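
Its invocation takes the archive directory and the oldest WAL segment file to keep; everything older than that segment is removed. A sketch (directory from your setup, segment name hypothetical):

```
pg_archivecleanup /usr/local/pgsql/data/archive 000000010000006F00000066
```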

Regards

Ian Barwick


Re: Recovery.conf and PITR

From
"ascot.moss@gmail.com"
Date:
On 9 Aug 2013, at 7:09 PM, Luca Ferrari wrote:

> Uhm...I guess the problem is not about the txid being included or not:
> the recovery target was 75634 and the transaction 75666 appeared, so
> the problem seems to be an out-of-order commit of the transactions. In
> such case making the inclusive false will not cause 75666 to appear if
> it has committed before the target xid, or am I wrong?

Hi,

Is there a way to query the master (it is up and running) for the actual commit order of transactions by transaction ID?

regards

Recovery.conf and PITR by recovery_target_time

From
"ascot.moss@gmail.com"
Date:
Hi,

I am trying another way to test PITR: by recovery_target_time.

The test machine has the same PG version 9.2.4 and same O/S Ubuntu 12.04 64-bit.    All archived WAL files are shipped and saved in /var/pgsql/data/archive; the latest time stamp among them is "2013-08-09 19:30:01", and the full hot backup was taken at '2013-08-09 16:47:12'.

Case 1) I want to recover PG to the state before 18:03:02, when 6 tables were deleted
Case 2) I hope to recover PG to the point in time right before table TEST8 was created

Transactions in master:
16:45:01    (create 4 test tables : test1, test2, test3, test4)
16:47:12      (FULL HOT BACKUP)
17:50:22    postgres=# CREATE TABLE test5 (id INTEGER PRIMARY KEY); INSERT INTO test5 VALUES (generate_series(1,4000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test5;
17:57:13    postgres=# CREATE TABLE test6 (id INTEGER PRIMARY KEY); INSERT INTO test6 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test6;
            postgres=# \d
                 List of relations
            Schema | Name  | Type  |  Owner
            --------+-------+-------+----------
             public | test1 | table | postgres (created before full hot backup)
             public | test2 | table | postgres (created before full hot backup)
             public | test3 | table | postgres (created before full hot backup)
             public | test4 | table | postgres (created before full hot backup)
             public | test5 | table | postgres
            public | test6 | table | postgres
18:03:02    postgres=# drop table test1; DROP TABLE
            postgres=# drop table test2; DROP TABLE
            postgres=# drop table test3; DROP TABLE
            postgres=# drop table test4; DROP TABLE
            postgres=# drop table test5; DROP TABLE
            postgres=# drop table test6; DROP TABLE
            postgres=# commit; WARNING: there is no transaction in progress COMMIT
18:04:34    postgres=# CREATE TABLE test7 (id INTEGER PRIMARY KEY); INSERT INTO test7 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test7;
18:11:31    postgres=# CREATE TABLE test8 (id INTEGER PRIMARY KEY); INSERT INTO test8 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test8;
            postgres=# CREATE TABLE test9 (id INTEGER PRIMARY KEY); INSERT INTO test9 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test9;
            postgres=# CREATE TABLE test10 (id INTEGER PRIMARY KEY); INSERT INTO test10 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test10;
19:26:18    postgres=# vacuum;
            VACUUM
            postgres=# begin; INSERT INTO test10 VALUES (generate_series(2000002,3000002)); commit; end; BEGIN INSERT 0 1000001 COMMIT WARNING: there is no transaction in progress COMMIT
            postgres=# CREATE TABLE test11 (id INTEGER PRIMARY KEY); INSERT INTO test11 VALUES (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test11;
19:30:01    (ship the WAL file to test machine)




CASE-1:     '2013-08-09 17:57:55'     (only 3 lines in recovery.conf)
            restore_command = 'cp /var/pgsql/data/archive/%f %p'
            recovery_target_time = '2013-08-09 17:57:55'
            recovery_target_inclusive = false
Result:
            LOG:  starting point-in-time recovery to 2013-08-09 17:57:55
            LOG:  restored log file "000000010000006F00000066" from archive
            LOG:  redo starts at 6F/66000020
            LOG:  recovery stopping before commit of transaction 75891, time 2013-08-09 18:07:09.547682+08
            LOG:  redo done at 6F/66003DF0
            FATAL:  requested recovery stop point is before consistent recovery point
            LOG:  startup process (PID 15729) exited with exit code 1
            LOG:  terminating any other active server processes
            [1]+  Exit 1                 ...

CASE-2:      '2013-08-09 18:06:01'     (only 3 lines in recovery.conf)
            restore_command = 'cp /var/pgsql/data/archive/%f %p'
            recovery_target_time = '2013-08-09 18:06:01'
            recovery_target_inclusive = false
Result:
            LOG:  starting point-in-time recovery to 2013-08-09 18:06:01
            LOG:  restored log file "000000010000006F000000B0" from archive
            LOG:  restored log file "000000010000006F0000009B" from archive
            LOG:  redo starts at 6F/9B000020
            LOG:  recovery stopping before commit of transaction 75967, time 2013-08-09 19:30:10.217888+08
            LOG:  redo done at 6F/9B003500
            FATAL:  requested recovery stop point is before consistent recovery point
            LOG:  startup process (PID 19100) exited with exit code 1
            LOG:  terminating any other active server processes
            [1]+  Exit 1                ...


So far I can only restore everything (i.e. up to 19:30:01) but cannot recover PG to a certain point in time.

Can you please advise?

regards






Re: archive folder housekeeping

From
Michael Paquier
Date:
On Fri, Aug 9, 2013 at 9:12 PM, Ian Lawrence Barwick <barwick@gmail.com> wrote:
> 2013/8/9 ascot.moss@gmail.com <ascot.moss@gmail.com>:
>> Is there any PG manual command available to remove archived files by (archive) date/time?
>
> pg_archivecleanup might be of use to you:
>
>   http://www.postgresql.org/docs/current/static/pgarchivecleanup.html
Yes, this is particularly useful and easy to maintain when enabled
with archive_cleanup_command in the recovery.conf file of a slave
fetching the archives.
--
Michael


Re: Recovery.conf and PITR by recovery_target_time

From
Albe Laurenz
Date:
ascot.moss@gmail.com wrote:
> I am trying another way to test PITR: by recovery_target_time.
> 
> The test machine has the same PG version 9.2.4 and same O/S Ubuntu 12.04 64 bit.    All archived WAL
> files are shipped and saved in /var/pgsql/data/archive, the latest time stamp of them is "2013-08-09
> 19:30:01", the full hot backup time is at '2013-08-09 16:47:12'.
> 
> Case 1) I want to recover PG to the state before 18:03:02 that there were 6 tables deleted
> Case 2) Hope to recover PG to the point of time right before table TEST8 was created
> 
> Transactions in master:
> 16:45:01    (create 4 test tables : test1, test2, test3, test4)
> 16:47:12      (FULL HOT BACKUP)
> 17:50:22      postgres=# CREATE TABLE test5 (id INTEGER PRIMARY KEY); INSERT INTO test5 VALUES
> (generate_series(1,4000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test5;
> 17:57:13      postgres=# CREATE TABLE test6 (id INTEGER PRIMARY KEY); INSERT INTO test6 VALUES
> (generate_series(1,1000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test6;
>             postgres=# \d
>                  List of relations
>             Schema | Name  | Type  |  Owner
>             --------+-------+-------+----------
>              public | test1 | table | postgres (created before full hot backup)
>              public | test2 | table | postgres (created before full hot backup)
>              public | test3 | table | postgres (created before full hot backup)
>              public | test4 | table | postgres (created before full hot backup)
>              public | test5 | table | postgres
>             public | test6 | table | postgres
> 18:03:02    postgres=# drop table test1; DROP TABLE
>             postgres=# drop table test2; DROP TABLE
>             postgres=# drop table test3; DROP TABLE
>             postgres=# drop table test4; DROP TABLE
>             postgres=# drop table test5; DROP TABLE
>             postgres=# drop table test6; DROP TABLE
>             postgres=# commit; WARNING: there is no transaction in progress COMMIT
> 18:04:34     postgres=# CREATE TABLE test7 (id INTEGER PRIMARY KEY); INSERT INTO test7 VALUES
> (generate_series(1,1000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test7;
> 18:11:31    postgres=# CREATE TABLE test8 (id INTEGER PRIMARY KEY); INSERT INTO test8 VALUES
> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test8;
>             postgres=# CREATE TABLE test9 (id INTEGER PRIMARY KEY); INSERT INTO test9 VALUES
> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test9;
>             postgres=# CREATE TABLE test10 (id INTEGER PRIMARY KEY); INSERT INTO test10 VALUES
> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test10;
> 19:26:18    postgres=# vacuum;
>             VACUUM
>             postgres=# begin; INSERT INTO test10 VALUES
> (generate_series(2000002,3000002));commit; end; BEGIN INSERT 0 1000001 COMMIT WARNING: there is no
> transaction in progress COMMIT
>             postgres=# CREATE TABLE test11 (id INTEGER PRIMARY KEY); INSERT INTO test11 VALUES
> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test11;
> 19:30:01    (ship the WAL file to test machine)
> 
> 
> 
> 
> CASE-1:     '2013-08-09 17:57:55'     (only 3 lines in recovery.conf)
>             restore_command = 'cp /var/pgsql/data/archive/%f %p'
>             recovery_target_time = '2013-08-09 17:57:55'
>             recovery_target_inclusive = false
> Result:
>             LOG:  starting point-in-time recovery to 2013-08-09 17:57:55
>             LOG:  restored log file "000000010000006F00000066" from archive
>             LOG:  redo starts at 6F/66000020
>             LOG:  recovery stopping before commit of transaction 75891, time 2013-08-09
> 18:07:09.547682+08
>             LOG:  redo done at 6F/66003DF0
>             FATAL:  requested recovery stop point is before consistent recovery point
>             LOG:  startup process (PID 15729) exited with exit code 1
>             LOG:  terminating any other active server processes
>             [1]+  Exit 1                 ...
> 
> CASE-2:      '2013-08-09 18:06:01'     (only 3 lines in recovery.conf)
>             restore_command = 'cp /var/pgsql/data/archive/%f %p'
>             recovery_target_time = '2013-08-09 18:06:01'
>             recovery_target_inclusive = false
> Result:
>             LOG:  starting point-in-time recovery to 2013-08-09 18:06:01
>             LOG:  restored log file "000000010000006F000000B0" from archive
>             LOG:  restored log file "000000010000006F0000009B" from archive
>             LOG:  redo starts at 6F/9B000020
>             LOG:  recovery stopping before commit of transaction 75967, time 2013-08-09
> 19:30:10.217888+08
>             LOG:  redo done at 6F/9B003500
>             FATAL:  requested recovery stop point is before consistent recovery point
>             LOG:  startup process (PID 19100) exited with exit code 1
>             LOG:  terminating any other active server processes
>             [1]+  Exit 1                ...
> 
> 
> So far I can only restore ALL (i.e. up to 19:30:01) but cannot recover PG at certain Point-of-time.

The error message:
  FATAL:  requested recovery stop point is before consistent recovery point
suggests to me that the online backup had not ended at that time.

What exactly did you do at 16:47:12?
Did you call pg_stop_backup() after your backup?
Is there a file "backup_label" in your data directory?
You can only recover to a point in time *after* the time of backup completion.

Another hint: specify the time zone for recovery_target_time, like
2013-08-09 18:06:01 PST
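For example, a minimal recovery.conf along these lines (a sketch; the archive path and target time are taken from your mail, and the zone suffix should match your server's time zone — your log timestamps suggest +08):

```
restore_command = 'cp /var/pgsql/data/archive/%f %p'
recovery_target_time = '2013-08-09 18:06:01 +08'
recovery_target_inclusive = false
```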

Yours,
Laurenz Albe

Re: Recovery.conf and PITR by recovery_target_time

From
"ascot.moss@gmail.com"
Date:
Hi,

>> 16:47:12

SELECT pg_start_backup('hot_backup');
"tar cfP" the PG "data" folder
SELECT pg_stop_backup();

regards

On 9 Aug 2013, at 9:55 PM, Albe Laurenz wrote:

> ascot.moss@gmail.com wrote:
>> I am trying another way to test PITR: by recovery_target_time.
>>
>> The test machine has the same PG version 9.2.4 and same O/S Ubuntu 12.04 64 bit.    All archived WAL
>> files are shipped and saved in /var/pgsql/data/archive, the latest time stamp of them is "2013-08-09
>> 19:30:01", the full hot backup time is at '2013-08-09 16:47:12'.
>>
>> Case 1) I want to recover PG to the state before 18:03:02 that there were 6 tables deleted
>> Case 2) Hope to recover PG to the point of time right before table TEST8 was created
>>
>> Transactions in master:
>> 16:45:01    (create 4 test tables : test1, test2, test3, test4)
>> 16:47:12      (FULL HOT BACKUP)
>> 17:50:22      postgres=# CREATE TABLE test5 (id INTEGER PRIMARY KEY); INSERT INTO test5 VALUES
>> (generate_series(1,4000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test5;
>> 17:57:13      postgres=# CREATE TABLE test6 (id INTEGER PRIMARY KEY); INSERT INTO test6 VALUES
>> (generate_series(1,1000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test6;
>>             postgres=# \d
>>                  List of relations
>>             Schema | Name  | Type  |  Owner
>>             --------+-------+-------+----------
>>             public | test1 | table | postgres (created before full hot backup)
>>             public | test2 | table | postgres (created before full hot backup)
>>             public | test3 | table | postgres (created before full hot backup)
>>             public | test4 | table | postgres (created before full hot backup)
>>             public | test5 | table | postgres
>>             public | test6 | table | postgres
>> 18:03:02    postgres=# drop table test1; DROP TABLE
>>             postgres=# drop table test2; DROP TABLE
>>             postgres=# drop table test3; DROP TABLE
>>             postgres=# drop table test4; DROP TABLE
>>             postgres=# drop table test5; DROP TABLE
>>             postgres=# drop table test6; DROP TABLE
>>             postgres=# commit; WARNING: there is no transaction in progress COMMIT
>> 18:04:34     postgres=# CREATE TABLE test7 (id INTEGER PRIMARY KEY); INSERT INTO test7 VALUES
>> (generate_series(1,1000000));  EXPLAIN ANALYZE SELECT COUNT(*) FROM test7;
>> 18:11:31    postgres=# CREATE TABLE test8 (id INTEGER PRIMARY KEY); INSERT INTO test8 VALUES
>> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test8;
>>             postgres=# CREATE TABLE test9 (id INTEGER PRIMARY KEY); INSERT INTO test9 VALUES
>> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test9;
>>             postgres=# CREATE TABLE test10 (id INTEGER PRIMARY KEY); INSERT INTO test10 VALUES
>> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test10;
>> 19:26:18    postgres=# vacuum;
>>             VACUUM
>>             postgres=# begin; INSERT INTO test10 VALUES
>> (generate_series(2000002,3000002));commit; end; BEGIN INSERT 0 1000001 COMMIT WARNING: there is no
>> transaction in progress COMMIT
>>             postgres=# CREATE TABLE test11 (id INTEGER PRIMARY KEY); INSERT INTO test11 VALUES
>> (generate_series(1,1000000)); EXPLAIN ANALYZE SELECT COUNT(*) FROM test11;
>> 19:30:01    (ship the WAL file to test machine)
>>
>>
>>
>>
>> CASE-1:     '2013-08-09 17:57:55'     (only 3 lines in recovery.conf)
>>             restore_command = 'cp /var/pgsql/data/archive/%f %p'
>>             recovery_target_time = '2013-08-09 17:57:55'
>>             recovery_target_inclusive = false
>> Result:
>>             LOG:  starting point-in-time recovery to 2013-08-09 17:57:55
>>             LOG:  restored log file "000000010000006F00000066" from archive
>>             LOG:  redo starts at 6F/66000020
>>             LOG:  recovery stopping before commit of transaction 75891, time 2013-08-09
>> 18:07:09.547682+08
>>             LOG:  redo done at 6F/66003DF0
>>             FATAL:  requested recovery stop point is before consistent recovery point
>>             LOG:  startup process (PID 15729) exited with exit code 1
>>             LOG:  terminating any other active server processes
>>             [1]+  Exit 1                 ...
>>
>> CASE-2:      '2013-08-09 18:06:01'     (only 3 lines in recovery.conf)
>>             restore_command = 'cp /var/pgsql/data/archive/%f %p'
>>             recovery_target_time = '2013-08-09 18:06:01'
>>             recovery_target_inclusive = false
>> Result:
>>             LOG:  starting point-in-time recovery to 2013-08-09 18:06:01
>>             LOG:  restored log file "000000010000006F000000B0" from archive
>>             LOG:  restored log file "000000010000006F0000009B" from archive
>>             LOG:  redo starts at 6F/9B000020
>>             LOG:  recovery stopping before commit of transaction 75967, time 2013-08-09
>> 19:30:10.217888+08
>>             LOG:  redo done at 6F/9B003500
>>             FATAL:  requested recovery stop point is before consistent recovery point
>>             LOG:  startup process (PID 19100) exited with exit code 1
>>             LOG:  terminating any other active server processes
>>             [1]+  Exit 1                ...
>>
>>
>> So far I can only restore ALL (i.e. up to 19:30:01) but cannot recover PG at certain Point-of-time.
>
> The error message:
>  FATAL:  requested recovery stop point is before consistent recovery point
> suggests to me that the online backup had not ended at that time.
>
> What exactly did you do at 16:47:12?
> Did you call pg_stop_backup() after your backup?
> Is there a file "backup_label" in your data directory?
> You can only recover to a point in time *after* the time of backup completion.
>
> Another hint: specify the time zone for recovery_target_time, like
> 2013-08-09 18:06:01 PST
>
> Yours,
> Laurenz Albe



Re: How to avoid Force Autovacuum

From
Kevin Grittner
Date:
Vishalakshi Navaneethakrishnan <nvishalakshi@sirahu.com> wrote:

> select * from pg_database where datname = 'template0';
> -[ RECORD 1 ]-+------------------------------------
> datname       | template0
> datdba        | 10
> encoding      | 6
> datcollate    | en_US.UTF-8
> datctype      | en_US.UTF-8
> datistemplate | t
> datallowconn  | f
> datconnlimit  | -1
> datlastsysoid | 12865
> datfrozenxid  | 2025732249
> dattablespace | 1663
> datacl        | {=c/postgres,postgres=CTc/postgres}
>
>
> select * from pg_stat_database where datname = 'template0';
> -[ RECORD 1 ]--+------------------------------
> datid          | 12865
> datname        | template0
> numbackends    | 0
> xact_commit    | 320390
> xact_rollback  | 7
> blks_read      | 3797
> blks_hit       | 9458783
> tup_returned   | 105872028
> tup_fetched    | 1771782
> tup_inserted   | 10
> tup_updated    | 457
> tup_deleted    | 10
> conflicts      | 0
> temp_files     | 0
> temp_bytes     | 0
> deadlocks      | 0
> blk_read_time  | 0
> blk_write_time | 0
> stats_reset    | 2013-04-19 19:22:39.013056-07

Well, that's why template0 is getting vacuumed.  At some point
someone must have set it to allow connections; otherwise you would
have zero for commits, rollbacks, and all those block and tuple
counts.  I'm not sure whether you can get past that now by allowing
connections, running VACUUM FREEZE ANALYZE on it, and disabling
connections again, but it might be worth a try.
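A sketch of that procedure (untested here; it must be run as a superuser, and note that updating pg_database directly is a hack):

```sql
-- temporarily allow connections to template0
UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';

-- then, connected to template0, run:
--   VACUUM FREEZE ANALYZE;

-- finally, lock it down again
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';
```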

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How to avoid Force Autovacuum

From
Jeff Janes
Date:
On Wed, Aug 7, 2013 at 2:46 AM, Vishalakshi Navaneethakrishnan
<nvishalakshi@sirahu.com> wrote:
> Hi All,
>
> We have one production database server , having 6 DBs, Postgres 9.2.1
> version.
>

You should probably upgrade to 9.2.4.

...

> log_autovacuum_min_duration = 0

That is good for debugging.  But what are you seeing in the log as the
result of this?


>
> Even sometimes autovacuum running on the databases and increase the load
> (Above 200) very much and the server was unresponsive
>
> I have seen the autovacum worker process in top command,

How certain are you that autovacuum is causing the high load?  The
simple fact that you see it show up in top is not very compelling
evidence.  It seems at least as likely to me that autovacuum is just
another victim, and not the cause.

> While executing pg_stat_activity as the postgres user, I have seen the PID
> of the autovacuum process in the result, but the query field is empty

I've never seen that.  I always see a query of something like
"autovacuum: VACUUM public.pgbench_accounts (to prevent wraparound)".
I don't know what part of the vacuuming process you might be stuck at
which would not have such a query--I would think it would have to be
either during the start-up phase or the shut-down phase. Could you
attach to the worker with gdb when this is going on, and get a
backtrace?
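Roughly like this (assuming gdb is available on the server; substitute the worker's actual PID from top or pg_stat_activity):

```
gdb -p <worker-pid>     # attach to the stuck autovacuum worker
(gdb) bt                # print the backtrace
(gdb) detach            # let the worker continue
(gdb) quit
```

Attaching briefly pauses the process, so detach promptly on a production box.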

>
> since template0 has no activity, why is age(datfrozenxid) increasing
> heavily and reaching the threshold value?

Time marches on.  If datfrozenxid is not changed, but the current xid
advances, then the age of datfrozenxid must increase.
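One way to watch this from SQL is a query along these lines:

```sql
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```

The reported age will climb for every database, including template0, simply because the cluster-wide xid counter keeps advancing.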

Cheers,

Jeff


Re: How to avoid Force Autovacuum

From
Jeff Janes
Date:
On Fri, Aug 9, 2013 at 9:06 AM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Vishalakshi Navaneethakrishnan <nvishalakshi@sirahu.com> wrote:
>
>> select * from pg_database where datname = 'template0';
>> -[ RECORD 1 ]-+------------------------------------
>> datname       | template0
>> datdba        | 10
>> encoding      | 6
>> datcollate    | en_US.UTF-8
>> datctype      | en_US.UTF-8
>> datistemplate | t
>> datallowconn  | f
>> datconnlimit  | -1
>> datlastsysoid | 12865
>> datfrozenxid  | 2025732249
>> dattablespace | 1663
>> datacl        | {=c/postgres,postgres=CTc/postgres}
>>
>>
>> select * from pg_stat_database where datname = 'template0';
>> -[ RECORD 1 ]--+------------------------------
>> datid          | 12865
>> datname        | template0
>> numbackends    | 0
>> xact_commit    | 320390
>> xact_rollback  | 7
>> blks_read      | 3797
>> blks_hit       | 9458783
>> tup_returned   | 105872028
>> tup_fetched    | 1771782
>> tup_inserted   | 10
>> tup_updated    | 457
>> tup_deleted    | 10
>> conflicts      | 0
>> temp_files     | 0
>> temp_bytes     | 0
>> deadlocks      | 0
>> blk_read_time  | 0
>> blk_write_time | 0
>> stats_reset    | 2013-04-19 19:22:39.013056-07
>
> Well, that's why template0 is getting vacuumed.  At some point
> someone must have set it to allow connections; otherwise you would
> have zero for commits, rollbacks, and all those block and tuple
> counts.

Non-zero values are normal.  There is no mechanism to prevent
template0 from getting vacuumed.  template0 will get vacuumed once
every autovacuum_freeze_max_age even if no one has ever connected to
it, and that vacuum will cause block reads and writes to happen.  (But
I'm not sure why it would contribute xact_rollback or tup_updated, and
the tup_returned seems awfully high to be due to only anti-wrap-around
vacs.)
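To see which tables in a given database are approaching that forced vacuum, something like this can help (a sketch; run it in each database):

```sql
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```

Tables whose age nears autovacuum_freeze_max_age (200 million by default) will be vacuumed "(to prevent wraparound)" regardless of other settings.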

Cheers,

Jeff


Re: Recovery.conf and PITR

From
Luca Ferrari
Date:
On Fri, Aug 9, 2013 at 2:17 PM, ascot.moss@gmail.com
<ascot.moss@gmail.com> wrote:

> Is there a way to query the master (it is up and running) about the actual commit sequence by transaction IDs?


The information is within the clog, but nothing comes to mind for
inspecting the commit sequence from an admin's point of view.
Anyway, I suspect your approach to recovery is wrong: you are
asking the database to recover at least up to transaction x, so
why worry about other interleaved transactions?

Luca


Re: How to avoid Force Autovacuum

From
Vishalakshi Navaneethakrishnan
Date:
Hi Team,

Today we faced the autovacuum issue again. Is there any workaround for this other than upgrading? If so, can you please give me tuning parameters?


> log_autovacuum_min_duration = 0

That is good for debugging.  But what are you seeing in the log as the
result of this?

There is nothing logged during autovacuum.

This is the PID in pg_stat_activity:

postgres=# select * from pg_stat_activity where pid = 25769;
-[ RECORD 1 ]----+------------------------------
datid            | 16408
datname          | db1
pid              | 25769
usesysid         | 10
usename          | postgres
application_name | 
client_addr      | 
client_hostname  | 
client_port      | 
backend_start    | 2013-08-13 04:00:14.767093-07
xact_start       | 2013-08-13 04:00:14.765484-07
query_start      | 
state_change     | 
waiting          | f
state            | 
query            | 

These are the autovacuum worker processes shown in top:

postgres 25769 30705 93 03:54 ?        00:01:45 postgres: autovacuum worker process   db1
postgres 24680 30705 84 03:55 ?        00:00:33 postgres: autovacuum worker process   db2
postgres 24692 30705 79 03:55 ?        00:00:26 postgres: autovacuum worker process   db3



On Sat, Aug 10, 2013 at 12:23 AM, Jeff Janes <jeff.janes@gmail.com> wrote:
On Fri, Aug 9, 2013 at 9:06 AM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Vishalakshi Navaneethakrishnan <nvishalakshi@sirahu.com> wrote:
>
>> select * from pg_database where datname = 'template0';
>> -[ RECORD 1 ]-+------------------------------------
>> datname       | template0
>> datdba        | 10
>> encoding      | 6
>> datcollate    | en_US.UTF-8
>> datctype      | en_US.UTF-8
>> datistemplate | t
>> datallowconn  | f
>> datconnlimit  | -1
>> datlastsysoid | 12865
>> datfrozenxid  | 2025732249
>> dattablespace | 1663
>> datacl        | {=c/postgres,postgres=CTc/postgres}
>>
>>
>> select * from pg_stat_database where datname = 'template0';
>> -[ RECORD 1 ]--+------------------------------
>> datid          | 12865
>> datname        | template0
>> numbackends    | 0
>> xact_commit    | 320390
>> xact_rollback  | 7
>> blks_read      | 3797
>> blks_hit       | 9458783
>> tup_returned   | 105872028
>> tup_fetched    | 1771782
>> tup_inserted   | 10
>> tup_updated    | 457
>> tup_deleted    | 10
>> conflicts      | 0
>> temp_files     | 0
>> temp_bytes     | 0
>> deadlocks      | 0
>> blk_read_time  | 0
>> blk_write_time | 0
>> stats_reset    | 2013-04-19 19:22:39.013056-07
>
> Well, that's why template0 is getting vacuumed.  At some point
> someone must have set it to allow connections; otherwise you would
> have zero for commits, rollbacks, and all those block and tuple
> counts.

Non-zero values are normal.  There is no mechanism to prevent
template0 from getting vacuumed.  template0 will get vacuumed once
every autovacuum_freeze_max_age even if no one has ever connected to
it, and that vacuum will cause block reads and writes to happen.  (But
I'm not sure why it would contribute xact_rollback or tup_updated, and
the tup_returned seems awfully high to be due to only anti-wrap-around
vacs.)

Cheers,

Jeff



--
Best Regards,
Vishalakshi.N