Re: pg_upgrade and wraparound - Mailing list pgsql-general

From Alexander Shutyaev
Subject Re: pg_upgrade and wraparound
Date
Msg-id CAGBp8g8R9TBBE0XXYnonKW+NqUVfSsgaE21=9kCE+aCtGJ82=g@mail.gmail.com
Whole thread Raw
In response to Re: pg_upgrade and wraparound  ("Daniel Verite" <daniel@manitou-mail.org>)
Responses Re: pg_upgrade and wraparound  (Alexander Shutyaev <shutyaev@gmail.com>)
List pgsql-general
Back again,

>> Alexander, could you hack things up so autovacuum logging is enabled
>> (log_autovacuum_min_duration=0), and see whether it's triggered?

I've changed this config setting in both the 9.6 and 10.4 postgresql.conf, then ran pg_upgrade once more.

However, I'm not sure how I can tell whether autovacuum was triggered. I've tried grepping the logs for lines containing both 'vacuum' and 'auto' (case-insensitive) - there were none. If you can be more specific, I can look for anything else.
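For reference, with log_autovacuum_min_duration = 0 autovacuum activity shows up in the server log as lines beginning "automatic vacuum of table ...". Here's a small sketch of what to grep for - it writes a fake log excerpt to a temp file because the real log path varies per installation:

```shell
# A sketch only: fabricate a log excerpt showing the line format produced by
# log_autovacuum_min_duration = 0, then count the matches. Against a real
# cluster you would grep the actual server log file instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
LOG:  automatic vacuum of table "mydb.public.foo": index scans: 1
LOG:  automatic analyze of table "mydb.public.foo"
LOG:  checkpoint starting: time
EOF
grep -c "automatic vacuum of table" "$LOG"   # prints 1; 0 means autovacuum never ran
rm -f "$LOG"
```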

I've googled how one can see that autovacuum is working, and found this query, which I ran on the 10.4 cluster:

select count(*) from pg_stat_all_tables where last_autovacuum is not null;
 count
-------
     0
(1 row)

So it seems autovacuum is indeed not running, just as you suspected.

If I correctly summarized all your responses, the problem is that:

1) pg_restore (as part of pg_upgrade) inserts each large object in a different transaction

That seems true to me given the log output: each time an object is inserted, the number in the wraparound warning decreases by 1.

2) autovacuum doesn't run while the database is being restored

That also seems true (see above)

3) the number of large objects is so big that restoring them causes transaction ID wraparound

Here's the number of large objects taken from the 9.6 cluster (spaces added manually for clarity):

select count(*) from pg_largeobject_metadata ;
   count
-----------
 133 635 871
(1 row)

If I've googled correctly, the transaction ID is a 32-bit integer, so its limit is 2 147 483 647, which is a lot more. I guess I'm missing something.
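A quick sanity check of those numbers (plain shell arithmetic; note that pg_upgrade carries the next-transaction-ID counter over from the old cluster, so the restore does not start counting from zero):

```shell
# The 32-bit XID limit quoted above, and how many times the large-object
# count fits into it if the counter had started at zero.
echo $(( (1 << 31) - 1 ))            # 2147483647
echo $(( 2147483647 / 133635871 ))   # 16, i.e. roughly 16x headroom from zero
```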

This is just my attempt to summarize our progress so far.

I'm further open to your suggestions.

2018-06-12 14:32 GMT+03:00 Daniel Verite <daniel@manitou-mail.org>:
        Andres Freund wrote:

> I'm not entirely clear why pg_restore appears to use a separate
> transaction for each large object, surely exacerbating the problem.

To make sure that per-object locks don't fill up the shared
lock table?
There might be hundreds of thousands of large objects.
If it had to restore N objects per transaction, would it know
how to compute N that is large enough to be effective
and small enough not to exhaust the shared table?

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
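For a rough sense of the shared lock table capacity Daniel refers to: its size is approximately max_locks_per_transaction * (max_connections + max_prepared_transactions). With stock defaults (assumed here; check your postgresql.conf) that comes to only a few thousand slots:

```shell
# Approximate shared lock table capacity with default settings
# (assumed values -- adjust to match your own configuration).
max_locks_per_transaction=64
max_connections=100
max_prepared_transactions=0
echo $(( max_locks_per_transaction * (max_connections + max_prepared_transactions) ))
# 6400 slots, versus hundreds of thousands of per-object locks
```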
