Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Jacob Champion
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: CAAWbhmgUb8p7ff_ZX5jCvqM=ipPxbbDJTXMNVzH-Ho_CXVkRHA@mail.gmail.com
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects
           Re: pg_upgrade failing for 200+ million Large Objects
List: pgsql-hackers
On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
> IIUC the main benefit of this approach is that it isn't dependent on
> binary-upgrade mode, which seems to be a goal based on the discussion
> upthread [0].

To clarify, I agree that pg_dump should contain the core fix. What I'm
questioning is the addition of --dump-options to make use of that fix
from pg_upgrade, since it also lets the user do "exciting" new things
like --exclude-schema and --include-foreign-data and so on. I don't
think we should let them do that without a good reason.
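
For illustration, something like the following would become possible (the
exact syntax of the proposed --dump-options pass-through is my assumption
here; --exclude-schema and --include-foreign-data are existing pg_dump
switches, and the schema/server names are made up):

    pg_upgrade -b /old/bin -B /new/bin -d /old/data -D /new/data \
        --dump-options='--exclude-schema=archive --include-foreign-data=some_server'

That turns pg_upgrade into a general-purpose front end for selective dumps,
which is a much bigger surface than just fixing the large-object case.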

Thanks,
--Jacob


