Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Nathan Bossart
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: 20220908233407.GA2244644@nathanxps13
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Jacob Champion <jchampion@timescale.com>)
List: pgsql-hackers
On Thu, Sep 08, 2022 at 04:29:10PM -0700, Jacob Champion wrote:
> On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>> IIUC the main benefit of this approach is that it isn't dependent on
>> binary-upgrade mode, which seems to be a goal based on the discussion
>> upthread [0].
> 
> To clarify, I agree that pg_dump should contain the core fix. What I'm
> questioning is the addition of --dump-options to make use of that fix
> from pg_upgrade, since it also lets the user do "exciting" new things
> like --exclude-schema and --include-foreign-data and so on. I don't
> think we should let them do that without a good reason.

Ah, yes, I think that is a fair point.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
