Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Michael Paquier
Subject Re: pg_upgrade failing for 200+ million Large Objects
Msg-id Y0ZVk5ILZqYKEL+Z@paquier.xyz
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Nathan Bossart <nathandbossart@gmail.com>)
List pgsql-hackers
On Thu, Sep 08, 2022 at 04:34:07PM -0700, Nathan Bossart wrote:
> On Thu, Sep 08, 2022 at 04:29:10PM -0700, Jacob Champion wrote:
>> To clarify, I agree that pg_dump should contain the core fix. What I'm
>> questioning is the addition of --dump-options to make use of that fix
>> from pg_upgrade, since it also lets the user do "exciting" new things
>> like --exclude-schema and --include-foreign-data and so on. I don't
>> think we should let them do that without a good reason.
>
> Ah, yes, I think that is a fair point.

It has been more than four weeks since the last activity on this
thread, and there is what looks like outstanding feedback to me, so I
have marked it as Returned with Feedback (RwF) for the time being.
--
Michael

