Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Jacob Champion
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: 663393ca-b2ff-26f0-2e2d-adc942aff4fd@timescale.com
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Nathan Bossart <nathandbossart@gmail.com>)
List: pgsql-hackers
On 8/24/22 17:32, Nathan Bossart wrote:
> I'd like to revive this thread, so I've created a commitfest entry [0] and
> attached a hastily rebased patch that compiles and passes the tests.  I am
> aiming to spend some more time on this in the near future.

Just to clarify, was Justin's statement upthread (that the XID problem
is fixed) correct? And is this patch just trying to improve the
remaining memory and lock usage problems?

I took a quick look at the pg_upgrade diffs. I agree with Jan that the
escaping problem is a pretty bad smell, but even putting that aside for
a bit, is it safe to expose arbitrary options to pg_dump/restore during
upgrade? It's super flexible, but I can imagine that some of those flags
might really mess up the new cluster...
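
To illustrate the escaping smell mentioned above, here is a minimal shell sketch (the option value is hypothetical, not taken from the patch): if user-supplied pg_dump options are passed through as one flat string, word splitting can break a single option containing a space into two arguments, whereas an array preserves the boundary.

```shell
#!/usr/bin/env bash
# Hypothetical example value containing a space; not from the patch.
user_opt='--exclude-table=big objects'

# Flat-string pass-through: unquoted expansion word-splits the single
# option into two separate arguments.
set -- $user_opt
naive_count=$#            # splits into 2 words

# Array pass-through: the argument boundary is preserved as-is.
args=("$user_opt")
array_count=${#args[@]}   # stays 1 element

echo "naive=$naive_count array=$array_count"
```

This is the general shell-quoting hazard behind the concern, not a claim about how the patch actually forwards options.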

And yeah, if you choose to do that then you get to keep both pieces, I
guess, but I like that pg_upgrade tries to be (IMO) fairly bulletproof.

--Jacob
