Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: 1873872.1722035181@sss.pgh.pa.us
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Alexander Korotkov <aekorotkov@gmail.com>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects
List: pgsql-hackers
Alexander Korotkov <aekorotkov@gmail.com> writes:
> On Sat, Jul 27, 2024 at 1:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> It's fairly easy to fix things so that this example doesn't cause
>> that to happen: we just need to issue these updates as one command
>> not N commands per table.

> I was thinking about counting actual number of queries, not TOC
> entries for transaction number as a more universal solution.  But that
> would require usage of psql_scan() or writing simpler alternative for
> this particular purpose.  That looks quite annoying.  What do you
> think?

The assumption underlying what we're doing now is that the number
of SQL commands per TOC entry is limited.  I'd prefer to fix the
code so that that assumption is correct, at least in normal cases.
I confess I'd not looked closely enough at the binary-upgrade support
code to realize it wasn't correct already :-(.  If we go that way,
we can fix this while also making pg_upgrade faster rather than
slower.  I also expect that it'll be a lot simpler than putting
a full SQL parser in pg_restore.

            regards, tom lane
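
[Editorial illustration, not part of the original message: the consolidation discussed above would replace a long run of per-object commands in a TOC entry with a single catalog update per table. A minimal sketch, using hypothetical OIDs and a hypothetical role name rather than actual pg_dump output:]

```sql
-- N separate commands, one per large object (illustrative):
ALTER LARGE OBJECT 1001 OWNER TO joe;
ALTER LARGE OBJECT 1002 OWNER TO joe;

-- ... collapsed into one command covering many objects at once,
-- updating the pg_largeobject_metadata catalog directly
-- (binary-upgrade style):
UPDATE pg_largeobject_metadata
SET lomowner = (SELECT oid FROM pg_roles WHERE rolname = 'joe')
WHERE oid IN (1001, 1002);
```

[With one command per table instead of one per object, the number of SQL commands per TOC entry stays bounded, which is the assumption the restore-side transaction batching relies on.]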


