Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Msg-id: 3023817.1710629175@sss.pgh.pa.us
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Laurenz Albe <laurenz.albe@cybertec.at>)
List: pgsql-hackers
Laurenz Albe <laurenz.albe@cybertec.at> writes:
> On Fri, 2024-03-15 at 19:18 -0400, Tom Lane wrote:
>> This patch seems to have stalled out again.  In hopes of getting it
>> over the finish line, I've done a bit more work to address the two
>> loose ends I felt were probably essential to deal with:

> Applies and builds fine.
> I didn't scrutinize the code, but I gave it a spin on a database with
> 15 million (small) large objects.  I tried pg_upgrade --link with and
> without the patch on a debug build with the default configuration.

Thanks for looking at it!
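
(For anyone who wants to reproduce a similar test: one simple way to
stock the old cluster, with the object count, paths, and version
numbers below being purely illustrative, is

    -- create 15 million empty large objects; lo_create(0) lets
    -- the server assign each new OID
    SELECT lo_create(0) FROM generate_series(1, 15000000);

and then run something like

    $ pg_upgrade --link \
        --old-bindir=/usr/lib/postgresql/16/bin \
        --new-bindir=/usr/lib/postgresql/17/bin \
        --old-datadir=/srv/pg16/data \
        --new-datadir=/srv/pg17/data

against it.)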

> Without the patch:
> Runtime: 74.5 minutes

> With the patch:
> Runtime: 70 minutes

Hm, I'd have hoped for a bit more runtime improvement.  But perhaps
not --- most of the win we saw upthread was from parallelism, and
I don't think you'd get any parallelism in a pg_upgrade with all
the data in one database.  (Perhaps there is more to do there later,
but I'm still not clear on how this should interact with the existing
cross-DB parallelism; so I'm content to leave that question for
another patch.)
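
(For reference, the existing cross-DB parallelism is what the --jobs
switch drives, e.g.

    $ pg_upgrade --link --jobs=8 ...

where up to 8 databases are dumped and restored concurrently; with all
the large objects in a single database that buys you nothing.)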

            regards, tom lane


