Re: Horribly slow pg_upgrade performance with many Large Objects - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: Horribly slow pg_upgrade performance with many Large Objects
Date
Msg-id CAMT0RQStPtHfKwowd88Q0tynX0x=uJSKn=ihP8syhDJ6cH3DHQ@mail.gmail.com
In response to Re: Horribly slow pg_upgrade performance with many Large Objects  (Nathan Bossart <nathandbossart@gmail.com>)
Responses Re: Horribly slow pg_upgrade performance with many Large Objects
List pgsql-hackers
On Tue, Jul 8, 2025 at 11:06 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Sun, Jul 06, 2025 at 02:48:08PM +0200, Hannu Krosing wrote:
> > Did a quick check of the patch and it seems to work ok.
>
> Thanks for taking a look.
>
> > What do you think of the idea of not dumping pg_shdepend here, but
> > instead adding the required entries after loading
> > pg_largeobject_metadata based on the contents of it ?
>
> While not dumping it might save a little space during upgrade, the query
> seems to be extremely slow.  So, I don't see any strong advantage.

Yeah, it looks like the part that avoids duplicates is what makes it slow.

If you run it without the last WHERE clause it is reasonably fast, and
then it behaves the same as just inserting from the dump, which also
does not have any checks against duplicates.
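The query under discussion is not quoted in this message, so the
following is only a rough sketch of the kind of INSERT ... SELECT being
talked about, assuming the usual way large-object ownership is recorded
in pg_shdepend. The column values, the bootstrap-superuser filter, and
the trailing NOT EXISTS duplicate check (standing in for "the last
WHERE" above) are illustrative assumptions, not the actual query or
patch; writing to pg_shdepend directly also needs superuser plus
allow_system_table_mods.

-- Hypothetical sketch: repopulate owner dependencies for large objects
-- from a freshly loaded pg_largeobject_metadata.
INSERT INTO pg_catalog.pg_shdepend
        (dbid, classid, objid, objsubid, refclassid, refobjid, deptype)
SELECT  (SELECT oid FROM pg_database WHERE datname = current_database()),
        'pg_largeobject'::regclass,
        m.oid,
        0,
        'pg_authid'::regclass,
        m.lomowner,
        'o'                              -- SHARED_DEPENDENCY_OWNER
FROM    pg_largeobject_metadata AS m
WHERE   m.lomowner <> 10                 -- skip the pinned bootstrap superuser
-- The duplicate check below is the part that makes such a query slow;
-- dropping it matches the behaviour of plain inserts from a dump.
AND NOT EXISTS (
        SELECT 1
        FROM   pg_shdepend d
        WHERE  d.dbid     = (SELECT oid FROM pg_database
                             WHERE datname = current_database())
        AND    d.classid  = 'pg_largeobject'::regclass
        AND    d.objid    = m.oid
        AND    d.refobjid = m.lomowner);

With the final AND NOT EXISTS block removed, the statement is a plain
insert of one row per large object, which is what the dump-based path
effectively does as well.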


