Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Nathan Bossart
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id 20240401193718.GB2302032@nathanxps13
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:
> Nathan Bossart <nathandbossart@gmail.com> writes:
>> The one design point that worries me a little is the non-configurability of
>> --transaction-size in pg_upgrade.  I think it's fine to default it to 1,000
>> or something, but given how often I've had to fiddle with
>> max_locks_per_transaction, I'm wondering if we might regret hard-coding it.
> 
> Well, we could add a command-line switch to pg_upgrade, but I'm
> unconvinced that it'd be worth the trouble.  I think a very large
> fraction of users invoke pg_upgrade by means of packager-supplied
> scripts that are unlikely to provide a way to pass through such
> a switch.  I'm inclined to say let's leave it as-is until we get
> some actual field requests for a switch.

Okay.  I'll let you know if I see anything.  IIRC, the pg_dump side of
pg_upgrade is usually the one more prone to lock exhaustion, so you may very
well be right that this is unnecessary.
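
As a point of reference (the paths, version numbers, and lock-table size
below are purely illustrative), anyone who does hit lock exhaustion can
already work around it without a new switch by pushing a higher
max_locks_per_transaction into both servers through pg_upgrade's existing
-o/-O options, e.g.:

    pg_upgrade \
      -b /usr/lib/postgresql/16/bin  -B /usr/lib/postgresql/17/bin \
      -d /var/lib/postgresql/16/data -D /var/lib/postgresql/17/data \
      -o '-c max_locks_per_transaction=4096' \
      -O '-c max_locks_per_transaction=4096'

So even without a dedicated pg_upgrade switch, there's a usable escape hatch
if anyone does run into it in the field.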

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com


