On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:
> Nathan Bossart <nathandbossart@gmail.com> writes:
> > The one design point that worries me a little is the non-configurability of
> > --transaction-size in pg_upgrade. I think it's fine to default it to 1,000
> > or something, but given how often I've had to fiddle with
> > max_locks_per_transaction, I'm wondering if we might regret hard-coding it.
>
> Well, we could add a command-line switch to pg_upgrade, but I'm
> unconvinced that it'd be worth the trouble. I think a very large
> fraction of users invoke pg_upgrade by means of packager-supplied
> scripts that are unlikely to provide a way to pass through such
> a switch. I'm inclined to say let's leave it as-is until we get
> some actual field requests for a switch.

I've been importing our schemas and doing upgrade testing, and was
surprised when a postgres backend was killed for OOM during pg_upgrade:
Killed process 989302 (postgres) total-vm:5495648kB, anon-rss:5153292kB, ...
Upgrading from v16 => v16 doesn't use nearly as much RAM.
While tracking down the responsible commit, I reproduced the problem
using a subset of tables: at 959b38d770, the backend process used
~650 MB of RAM, while at its parent commit it used at most ~120 MB.
959b38d770b Invent --transaction-size option for pg_restore.
Changing RESTORE_TRANSACTION_SIZE to 100 brings backend RAM use down to
~180 MB during pg_upgrade, which is reasonable.
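For reference, the change I tested amounts to lowering the hard-coded
constant introduced by that commit (sketch only; the exact file and
surrounding code in pg_upgrade's sources may differ):

```c
/* sketch of the tested change; location within pg_upgrade's sources is approximate */
#define RESTORE_TRANSACTION_SIZE 100	/* was 1000 as of 959b38d770b */
```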
With partitioning, we have a lot of tables, some of them wide (126
partitioned tables, 8942 children, 1019315 columns in total). I didn't
track down whether certain parts of our schema contribute most to the
high backend memory use, just that it's now 5x (while testing a subset)
to 50x higher.
We'd surely prefer that the transaction size be configurable.
--
Justin