Ulf Lohbrügge <ulf.lohbruegge@gmail.com> writes:
> A database cluster of mine (PostgreSQL 12.4 running on Amazon Aurora @
> db.r5.xlarge) contains a single database with 1,656,618 rows in
> pg_class.
Ouch.
> Using pg_dump on that database leads to excessive memory usage
> and sometimes even a kill by signal 9:
> 2021-09-18 16:51:24 UTC::@:[29787]:LOG: Aurora Runtime process (PID 29794)
> was terminated by signal 9: Killed
For the record, Aurora isn't Postgres. It's a heavily modified fork,
with (I imagine) different performance bottlenecks. Likely you
should be asking Amazon support about this before the PG community.
Having said that ...
> The high number of rows in pg_class results from more than ~550 schemata,
> each containing more than 600 tables. It's part of a multi-tenant setup
> where each tenant lives in its own schema.
... you might have some luck dumping each schema separately, or at least
in small groups, using pg_dump's --schema switch.
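For example, something along these lines (a rough sketch only: the
tenant_% naming pattern, the $DB connection variable, and the choice of
custom-format output are assumptions you'd adapt to your setup):

    # Dump each tenant schema to its own custom-format archive.
    for s in $(psql -At -d "$DB" -c \
        "SELECT nspname FROM pg_namespace WHERE nspname LIKE 'tenant_%'")
    do
        pg_dump -d "$DB" --schema="$s" -Fc -f "dump_${s}.dump"
    done

That way each pg_dump invocation only has to track one schema's objects
in memory rather than the whole catalog at once.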
> Is there anything I can do to improve that situation? The next thing that
> comes to mind is to distribute those ~550 schemata over 5 to 6 databases
> in one database cluster instead of having one single database.
Yeah, you definitely don't want to have this many tables in one
database, especially not on a platform that's going to be chary
of memory.
regards, tom lane