Re: Out of memory error during pg_upgrade in big DB with large objects - Mailing list pgsql-admin

From Tom Lane
Subject Re: Out of memory error during pg_upgrade in big DB with large objects
Date
Msg-id 3215516.1669051820@sss.pgh.pa.us
In response to Out of memory error during pg_upgrade in big DB with large objects  (Massimo Ortensi <mortensi@unimaticaspa.it>)
Responses Re: Out of memory error during pg_upgrade in big DB with large objects
List pgsql-admin
Massimo Ortensi <mortensi@unimaticaspa.it> writes:
> I'm trying to upgrade a huge DB from postgres 10 to 14

> This cluster is 70+ TB, with one database having more than 2 billion 
> records in pg_largeobject

> I'm trying pg_upgrade in hard link mode, but the dump of database schema 
> phase always fails with

> pg_dump: error: query failed: out of memory for query result
> pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM 
> pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT 
> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM 

FWIW, this query was rewritten pretty substantially in v15.
It's still going to produce a row per large object, but it
should be a lot narrower because most of the ACL-wrangling
now happens somewhere else.  I don't know if migrating to
v15 instead of v14 is an option for you, and I can't promise
that that'd be enough savings to fix it anyway.  But it's
something to think about.
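
[For readers hitting the same error: the failing query materializes one row per large object on the client side, so a rough way to gauge the result-set size before attempting the upgrade is to count the entries in pg_largeobject_metadata. This is an illustrative pre-flight check, not something pg_dump itself runs.]

-- Hypothetical pre-flight check: pg_dump buffers one row per large
-- object, so the count of pg_largeobject_metadata approximates how
-- many rows the out-of-memory query would return client-side.
SELECT count(*) AS large_object_count
FROM pg_catalog.pg_largeobject_metadata;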

            regards, tom lane


