Re: Upgrade from 11.3 to 13.1 failed with out of memory - Mailing list pgsql-general

From Magnus Hagander
Subject Re: Upgrade from 11.3 to 13.1 failed with out of memory
Msg-id CABUevEzTzWXCkypf77pR-oDUT7RFoTkxqmB7NEAD3JJtLstvqg@mail.gmail.com
In response to Upgrade from 11.3 to 13.1 failed with out of memory  (Mihalidesová Jana <jana.mihalidesova@cetin.cz>)
Responses RE: Upgrade from 11.3 to 13.1 failed with out of memory  (Mihalidesová Jana <jana.mihalidesova@cetin.cz>)
List pgsql-general
On Tue, Apr 6, 2021 at 3:08 PM Mihalidesová Jana
<jana.mihalidesova@cetin.cz> wrote:
>
> Hi,
>
> I have an approximately 560 GB database and am trying to upgrade it from 11.3 to 13.1. I've successfully upgraded the
dev, test and ref environments, but on production pg_dump failed with out of memory. Yes, of course, the dev, test and
ref databases are much, much smaller than the production database.
>
> We are using the OID data type, so there are a lot of large objects. pg_largeobject is 59 GB large.
>
> The upgrade process fails during the schema-only pg_dump, so I'm confused why the 35 GB of RAM that is free on the
server is not enough when no table data is being dumped. When I tried to run the same pg_dump command by hand as during
the upgrade, it fails on the line "pg_dump: reading large objects".
>
>
>
> Creating dump of global objects
>
> "/pgsql/bin/13.1_XXXX/bin/pg_dumpall" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --globals-only
--quote-all-identifiers --binary-upgrade --verbose -f pg_upgrade_dump_globals.sql >> "pg_upgrade_utility.log" 2>&1
>
> ok
>
> Creating dump of database schemas
>
> "/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username XXXXXX --schema-only
--quote-all-identifiers --binary-upgrade --format=custom --verbose --file="pg_upgrade_dump_16384.custom" 'dbname=XXXX'
>> "pg_upgrade_dump_16384.log" 2>&1
>
>
>
> *failure*
>
> There were problems executing ""/pgsql/bin/13.1_XXXX/bin/pg_dump" --host /pgsql/data/XXXX --port 50432 --username
pgpnip --schema-only --quote-all-identifiers --binary-upgrade --format=custom --verbose
--file="pg_upgrade_dump_16384.custom" 'dbname=XXXX' >> "pg_upgrade_dump_16384.log" 2>&1"
>
>
>
>
>
> Do you have any idea how to upgrade the database? This is my upgrade command:
>
>
>
> /pgsql/bin/13.1_XXXX/bin/pg_upgrade -k -b /pgsql/bin/11.3_XXXX/bin -B /pgsql/bin/13.1_XXXX/bin -d /pgsql/data/XXXX -D
/pgsql/data/XXXX/XXXX.new
>


This is unfortunately a known limitation in pg_dump (and therefore, by
proxy, it becomes a problem for pg_upgrade) when you have many large
objects. It doesn't really matter how big they are; it matters how
*many* they are. It takes a long time and uses crazy amounts of
memory, but that's unfortunately where it's at. You'd have the same
problem with a plain dump/reload as well, not just the "binary upgrade
mode".

There's been some recent work on trying to find a remedy for this, but
nothing is available at this point. You'll need to either trim the
number of objects if you can (by maybe manually dumping them out to
files before the restore and then reloading them back in later), or
just add more memory/swap to the machine.
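If it helps, the manual export/reimport could be sketched roughly like this. This is a sketch, not a tested procedure: it assumes a cluster listening on port 50432, a database and backup directory you substitute yourself, and enough disk space for one file per large object. `pg_largeobject_metadata` lists every large object's OID, and psql's `\lo_export` / `\lo_import` meta-commands move the contents client-side.

```shell
# Sketch: export every large object to a file, so they can be dropped
# before the upgrade and re-imported afterwards.
# "mydb", port 50432 and /backup are placeholders for your setup.
psql --port 50432 --dbname mydb --tuples-only --no-align \
     --command "SELECT oid FROM pg_largeobject_metadata" \
| while read loid; do
    # \lo_export runs in the client and writes the object to a file
    psql --port 50432 --dbname mydb \
         --command "\lo_export $loid '/backup/lo_$loid.bin'"
done

# After the upgrade, re-import each file (this assigns NEW OIDs, so any
# table columns referencing the old OIDs must be updated accordingly):
#   psql --port 50432 --dbname mydb --command "\lo_import '/backup/lo_1234.bin'"
```

Note that `\lo_import` returns a new OID rather than restoring the original one, so you would also need to record the old-to-new OID mapping and update the referencing columns.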

Long term you should probably consider switching to using bytea
columns when you have that many objects.
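A possible shape for that migration, as a sketch only: the table name `docs` and column `body` are hypothetical placeholders. `lo_get(oid)` returns the object's contents as bytea, and `lo_unlink(oid)` removes the now-orphaned large object.

```shell
# Sketch: convert a hypothetical OID column "docs.body" to bytea,
# freeing the underlying large objects in the same transaction.
psql --port 50432 --dbname mydb <<'SQL'
BEGIN;
ALTER TABLE docs ADD COLUMN body_new bytea;
UPDATE docs SET body_new = lo_get(body) WHERE body IS NOT NULL;
SELECT lo_unlink(body) FROM docs WHERE body IS NOT NULL;
ALTER TABLE docs DROP COLUMN body;
ALTER TABLE docs RENAME COLUMN body_new TO body;
COMMIT;
SQL
```

Keep in mind this roughly doubles the table's storage during the UPDATE, so for a 59 GB pg_largeobject you'd want to check free disk space first and possibly batch the conversion.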

--
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/


