Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Bruce Momjian
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id 20210320164536.GB7968@momjian.us
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Sat, Mar 20, 2021 at 11:23:19AM -0400, Tom Lane wrote:
> I wonder if pg_dump could improve matters cheaply by aggregating the
> large objects by owner and ACL contents.  That is, do
> 
> select distinct lomowner, lomacl from pg_largeobject_metadata;
> 
> and make just *one* BLOB TOC entry for each result.  Then dump out
> all the matching blobs under that heading.
> 
> A possible objection is that it'd reduce the ability to restore blobs
> selectively, so maybe we'd need to make it optional.
> 
> Of course, that just reduces the memory consumption on the client
> side; it does nothing for the locks.  Can we get away with releasing the
> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?

Well, in pg_upgrade mode you can, since there are no other cluster
users, but you might be asking for general pg_dump usage.
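The aggregation Tom describes could be sketched with two catalog queries (a hypothetical outline, not actual pg_dump code; the parameter placeholders are illustrative):

```sql
-- One dump TOC entry per distinct (owner, ACL) combination,
-- instead of one entry per large object:
SELECT lomowner, lomacl
FROM pg_largeobject_metadata
GROUP BY lomowner, lomacl;

-- Then, for each (owner, acl) pair, list the blobs to dump
-- under that shared heading.  IS NOT DISTINCT FROM handles the
-- common case of a NULL (default) ACL:
SELECT oid
FROM pg_largeobject_metadata
WHERE lomowner = $1
  AND lomacl IS NOT DISTINCT FROM $2;
```

With metadata grouped this way, the client-side memory cost scales with the number of distinct owner/ACL combinations rather than with the number of blobs, which is the saving being discussed; as noted above, it does nothing for the per-object locks.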

-- 
  Bruce Momjian  <bruce@momjian.us>        https://momjian.us
  EDB                                      https://enterprisedb.com

  If only the physical world exists, free will is an illusion.



