From Tom Lane
Subject Re: pg_upgrade failing for 200+ million Large Objects
Msg-id 2495251.1703690301@sss.pgh.pa.us
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Robins Tharakan <tharakan@gmail.com>)
Responses Re: pg_upgrade failing for 200+ million Large Objects
List pgsql-hackers
Robins Tharakan <tharakan@gmail.com> writes:
> Applying all 4 patches, I also see a good performance improvement.
> With more Large Objects, although pg_dump improved significantly,
> pg_restore is now comfortably an order of magnitude faster.

Yeah.  The key thing here is that pg_dump can only parallelize
the data transfer, while (with 0004) pg_restore can parallelize
large object creation and owner-setting as well as data transfer.
I don't see any simple way to improve that on the dump side,
but I'm not sure we need to.  Zillions of empty objects is not
really the use case to worry about.  I suspect that a more realistic
case with moderate amounts of data in the blobs would make pg_dump
look better.
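
For anyone who wants to poke at this themselves, here is a minimal
sketch of the sort of test being discussed.  The database names
(src/dst), the object count, and the job count are arbitrary picks,
not what Robins used:

    # populate a scratch database "src" with 100k empty large objects
    # (lo_create(0) lets the server assign each OID)
    psql -d src -c "SELECT lo_create(0) FROM generate_series(1, 100000);"

    # parallel dump requires directory format; per the above, only the
    # data transfer is parallelized on this side
    pg_dump -Fd -j 8 -f /tmp/src.dir src

    # with 0004, the restore side can additionally parallelize large
    # object creation and owner-setting
    createdb dst
    pg_restore -j 8 -d dst /tmp/src.dir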

            regards, tom lane


