Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From: Kumar, Sachin
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Msg-id: 0643CC11-223A-4039-AC34-94E127462796@amazon.com
In response to: Re: pg_upgrade failing for 200+ million Large Objects (Jacob Champion <jchampion@timescale.com>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers



Hi everyone, I want to continue this thread. I have rebased the patch onto the latest
master and fixed an issue where pg_restore prints to a file.

`
╰─$ pg_restore  dump_small.custom  --restore-blob-batch-size=2 --file=a
--
-- End BLOB restore batch
--
COMMIT;
`
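
For context, this is roughly what one batch looks like in the emitted script with a
batch size of 2. The OIDs and the opening comment are illustrative; only the
"End BLOB restore batch" comment and the COMMIT shown above come from the actual output:

`
--
-- Begin BLOB restore batch
--
BEGIN;
SELECT pg_catalog.lo_create('16384');
SELECT pg_catalog.lo_create('16385');
--
-- End BLOB restore batch
--
COMMIT;
`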

> On 09/11/2023, 17:05, "Jacob Champion" <jchampion@timescale.com> wrote:
> To clarify, I agree that pg_dump should contain the core fix. What I'm
> questioning is the addition of --dump-options to make use of that fix
> from pg_upgrade, since it also lets the user do "exciting" new things
> like --exclude-schema and --include-foreign-data and so on. I don't
> think we should let them do that without a good reason.

The earlier idea was to not expose these options to users and instead add [1]:
   --restore-jobs=NUM             --jobs parameter passed to pg_restore
   --restore-blob-batch-size=NUM  number of blobs restored in one xact
This was later expanded into --dump-options and --restore-options [2]. But with
--restore-options a user can pass things like --exclude-schema, so maybe we
should go back to [1].
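
As a sketch of what the interface from [1] would look like, a hypothetical
pg_upgrade invocation might be (the binary and data directory paths are made up
for illustration):

`
pg_upgrade --old-bindir=/usr/pgsql-15/bin --new-bindir=/usr/pgsql-16/bin \
    --old-datadir=/var/lib/pgsql/15/data --new-datadir=/var/lib/pgsql/16/data \
    --restore-jobs=8 --restore-blob-batch-size=1000
`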

[1] https://www.postgresql.org/message-id/a1e200e6-adde-2561-422b-a166ec084e3b%40wi3ck.info
[2] https://www.postgresql.org/message-id/8d8d3961-8e8b-3dbe-f911-6f418c5fb1d3%40wi3ck.info

Regards
Sachin
Amazon Web Services: https://aws.amazon.com


