Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Jan Wieck
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id 802b96e9-f5e1-015c-dfb9-8756974b11fc@wi3ck.info
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
List pgsql-hackers
On 3/23/21 4:55 PM, Tom Lane wrote:
> Jan Wieck <jan@wi3ck.info> writes:
>> Have we even reached a consensus yet that doing it the way my patch
>> proposes is the right way to go? Like that emitting BLOB TOC entries
>> into SECTION_DATA when in binary upgrade mode is a good thing? Or that
>> bunching all the SQL statements for creating the blob, changing the
>> ACL, the COMMENT, and the SECLABEL into one multi-statement query is?
> 
> Now you're asking for actual review effort, which is a little hard
> to come by towards the tail end of the last CF of a cycle.  I'm
> interested in this topic, but I can't justify spending much time
> on it right now.

Understood.
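
To make the second question above concrete: the bunching means sending the 
per-blob statements as one query string, i.e. one round trip per large 
object instead of one per statement. A rough sketch of the idea follows; 
the OID, role names and comment text are made up for illustration, and a 
SECURITY LABEL command would be appended the same way:

    # Sketch only: one multi-statement query per large object, so the
    # create/owner/ACL/comment commands travel in a single round trip.
    # OID, role names and comment text are invented for illustration.
    psql -d postgres -c "
    SELECT pg_catalog.lo_create(16385);
    ALTER LARGE OBJECT 16385 OWNER TO blob_owner;
    GRANT SELECT ON LARGE OBJECT 16385 TO reader_role;
    COMMENT ON LARGE OBJECT 16385 IS 'example large object';
    "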

In any case, I changed the options so that they behave the same way the 
existing -o and -O (for old/new postmaster options) do. I don't think 
it would be wise to have option forwarding work differently between 
options for the postmaster and options for pg_dump/pg_restore.
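
For comparison, this is how the existing postmaster option forwarding is 
spelled on the command line today; the pg_dump/pg_restore forwarding is 
meant to look and behave the same way (the paths and settings below are 
placeholders, and the new option names are not shown here):

    # Existing pg_upgrade forwarding of options to the old (-o) and new
    # (-O) postmasters; each value is quoted as a single argument.
    # Directories and settings are placeholders for illustration only.
    pg_upgrade \
        -b /usr/pgsql-13/bin  -B /usr/pgsql-14/bin \
        -d /var/lib/pgsql/13/data  -D /var/lib/pgsql/14/data \
        -o "-c max_connections=200" \
        -O "-c max_locks_per_transaction=256"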


Regards, Jan

-- 
Jan Wieck
Principal Database Engineer
Amazon Web Services


