Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Jan Wieck
Subject Re: pg_upgrade failing for 200+ million Large Objects
Msg-id 5bdcb010-ecdd-c69a-b441-68002fc38483@wi3ck.info
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Andrew Dunstan <andrew@dunslane.net>)
Responses Re: pg_upgrade failing for 200+ million Large Objects  (Andrew Dunstan <andrew@dunslane.net>)
List pgsql-hackers
On 3/21/21 7:47 AM, Andrew Dunstan wrote:
> One possible (probable?) source is the JDBC driver, which currently
> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>

You mean the user is using OID columns pointing to large objects and the 
JDBC driver is mapping those for streaming operations?

Yeah, that would explain a lot.
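To make the suspected pattern concrete: the pgjdbc driver implements `java.sql.Blob` on top of PostgreSQL server-side large objects, storing an `oid` reference in the table and streaming the bytes through the large-object API. So every Blob a JDBC application stores becomes one entry that pg_upgrade must migrate. The sketch below is illustrative only — the table and column names are hypothetical, and it assumes a reachable PostgreSQL server with the pgjdbc driver on the classpath:

```java
import java.sql.*;
import javax.sql.rowset.serial.SerialBlob;

public class BlobAsLargeObject {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL; adjust for your environment.
        String url = "jdbc:postgresql://localhost/test";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false); // large-object access requires a transaction

            // pgjdbc maps BLOB onto an "oid" column; the column holds a
            // reference into pg_largeobject rather than the bytes themselves.
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS docs (id int PRIMARY KEY, body oid)");
            }

            // Each setBlob() stores a new server-side large object --
            // with 200+ million rows, that is 200+ million LOs for
            // pg_upgrade to carry across.
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO docs VALUES (1, ?)")) {
                ps.setBlob(1, new SerialBlob("hello".getBytes()));
                ps.executeUpdate();
            }

            // Reading back also streams through the large-object API.
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT body FROM docs WHERE id = 1");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                Blob b = rs.getBlob(1); // backed by lo_open/lo_read on the server
                System.out.println(new String(b.getBytes(1, (int) b.length())));
            }
            conn.rollback(); // demo only; discard the test data
        }
    }
}
```

An application using this API at scale would accumulate large objects invisibly, which matches the upgrade symptom described in the thread.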


Thanks, Jan

-- 
Jan Wieck
Principal Database Engineer
Amazon Web Services


