Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id ee7d96b8-7b0e-bb76-9724-900606efe69a@dunslane.net
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
Responses Re: pg_upgrade failing for 200+ million Large Objects  (Zhihong Yu <zyu@yugabyte.com>)
List pgsql-hackers
On 3/21/21 12:56 PM, Jan Wieck wrote:
> On 3/21/21 7:47 AM, Andrew Dunstan wrote:
>> One possible (probable?) source is the JDBC driver, which currently
>> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
>> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>
>
> You mean the user is using OID columns pointing to large objects and
> the JDBC driver is mapping those for streaming operations?
>
> Yeah, that would explain a lot.
>
>
>


Probably in most cases the database is designed by Hibernate, and the
front-end programmers know nothing at all of OIDs or LOs; they just ask
for, and get, a Blob.
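(For readers following along: on PostgreSQL, a Hibernate @Lob mapping typically produces a column of type oid, and pgjdbc resolves that OID through the server's large-object functions when the application reads the Blob. A minimal sketch of that access pattern; the table name "document" and column "content" are hypothetical, not from this thread:

```java
// Sketch of how an application reads a Blob that pgjdbc backs with a
// large object referenced by an oid column. Hypothetical schema:
//   CREATE TABLE document (id bigint PRIMARY KEY, content oid);
import java.io.InputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobReadSketch {

    // The query the application (or Hibernate) effectively issues.
    static String selectSql(String table, String column) {
        return "SELECT " + column + " FROM " + table + " WHERE id = ?";
    }

    // Large-object access must happen inside a transaction, so
    // autocommit is turned off before streaming the Blob's contents.
    static void readBlob(Connection conn, long id) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement(selectSql("document", "content"))) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Blob blob = rs.getBlob(1);        // column holds an oid
                    try (InputStream in = blob.getBinaryStream()) {
                        in.transferTo(System.out);    // stream, don't materialize
                    }
                    blob.free();
                }
            }
        }
        conn.commit();
    }
}
```

The point of the thread is that every such Blob is a separate pg_largeobject entry, which is how applications that never mention OIDs end up with hundreds of millions of large objects for pg_upgrade to migrate.)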


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com



