Re: pg_upgrade failing for 200+ million Large Objects - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id c2a43a97-e551-ea6d-7a4f-a4709b4e0cbd@dunslane.net
In response to Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
Responses Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
List pgsql-hackers
On 3/20/21 12:55 PM, Jan Wieck wrote:
> On 3/20/21 11:23 AM, Tom Lane wrote:
>> Jan Wieck <jan@wi3ck.info> writes:
>>> All that aside, the entire approach doesn't scale.
>>
>> Yeah, agreed.  When we gave large objects individual ownership and ACL
>> info, it was argued that pg_dump could afford to treat each one as a
>> separate TOC entry because "you wouldn't have that many of them, if
>> they're large".  The limits of that approach were obvious even at the
>> time, and I think now we're starting to see people for whom it really
>> doesn't work.
>
> It actually looks more like some users have millions of "small
> objects". I am still wondering where that is coming from and why they
> are abusing LOs in that way, but that is more out of curiosity. Fact
> is that they are out there and that they cannot upgrade from their 9.5
> databases, which are now past EOL.
>

One possible (probable?) source is the JDBC driver, which currently
treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>
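
As an aside, for anyone wondering whether their own database falls into this category, counting rows in the pg_largeobject_metadata catalog (one row per large object since 9.0) gives a quick answer:

```sql
-- One row per large object; a count in the millions means the
-- one-TOC-entry-per-LO behavior of pg_dump will hurt at upgrade time.
SELECT count(*) FROM pg_largeobject_metadata;
```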


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com



