Re: Large objects. - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Large objects.
Date:
Msg-id: AANLkTinr5s-jKyESwAbX5qW9-Oh6WWUdZZODFNeKw0Kc@mail.gmail.com
In response to: Re: Large objects.  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Large objects.  (Dmitriy Igrishin <dmitigr@gmail.com>)
List: pgsql-hackers
On Mon, Sep 27, 2010 at 10:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> According to the documentation, the maximum size of a large object is
>> 2 GB, which may be the reason for this behavior.
>
> In principle, since pg_largeobject stores an integer pageno, we could
> support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
> incompatible change in on-disk format.  This'd require converting a lot
> of the internal LO access logic to track positions as int64 not int32,
> but now that we require platforms to have working int64 that's no big
> drawback.  The main practical problem is that the existing lo_seek and
> lo_tell APIs use int32 positions.  I'm not sure if there's any cleaner
> way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
> and have the existing ones throw error if asked to deal with positions
> past 2^31.
>
> In the particular case here, I think that lo_write may actually be
> writing past the 2GB boundary, while the coding in lo_read is a bit
> different and stops at the 2GB "limit".

Ouch.  Letting people write data somewhere they can't read it back from
seems double-plus ungood.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

