Thread: Practical usage of large objects.

Practical usage of large objects.

From: Dmitry Igrishin
Hello all,

As you know, PostgreSQL has a large objects facility [1]. I'm curious
whether there are real systems that use this feature. I'm asking because
I'm in doubt whether the Pgfe driver [2] should provide a convenient API
for working with large objects or not.

Thanks!

[1] https://www.postgresql.org/docs/12/largeobjects.html
[2] https://github.com/dmitigr/pgfe



Re: Practical usage of large objects.

From: Tom Lane
Dmitry Igrishin <dmitigr@gmail.com> writes:
> As you know, PostgreSQL has a large objects facility [1]. I'm curious
> whether there are real systems that use this feature.

We get questions about it regularly, so yeah people use it.

            regards, tom lane



Re: Practical usage of large objects.

From: Ron
Our databases use bytea instead.  (I don't know why the application vendor 
decided on that.)

On 5/13/20 12:53 PM, Dmitry Igrishin wrote:
> Hello all,
>
> As you know, PostgreSQL has a large objects facility [1]. I'm curious
> whether there are real systems that use this feature. I'm asking because
> I'm in doubt whether the Pgfe driver [2] should provide a convenient API
> for working with large objects or not.
>
> Thanks!
>
> [1] https://www.postgresql.org/docs/12/largeobjects.html
> [2] https://github.com/dmitigr/pgfe
>
>

-- 
Angular momentum makes the world go 'round.



Re: Practical usage of large objects.

From: Michael Paquier
On Wed, May 13, 2020 at 01:55:48PM -0400, Tom Lane wrote:
> Dmitry Igrishin <dmitigr@gmail.com> writes:
>> As you know, PostgreSQL has a large objects facility [1]. I'm curious
>> whether there are real systems that use this feature.
>
> We get questions about it regularly, so yeah people use it.

I recall that some applications where I work make use of it for some
rather large log-like data.  At the end of the day, it really boils
down to if you wish to store blobs of data which are larger than 1GB,
the limit for toasted fields, as LOs can be up to 4TB.  Also, updating
or reading a LO can be much cheaper than a toasted field, as the
latter would update/read the value as a whole.
--
Michael
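
The partial access Michael describes is easy to see from ordinary client code. Below is a minimal sketch, not from the thread, using the standard pgjdbc LargeObject API (the equivalent libpq calls are lo_open, lo_lseek64 and lo_read); the connection details and the OID are placeholders. Only the requested slice travels over the wire, no matter how big the object is.

import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class PartialLoRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {  // placeholders
            conn.setAutoCommit(false);               // the LO API must run inside a transaction

            LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
            long oid = 16400L;                       // OID of an existing large object (placeholder)

            LargeObject lo = lom.open(oid, LargeObjectManager.READ);
            try {
                // Jump 5 GB into the object and fetch a single 64 kB slice;
                // only that slice is transferred (64-bit offsets need PostgreSQL 9.3+).
                lo.seek64(5L * 1024 * 1024 * 1024, LargeObject.SEEK_SET);
                byte[] chunk = lo.read(64 * 1024);
                System.out.println("read " + chunk.length + " bytes");
            } finally {
                lo.close();
            }
            conn.commit();
        }
    }
}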


Re: Practical usage of large objects.

From: Andreas Joseph Krogh
On Wednesday, 13 May 2020 at 19:53:38, Dmitry Igrishin <dmitigr@gmail.com> wrote:
Hello all,

As you know, PostgreSQL has a large objects facility [1]. I'm curious
whether there are real systems that use this feature. I'm asking because
I'm in doubt whether the Pgfe driver [2] should provide a convenient API
for working with large objects or not.

Thanks!
 
Yeah, we use LOs, because bytea over JDBC really doesn't stream (at least with the pgjdbc-ng driver). When retrieving bytea via JDBC you get an InputStream, but it's backed by an in-memory byte[]. With LOs and java.sql.Blob (which the standard pgjdbc driver doesn't support, but pgjdbc-ng does) it actually uses streams, and memory consumption is kept to a minimum.
 
--
Andreas Joseph Krogh
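
To make the contrast Andreas describes concrete, here is a sketch using the standard pgjdbc driver rather than pgjdbc-ng and java.sql.Blob; the table and column names (docs_bytea.payload, docs_lo.payload_oid), the output path and the connection details are assumptions.

import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class StreamVsBytea {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {  // placeholders

            // bytea: the driver fetches the whole column value with the row,
            // so even getBinaryStream() is backed by one big in-memory byte[].
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT payload FROM docs_bytea WHERE id = ?")) {
                ps.setLong(1, 1L);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    byte[] whole = rs.getBytes(1);   // entire value on the client heap
                    System.out.println("bytea bytes: " + whole.length);
                }
            }

            // large object: read it a buffer at a time through the LO API.
            conn.setAutoCommit(false);               // required for large object calls
            LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT payload_oid FROM docs_lo WHERE id = ?")) {
                ps.setLong(1, 1L);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    LargeObject lo = lom.open(rs.getLong(1), LargeObjectManager.READ);
                    try (FileOutputStream out = new FileOutputStream("/tmp/payload.bin")) {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = lo.read(buf, 0, buf.length)) > 0) {
                            out.write(buf, 0, n);    // never more than 8 kB in memory at once
                        }
                    } finally {
                        lo.close();
                    }
                }
            }
            conn.commit();
        }
    }
}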

Re: Practical usage of large objects.

From: Laurenz Albe
On Thu, 2020-05-14 at 12:59 +0900, Michael Paquier wrote:
> On Wed, May 13, 2020 at 01:55:48PM -0400, Tom Lane wrote:
> > Dmitry Igrishin <dmitigr@gmail.com> writes:
> > > As you know, PostgreSQL has a large objects facility [1]. I'm curious
> > > whether there are real systems that use this feature.
> > 
> > We get questions about it regularly, so yeah people use it.
> 
> I recall that some applications where I work make use of it for some
> rather large log-like data.  At the end of the day, it really boils
> down to if you wish to store blobs of data which are larger than 1GB,
> the limit for toasted fields, as LOs can be up to 4TB.  Also, updating
> or reading a LO can be much cheaper than a toasted field, as the
> latter would update/read the value as a whole.

Interesting; only recently I played with that a little and found that
that is not necessarily true:

https://www.cybertec-postgresql.com/en/binary-data-performance-in-postgresql/

Yours,
Laurenz Albe




Re: Practical usage of large objects.

From: Thomas Markus
On 14 May 2020 at 15:36, Laurenz Albe wrote:
> Interesting; only recently I played with that a little and found that
> that is not necessarily true:
>
> https://www.cybertec-postgresql.com/en/binary-data-performance-in-postgresql/
>
> Yours,
> Laurenz Albe
We used LOs a lot in a project for large uploads (>4GB files). Really
useful in a cloud environment.

I was interested in a speed comparison myself and made a similar test over a
network connection and without any PostgreSQL-specific code:
https://github.com/5UtJAjiRWj1q/psql-lob-performance

File access is really fast and LO access is much slower than bytea (as
expected). But the content size limit and the memory consumption of bytea
are problematic.

regards
Thomas
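
To round off the upload case Thomas mentions, here is a sketch (pgjdbc again; the file path, table name and connection details are assumptions) that streams a file into a large object in 1 MB chunks, the kind of workload the 1 GB bytea limit rules out for files of that size.

import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoUpload {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {  // placeholders
            conn.setAutoCommit(false);               // large object calls need a transaction

            LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
            long oid = lom.createLO(LargeObjectManager.READWRITE);

            LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
            try (InputStream in = new FileInputStream("/data/upload.iso")) {  // placeholder path
                byte[] buf = new byte[1 << 20];      // 1 MB chunks, so heap use stays flat
                int n;
                while ((n = in.read(buf)) != -1) {
                    lo.write(buf, 0, n);
                }
            } finally {
                lo.close();
            }

            // Keep track of which object belongs to which row (table name assumed).
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO uploads (name, payload_oid) VALUES (?, ?)")) {
                ps.setString(1, "upload.iso");
                ps.setLong(2, oid);
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}

One caveat with this pattern: deleting the referencing row does not remove the object itself, so orphans have to be cleaned up with lo_unlink(), the lo_manage trigger from the contrib lo module, or vacuumlo.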