Hi,
On 08/04/2020 at 12:12, Donato Marrazzo wrote:
> Hi Laurenz,
> thank you for your reply.
> Are you aware of any performance drawback?
We had a customer with millions of small Large Objects, in part because
their application forgot to unlink them.
As a consequence, pg_dump used huge amounts of memory, making a
backup impossible. This was with PG 9.5; I don't think the situation
has improved since.
> Il giorno mer 8 apr 2020 alle ore 12:06 Laurenz Albe
> <laurenz.albe@cybertec.at <mailto:laurenz.albe@cybertec.at>> ha scritto:
...
> > I'm working on a use case where there are many tables with blobs
> (on average not so large, about 32 KB).
> > I foresee that in a 2-3 year time frame, the limit on the overall
> number of blobs will be breached: more than 2^32 blobs.
> > - Is there a way to change the OID limit?
> > - Should we switch to a bytea implementation?
> > - Are there any drawbacks of bytea apart from the maximum size?
> Don't use large objects. They are only useful if
> 1) you have files larger than 1GB or
> 2) you need to stream writes
>
> There are no such limitations if you use the "bytea" data type, and
> it is much simpler to handle at the same time.
+1
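
For reference, a minimal sketch of the bytea approach (table and
column names are illustrative, not from the thread):

```sql
-- Store ~32 KB blobs inline as bytea instead of as Large Objects.
-- No lo_create()/lo_unlink() bookkeeping is needed: the blob is
-- deleted with its row, and pg_dump handles it as ordinary table data.
CREATE TABLE documents (
    id      bigserial PRIMARY KEY,
    name    text NOT NULL,
    content bytea NOT NULL   -- up to 1 GB per value
);

INSERT INTO documents (name, content)
VALUES ('report.pdf', decode('deadbeef', 'hex'));

SELECT name, octet_length(content) AS bytes FROM documents;
```

At 32 KB per value, such blobs are also TOASTed automatically, so the
main table stays compact without any application-side effort.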
--
Christophe Courtois
Consultant Dalibo
https://dalibo.com/