Sridhar N Bamandlapally wrote:
> due to size limitation BYTEA was not considered
For a custom table, you could adopt the same kind of structure that
pg_largeobject has, that is, an ordered series of BYTEA chunks:
# \d pg_largeobject
   Table "pg_catalog.pg_largeobject"
 Column |  Type   | Modifiers
--------+---------+-----------
 loid   | oid     | not null
 pageno | integer | not null
 data   | bytea   |
Say you create a table like this (the name is just an example):

  CREATE TABLE blobs (
    object_id integer not null,
    pageno    integer not null,
    data      bytea,
    UNIQUE (object_id, pageno)  -- the unique index on (object_id, pageno)
  );

with octet_length(data) never exceeding a reasonable maximum chunk
size, such as 1MB. A 5GB object then takes just 5*1024 rows in that
table, one row per pageno.
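
For instance, the write side could look like this (a sketch only,
reusing the example table and the 1MB chunk size from above):

  -- the client cuts the file into 1MB chunks and inserts
  -- one row per chunk, numbering pages from 0
  INSERT INTO blobs(object_id, pageno, data)
    VALUES (12345, 0, $1);   -- $1 = first 1MB of the file
  -- then (12345, 1, ...), (12345, 2, ...), and so on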
It's really a good plan if your client code cooperates by streaming
the contents in pageno order instead of handling the blob as a
monolith.
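
For instance, the read side could stream through a cursor, so that
the client never holds more than a few chunks in memory at a time
(a sketch, again assuming the hypothetical blobs table above):

  BEGIN;
  DECLARE blob_cur CURSOR FOR
    SELECT data FROM blobs WHERE object_id = 12345 ORDER BY pageno;
  -- repeat until no more rows come back
  FETCH 10 FROM blob_cur;
  COMMIT;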
About the chunk size: by comparison, the large object facility limits
pg_largeobject.data to a quarter of a page, or 2048 bytes per row
(= LOBLKSIZE; see the comments around
http://doxygen.postgresql.org/large__object_8h_source.html#l00072 )
Having your own table has several advantages:
- it contains far fewer rows for the same contents, if the chosen
  chunk size is much larger than 2048 bytes.
- TOAST storage is enabled, so the main relation is much smaller.
- it can be partitioned (see the sketch after this list).
- it can have triggers (good for custom replication).
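
For instance, on a PostgreSQL version with declarative hash
partitioning (11 and later), the table could be declared like this
instead of the plain declaration above (a sketch only; older versions
can achieve the same with inheritance):

  CREATE TABLE blobs (
    object_id integer not null,
    pageno    integer not null,
    data      bytea,
    UNIQUE (object_id, pageno)  -- must include the partition key
  ) PARTITION BY HASH (object_id);

  CREATE TABLE blobs_p0 PARTITION OF blobs
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
  CREATE TABLE blobs_p1 PARTITION OF blobs
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
  CREATE TABLE blobs_p2 PARTITION OF blobs
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
  CREATE TABLE blobs_p3 PARTITION OF blobs
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);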
The drawback is that your application has to provide equivalents for
whichever lo_* client-side and server-side functions it needs. But
that's relatively easy work for a programmer, especially if the blobs
happen to be immutable, as is often the case.
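
For instance, here's a sketch of a server-side equivalent of
lo_get(loid, offset, length), assuming the blobs table above with a
fixed chunk size of 1MB for every page (the names and the chunk size
are illustrative, not a reference implementation):

  CREATE FUNCTION blob_read(p_object_id int, p_offset bigint, p_len int)
  RETURNS bytea LANGUAGE sql STABLE AS $$
    SELECT string_agg(
             substring(data
               -- 1-based position of the requested range in this chunk
               FROM (greatest(p_offset - pageno * 1048576::bigint, 0)
                     + 1)::int
               -- number of bytes of this chunk inside the range
               FOR (least((pageno + 1) * 1048576::bigint, p_offset + p_len)
                    - greatest(pageno * 1048576::bigint, p_offset))::int),
             ''::bytea ORDER BY pageno)
    FROM blobs
    WHERE object_id = p_object_id
      -- only the pages overlapping the requested byte range
      AND pageno BETWEEN (p_offset / 1048576)::int
                     AND ((p_offset + p_len - 1) / 1048576)::int;
  $$;

  -- e.g. read 200000 bytes starting at byte offset 1000000:
  -- SELECT blob_read(12345, 1000000, 200000);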
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite