Is there any way to change the segment file size? I am trying to investigate how the segment file size relates to the bytea size limitation in a PostgreSQL installation.
step 1: ./configure --enable-largefile --with-segsize (fails with the error: "configure: error: Large file support is not enabled. Segment size cannot be larger than 1GB")
You could adopt for a custom table the same kind of structure that pg_largeobject has, that is, an ordered series of BYTEA chunks.
# \d pg_largeobject
      Table "pg_catalog.pg_largeobject"
 Column |  Type   | Modifiers
--------+---------+-----------
 loid   | oid     | not null
 pageno | integer | not null
 data   | bytea   |
Say you create a table looking like this:

  (object_id integer, pageno integer, data bytea)

with a unique index on (object_id, pageno), and with octet_length(data) never exceeding a reasonable maximum, such as 1MB. Then a 5GB object is just 5*1024 rows in that table, one row per pageno.
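For illustration, here is a minimal sketch of the chunking step in Python; the 1MB page size is the example maximum from above, and the helper name is hypothetical, not part of any PostgreSQL API:

```python
# Split a blob into fixed-size pages, one per row of the chunk table.
# CHUNK_SIZE = 1MB is the example maximum discussed above; adjust to taste.
CHUNK_SIZE = 1024 * 1024

def split_into_pages(blob: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (pageno, data) pairs, pageno starting at 0."""
    n_pages = (len(blob) + chunk_size - 1) // chunk_size  # ceiling division
    for pageno in range(n_pages):
        yield pageno, blob[pageno * chunk_size:(pageno + 1) * chunk_size]
```

With a 1MB chunk size, a 5GB blob yields exactly 5*1024 pages, matching the row count above.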
It's really a good plan if your client code cooperates by streaming contents ordered by pageno instead of handling the blob as a monolith.
Having your own table has several advantages:
- it contains far fewer rows for the same contents, if the chosen chunk size is much larger than 2048 bytes.
- TOAST storage is enabled, so the main relation is much smaller.
- it can be partitioned.
- it can have triggers (good for custom replication).
The drawback is that your application has to provide the equivalent of the lo_* client-side and server-side functions it needs. But that's relatively easy work for a programmer, especially if the blobs happen to be immutable, as is often the case.
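As a sketch of how small that equivalent code can be, a client-side lo_write replacement is little more than a loop issuing parameterized INSERTs. The table and column names below are the hypothetical ones from above, and the sketch just produces (sql, params) pairs to hand to whatever database driver you use:

```python
CHUNK_SIZE = 1024 * 1024  # example page size from the text

def write_statements(object_id: int, blob: bytes, chunk_size: int = CHUNK_SIZE):
    """Return (sql, params) pairs that store `blob` as ordered bytea chunks."""
    sql = "INSERT INTO my_chunks (object_id, pageno, data) VALUES (%s, %s, %s)"
    return [
        (sql, (object_id, pageno, blob[off:off + chunk_size]))
        for pageno, off in enumerate(range(0, len(blob), chunk_size))
    ]
```

A read replacement is the mirror image: one SELECT ordered by pageno, concatenating (or streaming) the data column.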
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite