On Jan 18 09:00, Eric Davies wrote:
> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow, because each blob was being chopped
> into 2K chunks stored as rows in another table.
> However, it has occurred to us that if it were possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile-time constant or a run-time configuration entry
> that accomplishes this?
include/storage/large_object.h:64: #define LOBLKSIZE (BLCKSZ / 4)
include/pg_config_manual.h:26: #define BLCKSZ 8192
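
So the chunk size is LOBLKSIZE, which defaults to BLCKSZ/4 = 2048 bytes;
that's where your 2K rows come from. Both are compile-time constants, not
run-time settings, so raising the chunk size means editing the headers,
rebuilding, and re-running initdb (the on-disk format changes). Below is a
rough, untested sketch of what a larger-chunk build might look like; the 32K
page ceiling and the choice of divisor are my assumptions, not something I've
benchmarked:

    /* include/pg_config_manual.h -- raise the page size to 32K,
     * which I believe is the largest block size PostgreSQL supports */
    #define BLCKSZ 32768

    /* include/storage/large_object.h -- with the stock /4 divisor the
     * above gives 8K chunks; halving the divisor gives 16K.  A full
     * 32K chunk probably won't fit on a 32K page once tuple overhead
     * is added, so I wouldn't go past BLCKSZ / 2 without testing. */
    #define LOBLKSIZE (BLCKSZ / 2)

Whether 16K chunks actually recover your lost throughput is something you'd
have to measure, but that's where the knob lives.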
HTH.
Regards.