Thread: bigger blob rows?
Back in the days of 7.4.2, we tried storing large blobs (1GB+) in postgres but found them too slow because the blob was being chopped into 2K rows stored in some other table.
However, it has occurred to us that if it were possible to configure the server to split blobs into bigger pieces, say 32K, our speed problems might diminish correspondingly.
Is there a compile-time constant or a run-time configuration entry that accomplishes this?
Thank you.
**********************************************
Eric Davies, M.Sc.
Barrodale Computing Services Ltd.
Tel: (250) 472-4372 Fax: (250) 472-4373
Web: http://www.barrodale.com
Email: eric@barrodale.com
**********************************************
Mailing Address:
P.O. Box 3075 STN CSC
Victoria BC Canada V8W 3W2
Shipping Address:
Hut R, McKenzie Avenue
University of Victoria
Victoria BC Canada V8W 3W2
**********************************************
Eric Davies <Eric@barrodale.com> writes:
> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow because the blob was being chopped
> into 2K rows stored in some other table.
> However, it has occurred to us that if it was possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile time constant or a run time configuration entry
> that accomplish this?

I *think* the limit would be 8k (the size of a PG page) even if you
could change it. Upping that would require recompiling with PAGE_SIZE
set larger, which would have a lot of other consequences.

-Doug
On Jan 18 09:00, Eric Davies wrote:
> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow because the blob was being chopped
> into 2K rows stored in some other table.
> However, it has occurred to us that if it was possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile time constant or a run time configuration entry
> that accomplish this?

include/storage/large_object.h:64:
#define LOBLKSIZE (BLCKSZ / 4)

include/pg_config_manual.h:26:
#define BLCKSZ 8192

HTH. Regards.