Back when we were running 7.4.2, we tried storing large blobs (1GB+) in Postgres, but found access too slow because each blob was being chopped into 2K chunks stored as rows in another table.
However, it has occurred to us that if it were possible to configure the server to split blobs into bigger pieces, say 32K, our speed problem might diminish correspondingly.
Is there a compile-time constant or a run-time configuration entry that accomplishes this?
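
For what it's worth, here is our understanding of where the 2K figure comes from, written up as a small back-of-the-envelope C program. We are assuming the blob data goes through the large-object interface (pg_largeobject) and that the chunk size is the compile-time constant LOBLKSIZE from src/include/storage/large_object.h; please correct us if either assumption is wrong.

/* Our assumption, not verified against the 7.4 source tree: each row of
 * pg_largeobject holds LOBLKSIZE bytes of blob data, and LOBLKSIZE is
 * defined as a quarter of the server block size.  This just shows the
 * arithmetic that would produce the 2K rows we observed. */
#include <stdio.h>

#define BLCKSZ    8192              /* default server block size */
#define LOBLKSIZE (BLCKSZ / 4)      /* assumed bytes of blob data per row */

int main(void)
{
    long blob_bytes = 1L * 1024 * 1024 * 1024;   /* a 1GB blob */

    printf("chunk size: %d bytes\n", LOBLKSIZE);              /* 2048 */
    printf("rows per 1GB blob: %ld\n", blob_bytes / LOBLKSIZE);
    return 0;
}

If that is indeed the right constant, we presume we would also have to rebuild with a larger BLCKSZ so that a 32K chunk could fit in a single block, which is part of why we are asking whether this is a supported thing to change.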
Thank you.
**********************************************
Eric Davies, M.Sc.
Barrodale Computing Services Ltd.
Tel: (250) 472-4372 Fax: (250) 472-4373
Web: http://www.barrodale.com
Email: eric@barrodale.com
**********************************************
Mailing Address:
P.O. Box 3075 STN CSC
Victoria BC Canada V8W 3W2
Shipping Address:
Hut R, McKenzie Avenue
University of Victoria
Victoria BC Canada V8W 3W2
**********************************************