overhead of "small" large objects - Mailing list pgsql-general

From: Philip Crotwell
Subject: overhead of "small" large objects
Msg-id: Pine.GSO.4.10.10012101404140.4870-100000@tigger.seis.sc.edu
Responses: Re: overhead of "small" large objects (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Hi

I'm putting lots of small (~10 KB) chunks of binary seismic data into large
objects in Postgres 7.0.2: basically just arrays of 2500 or so ints, each
representing about a minute's worth of data. I'm inserting data at a rate of
about 1.5 MB per hour, but the database's disk usage is growing at about
6 MB per hour! A factor of four seems a bit excessive.
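
For concreteness, here is roughly the shape of my insert path, sketched
with libpq's large-object calls. The "chunks" metadata table, its columns,
and the 'BHZ' channel name are made up for illustration, and error
checking is omitted:

    #include <stdio.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    /* Store one ~1-minute chunk (~2500 ints, ~10 KB) as its own large
     * object and record its OID in a metadata table. */
    static Oid store_chunk(PGconn *conn, int *samples, int nsamples)
    {
        char sql[256];
        Oid lobj;
        int fd;

        /* Large-object operations must run inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));

        lobj = lo_creat(conn, INV_READ | INV_WRITE);  /* new large object */
        fd = lo_open(conn, lobj, INV_WRITE);
        /* Write the raw samples (host byte order) as one ~10 KB blob. */
        lo_write(conn, fd, (char *) samples, nsamples * sizeof(int));
        lo_close(conn, fd);

        /* Remember the OID so the chunk can be found again later. */
        snprintf(sql, sizeof(sql),
                 "INSERT INTO chunks (channel, start_time, data_oid) "
                 "VALUES ('BHZ', now(), %u)", (unsigned int) lobj);
        PQclear(PQexec(conn, sql));

        PQclear(PQexec(conn, "COMMIT"));
        return lobj;
    }

    int main(void)
    {
        int samples[2500] = {0};    /* one minute of dummy data */
        PGconn *conn = PQconnectdb("dbname=seismic");

        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        store_chunk(conn, samples, 2500);
        PQfinish(conn);
        return 0;
    }

So every minute of data becomes a brand-new large object, which means any
per-object overhead is paid once per chunk.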

Is there significant overhead involved in using large objects that aren't
very large?

What might I be doing wrong?

Is there a better way to store these chunks?

thanks,
Philip


