Jeremy Andrus <jeremy@jeremya.com> writes:
> I have a database that contains a large number of Large Objects
> (>500MB). I am using this database to store images for an e-commerce
> website, so I have a simple accessor script written in Perl to dump out
> a blob based on a virtual 'path' stored in a table (and associated with
> the large object's OID). This system seemed to work wonderfully until I
> put more than ~500MB of binary data into the database.
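(For concreteness, the setup described amounts to something like the
sketch below. The table, column, and connection details are invented
for illustration, not taken from the post:)

    #   CREATE TABLE images (path text PRIMARY KEY, img oid);
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=shop', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 1 });
    # Resolve the virtual path to the large object's OID.
    my ($loid) = $dbh->selectrow_array(
        'SELECT img FROM images WHERE path = ?', undef, $ARGV[0]);
    die "no image at path $ARGV[0]\n" unless defined $loid;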
Are you talking about 500MB in one BLOB, or 500MB total?
If the former, I can well imagine swap thrashing being a problem when
you try to access such a large blob, especially if the accessor script
reads the entire object into memory before sending any of it.
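One way around that is to read and emit the object in fixed-size
chunks, so memory use stays flat no matter how big the object is. A
minimal sketch, again with invented names, assuming the DBD::Pg
large-object interface (the pg_lo_* calls must run inside a
transaction, and the read mode comes from the pg_INV_READ handle
attribute):

    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=shop', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });
    my ($loid) = $dbh->selectrow_array(
        'SELECT img FROM images WHERE path = ?', undef, $ARGV[0]);

    binmode STDOUT;    # raw image bytes; avoid newline translation
    my $fd = $dbh->pg_lo_open($loid, $dbh->{pg_INV_READ});
    my $buf;
    # 16kB per read: only one chunk is ever held in memory at a time.
    while ($dbh->pg_lo_read($fd, $buf, 16384) > 0) {
        print $buf;
    }
    $dbh->pg_lo_close($fd);
    $dbh->commit;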
If the latter, I can't think of any reason for total blob storage to
cause any big performance issue. Perhaps you just haven't vacuumed
pg_largeobject in a long time? All large object data lives in that one
system table, and the dead rows left behind by updates and deletions
will bloat it until it's vacuumed.
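The vacuum itself is a one-liner; note that VACUUM cannot run inside a
transaction block, so AutoCommit has to be on (same invented connection
details as above):

    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=shop', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 1 });
    # pg_largeobject is the system table holding all large object data.
    $dbh->do('VACUUM ANALYZE pg_largeobject');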
regards, tom lane