Thread: pgsql BLOB issues

pgsql BLOB issues

From: Jeremy Andrus
Hello,

  I have a database that contains a large number of Large Objects
(>500MB). I am using this database to store images for an e-commerce
website, so I have a simple accessor script written in perl to dump out
a blob based on a virtual 'path' stored in a table (and associated with
the large object's OID). This system seemed to work wonderfully until I
put more than ~500MB of binary data into the database.
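
  A stripped-down sketch of such an accessor (not the actual script; it
assumes DBD::Pg's large-object interface and a made-up
image_paths(path, blob_oid) mapping table, with database and user names
invented for the example) would look something like this:

    #!/usr/bin/perl
    # Look up a large object's OID by virtual 'path' and stream it to stdout.
    use strict;
    use DBI;

    my $path = shift or die "usage: $0 <virtual-path>\n";

    # Large-object calls must run inside a transaction, so AutoCommit is off.
    my $dbh = DBI->connect('dbi:Pg:dbname=images', 'www', '',
                           { AutoCommit => 0, RaiseError => 1 });

    # Map the virtual 'path' to the large object's OID (table name assumed).
    my ($oid) = $dbh->selectrow_array(
        'SELECT blob_oid FROM image_paths WHERE path = ?', undef, $path);
    die "no such path: $path\n" unless defined $oid;

    my $fd = $dbh->func($oid, $dbh->{pg_INV_READ}, 'lo_open');
    binmode STDOUT;

    # Read the blob in chunks and write it straight to stdout.
    my $buf = '';
    while ((my $n = $dbh->func($fd, $buf, 16384, 'lo_read')) > 0) {
        print substr($buf, 0, $n);
    }
    $dbh->func($fd, 'lo_close');
    $dbh->commit;
    $dbh->disconnect;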

  Now, every time I run the accessor script (via the web OR the command
line), the postmaster process gobbles up my CPU resources (usually >30%
for a single process - and it's a 1GHz processor with 1GB of RAM!), and
the script takes a very long time to completely dump out the data.

  I have the same issue with an import script that reads files from the
hard drive and puts them into Large Objects in the database. It now takes
a very long time to import, whereas before it ran extremely fast.
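
  A similarly stripped-down sketch of the importer (again not the actual
script; it assumes DBD::Pg's client-side lo_import and the same made-up
image_paths table):

    #!/usr/bin/perl
    # Load each file named on the command line into a new large object
    # and record its OID against the file's path.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=images', 'www', '',
                           { AutoCommit => 0, RaiseError => 1 });

    my $ins = $dbh->prepare(
        'INSERT INTO image_paths (path, blob_oid) VALUES (?, ?)');

    foreach my $file (@ARGV) {
        # lo_import reads the file on the client side and returns the new OID.
        my $oid = $dbh->func($file, 'lo_import');
        die "lo_import failed for $file\n" unless defined $oid;
        $ins->execute($file, $oid);
    }
    $dbh->commit;
    $dbh->disconnect;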

  Are there any known issues in PostgreSQL involving databases with a
lot of binary data? I am using PostgreSQL v7.2.3 on a Linux system.

Thanks,

    -Jeremy

--
------------------------
Jeremy C. Andrus
http://www.jeremya.com/
------------------------


Re: pgsql BLOB issues

From: Tom Lane
Jeremy Andrus <jeremy@jeremya.com> writes:
>   I have a database that contains a large number of Large Objects
> (>500MB). I am using this database to store images for an e-commerce
> website, so I have a simple accessor script written in perl to dump out
> a blob based on a virtual 'path' stored in a table (and associated with
> the large object's OID). This system seemed to work wonderfully until I
> put more than ~500MB of binary data into the database.

Are you talking about 500MB in one BLOB, or 500MB total?

If the former, I can well imagine swap thrashing being a problem when
you try to access such a large blob.

If the latter, I can't think of any reason for total blob storage to
cause any big performance issue.  Perhaps you just haven't vacuumed
pg_largeobject in a long time?

            regards, tom lane


Re: pgsql BLOB issues

From: Jeremy Andrus
On Monday 28 April 2003 01:00 am, Tom Lane wrote:
> Are you talking about 500MB in one BLOB, or 500MB total?

  I meant 500MB _total_. There are over 5000 separate BLOBs.

  I'll ask my friendly sys-admin to vacuum pg_largeobject, and I'll let
you know what happens :-) In general though, how much performance is
really gained through regular vacuuming?

Thanks for your help,

    -Jeremy

--
------------------------
Jeremy C. Andrus
http://www.jeremya.com/
------------------------


Re: pgsql BLOB issues

From: "Christopher Kings-Lynne"
>   I'll ask my friendly sys-admin to vacuum pg_largeobject, and I'll let
> you know what happens :-) In general though, how much performance is
> really gained through regular vacuuming?

Significant.  It's essential to vacuum regularly.
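
Every overwrite or unlink of a large object leaves dead rows behind in
pg_largeobject, and only a VACUUM reclaims them, so an unvacuumed table
just keeps growing and every read pays for the bloat. A minimal sketch of
a nightly maintenance job (database and user names are assumptions; VACUUM
cannot run inside a transaction block, so AutoCommit stays on):

    #!/usr/bin/perl
    # Run nightly from cron: reclaim dead rows in the table that stores
    # large-object data.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=images', 'postgres', '',
                           { AutoCommit => 1, RaiseError => 1 });
    $dbh->do('VACUUM ANALYZE pg_largeobject');
    $dbh->disconnect;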

Chris