Re: Putting files into fields in a table - Mailing list pgsql-performance

From: Campbell, Lance
Subject: Re: Putting files into fields in a table
In response to: Re: Putting files into fields in a table ("Alexander Staubo" <alex@purefiction.net>)
I did not see much info in the 8.2 documentation on BLOBs.  I did read
about the "bytea" binary data type.  It seems like it would work for
storing files.  I guess I could stick with the OS for file storage, but
it is a pain.  It would be easier to use the DB.
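
For concreteness, this is roughly the kind of table I have in mind
(names invented, untested):

    CREATE TABLE file_store (
        id        serial PRIMARY KEY,
        filename  text NOT NULL,
        mime_type text,
        contents  bytea NOT NULL   -- raw file bytes
    );

    -- from a client driver, with $1 bound to the file's binary contents:
    INSERT INTO file_store (filename, mime_type, contents)
    VALUES ('report.pdf', 'application/pdf', $1);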

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu


-----Original Message-----
From: madevilgenius@gmail.com [mailto:madevilgenius@gmail.com] On Behalf
Of Alexander Staubo
Sent: Thursday, December 13, 2007 1:39 PM
To: Campbell, Lance
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Putting files into fields in a table

On 12/13/07, Campbell, Lance <lance@uiuc.edu> wrote:
> I am looking at the possibility of storing files in some of my database
> tables.  My concern is obviously performance.  I have configured PostgreSQL
> to take advantage of Linux file caching.  So my PostgreSQL does not have a
> large setting for shared_buffers even though I have 24G of memory.

This used to be the recommended approach before 8.0. Since 8.0, it is
advantageous to give PostgreSQL more buffers, though you should still
leave some room for the kernel cache.
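
For example (illustrative numbers only, assuming the machine is
dedicated to PostgreSQL; 8.2 accepts unit suffixes in the config):

    # postgresql.conf
    shared_buffers = 2GB           # give PostgreSQL a real slice of the 24G
    effective_cache_size = 16GB    # planner hint for how much the kernel caches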

By "storing files", I assume you mean a lot of data imported from
files. The procs and cons of storing large amounts of data as
PostgreSQL tuples has been debated before. You might want to search
the archives.

My opinion is that PostgreSQL is fine up to a point (let's say 10,000
tuples x 2KB), above which I would merely store references to
file-system objects. Managing these objects can be painful, especially
in a cluster of more than one machine, but at least it's fast and
lightweight.
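
A sketch of that reference approach, with invented names:

    CREATE TABLE file_ref (
        id   serial PRIMARY KEY,
        path text NOT NULL,   -- location on the shared filesystem
        md5  text             -- optional checksum, to detect out-of-band changes
    );

The application writes the file first and then records its path;
cleaning up orphaned files is its problem, not the database's.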

> What data type should I use for fields that hold files?

PostgreSQL has two ways of storing "large amounts of data" in a single
tuple: variable-length columns, and blobs.

Blobs are divided into chunks and stored in separate tables, one tuple
per chunk, indexed by offset, and PostgreSQL allows random access to
the data. The downside is that they take up more room, they're slower
to create, slower to read from end to end, and I believe there are
some operations (such as function expressions) that don't work on
them. Some replication products, including (the last time I looked)
Slony, do not support replicating blobs. Blobs are not deprecated, I
think, but they feel like they are.
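
For reference, the server-side large object functions look roughly
like this (lo_import/lo_export operate on the *server's* filesystem
and are restricted to superusers; paths invented):

    CREATE TABLE file_lo (filename text, data oid);
    INSERT INTO file_lo VALUES ('report.pdf', lo_import('/tmp/report.pdf'));
    -- random access goes through lo_open()/loread()/lowrite() in a transaction
    SELECT lo_export(data, '/tmp/report-copy.pdf')
      FROM file_lo WHERE filename = 'report.pdf';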

Variable-length columns such as bytea and text support a system called
TOAST, which allows the first few kilobytes of the data to be stored
in-place in the tuple, with the overflow stored elsewhere and
potentially compressed. This system is much faster and tighter than
blobs, but does not offer random I/O.
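
A plain bytea column gets this behavior automatically once a value
crosses the TOAST threshold (roughly 2KB by default). A minimal
sketch, with an invented table:

    CREATE TABLE doc (id serial PRIMARY KEY, body bytea);
    -- substring() works on bytea, but there is no seek-style API as with blobs
    SELECT substring(body from 1 for 128) FROM doc WHERE id = 1;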

> Is there anything that I should be aware of when putting files into a field
> in a table?

Backup dumps will increase in size in proportion to the size of your
data. PostgreSQL is no speed demon at loading/storing data, so this
might turn out to be the Achilles heel.

> When PostgreSQL accesses a table that has fields that contain files does it
> put the fields that contain the files into the shared_buffers memory area?

I believe so.

Alexander.
