Thread: Putting files into fields in a table
PostgreSQL: 8.2
I am looking at the possibility of storing files in some of my database tables. My concern is obviously performance. I have configured PostgreSQL to take advantage of Linux file caching, so my PostgreSQL instance does not have a large shared_buffers setting even though the machine has 24G of memory. The performance today is very good.
Some questions I have:
What data type should I use for fields that hold files?
Is there anything that I should be aware of when putting files into a field in a table?
When PostgreSQL accesses a table whose fields contain files, does it load those fields into the shared_buffers memory area?
Thanks,
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
On 12/13/07, Campbell, Lance <lance@uiuc.edu> wrote:
> I am looking at the possibility of storing files in some of my database
> tables. My concern is obviously performance. I have configured PostgreSQL
> to take advantage of Linux file caching. So my PostgreSQL does not have a
> large setting for shared_buffers even though I have 24G of memory.

This used to be the recommended way before 8.0. Since 8.0, it is advantageous to give PostgreSQL more buffers. You should still leave some room for the kernel cache.

By "storing files", I assume you mean a lot of data imported from files. The pros and cons of storing large amounts of data as PostgreSQL tuples have been debated before. You might want to search the archives.

My opinion is that PostgreSQL is fine up to a point (let's say 10,000 tuples x 2KB), above which I would merely store references to file-system objects. Managing these objects can be painful, especially in a cluster of more than one machine, but at least it's fast and lightweight.

> What data type should I use for fields that hold files?

PostgreSQL has two ways of storing "large amounts of data" in a single tuple: variable-length columns, and blobs.

Blobs are divided into chunks and stored in separate tables, one tuple per chunk, indexed by offset, and PostgreSQL allows random access to the data. The downsides are that they take up more room, they're slower to create, slower to read from end to end, and I believe there are some operations (such as function expressions) that don't work on them. Some replication products, including (the last time I looked) Slony, do not support replicating blobs. Blobs are not deprecated, I think, but they feel like they are.

Variable-length columns such as bytea and text support a system called TOAST, which allows the first few kilobytes of the data to be stored in place in the tuple, with the overflow stored elsewhere and potentially compressed. This system is much faster and tighter than blobs, but does not offer random I/O.

> Is there anything that I should be aware of when putting files into a field
> in a table?

Backup dumps will increase in size in proportion to the size of your data. PostgreSQL is no speed demon at loading/storing data, so this might turn out to be the Achilles heel.

> When PostgreSQL accesses a table that has fields that contain files does it
> put the fields that contain the files into the shared_buffers memory area?

I believe so.

Alexander.
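[For illustration, a minimal sketch of the bytea/TOAST approach Alexander describes. The table and column names are hypothetical; EXTENDED is already the default storage strategy for bytea and is set here only to make the TOAST behavior explicit.]

    -- Hypothetical table for user-uploaded files stored as bytea.
    CREATE TABLE uploaded_file (
        id        serial PRIMARY KEY,
        filename  text  NOT NULL,
        mime_type text  NOT NULL,
        contents  bytea NOT NULL  -- large values are moved out of line by TOAST
    );

    -- EXTENDED = out-of-line storage with compression (the bytea default),
    -- shown here only to make the TOAST storage strategy explicit.
    ALTER TABLE uploaded_file
        ALTER COLUMN contents SET STORAGE EXTENDED;

From a client, the contents column would normally be written with a parameterized query using a binary parameter rather than an escaped string literal, which avoids both quoting bugs and the overhead of escaping large values.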
I did not see much info in the 8.2 documentation on BLOB. I did read about the "bytea" binary data type. It seems like it would work for storing files. I guess I could stick with the OS for file storage, but it is a pain. It would be easier to use the DB.

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu

-----Original Message-----
From: madevilgenius@gmail.com [mailto:madevilgenius@gmail.com] On Behalf Of Alexander Staubo
Sent: Thursday, December 13, 2007 1:39 PM
To: Campbell, Lance
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Putting files into fields in a table

[quoted text snipped; see Alexander's message above]
Campbell, Lance wrote:
> I did not see much info in the 8.2 documentation on BLOB.

That's because we don't call them "blobs". Search for "large objects" instead:

http://www.postgresql.org/docs/current/static/largeobjects.html

--
Alvaro Herrera    http://www.flickr.com/photos/alvherre/
"Executive Executive Summary: The [Windows] Vista Content Protection
specification could very well constitute the longest suicide note in
history."  Peter Guttman, http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt
On Dec 13, 2007, at 2:09 PM, Campbell, Lance wrote:
> I did not see much info in the 8.2 documentation on BLOB. I did read
> about the "bytea" binary data type. It seems like it would work for
> storing files. I guess I could stick with the OS for file storage but
> it is a pain. It would be easier to use the DB.

In postgres they're simply called Large Objects (or LOBs), and there is a whole chapter devoted to them in Part IV of the manual. Note that you only need to use this facility if you're going to be storing data over 1G in size (at which point your limit becomes 2G).

What kind of data are in these files? What gain do you foresee in storing the files directly in the db (as opposed, say, to storing the paths to the files in the filesystem)?

Erik Jones

Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
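[For illustration, a minimal sketch of the large-object route Erik mentions. The table, column names, and file paths are hypothetical; note that lo_import/lo_export called from SQL run on the server, so the paths refer to the database server's filesystem, not the client's.]

    -- Import a server-side file into pg_largeobject and record its OID
    -- in an ordinary table so the object can be found again later.
    CREATE TABLE document (
        id    serial PRIMARY KEY,
        name  text NOT NULL,
        data  oid  NOT NULL  -- OID of the large object
    );

    INSERT INTO document (name, data)
        VALUES ('example.pdf', lo_import('/tmp/example.pdf'));

    -- Write the object back out to a file on the server.
    SELECT lo_export(data, '/tmp/copy.pdf')
      FROM document WHERE name = 'example.pdf';

    -- Large objects are not removed when the referencing row is deleted;
    -- they must be unlinked explicitly.
    SELECT lo_unlink(data) FROM document WHERE name = 'example.pdf';
    DELETE FROM document WHERE name = 'example.pdf';

Client interfaces (libpq, JDBC) also expose lo_read/lo_seek-style calls for the random access within an object that bytea does not offer.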
Erik,
The advantage of storing things in the database versus the file system is the number of lines of code. I manage 18 software applications. I have developed an application that reads in an XML file and generates database Java code for inserting, updating, selecting, and deleting data. So for me the database is a no-brainer. But when I need to store files that are uploaded by users, I have to hand-code the process. It is not hard; it is just time consuming. I want to keep the amount I can do per hour at a very high level. The less code the better. Using a database correctly really saves on the number of lines of code.

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu

-----Original Message-----
From: Erik Jones [mailto:erik@myemma.com]
Sent: Thursday, December 13, 2007 2:22 PM
To: Campbell, Lance
Cc: pgsql-performance@postgresql.org list
Subject: Re: [PERFORM] Putting files into fields in a table

[quoted text snipped; see Erik's message above]