On Jun 12, 2009, at 11:53 AM, Yaroslav Tykhiy wrote:
> I cannot but ask the community a related question here: Can such
> design, that is, storing quite large objects of varying size in a
> PostgreSQL database, be a good idea in the first place? I used to
> believe that what RDBMS were really good at was storing a huge
> number of records, each of a small and mostly uniform size if
> expressed in bytes; but today people tend to put big things, e.g.,
> email or files, in relational databases because it's convenient for
> them. That's absolutely normal, as typical data objects we have to
> deal with keep growing in size, but how well can databases stand the
> pressure? And wouldn't it still be better to store large things as
> plain files and put just their names in the database? File systems
> were designed for this kind of job after all, unlike RDBMS.
I've been thinking about this exact same problem.
There's another drawback to storing files in the database, BTW: they're
not directly accessible from the file system. To illustrate, I was
looking into storing images for a website in the database. It's much
easier if those images are available to the web server directly,
instead of having to go through a script that reads the image from the
database and streams the bytes to the client.
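For concreteness, the "script in the middle" looks roughly like this.
It's only a sketch: the images table, its name/mime_type/data columns,
and the psycopg2 driver are all just assumptions I'm making for the
example.

    import psycopg2
    from wsgiref.simple_server import make_server

    # Hypothetical DSN and schema: images(name text, mime_type text, data bytea)
    conn = psycopg2.connect('dbname=website')
    conn.autocommit = True

    def app(environ, start_response):
        # Map the request path to a row and stream its bytes back.
        name = environ['PATH_INFO'].lstrip('/')
        with conn.cursor() as cur:
            cur.execute("SELECT mime_type, data FROM images WHERE name = %s",
                        (name,))
            row = cur.fetchone()
        if row is None:
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'not found']
        mime_type, data = row[0], bytes(row[1])
        start_response('200 OK', [('Content-Type', mime_type),
                                  ('Content-Length', str(len(data)))])
        return [data]

    if __name__ == '__main__':
        make_server('', 8000, app).serve_forever()

Every image request has to pass through that script, which is exactly
the indirection I'd like to avoid.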
What I came up with instead was a file system layer that routes every
file operation through the database. It's still a file system, so the
files remain directly accessible, but the database gets to check its
constraints against each operation and can raise an error that prevents
the file-system operation from going through.
Apparently something like this shouldn't be too hard to implement
using FuseFS.
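Just to make the idea concrete, here's a rough sketch of what I mean.
It uses the Python fusepy bindings rather than FuseFS, and the files
table, its name/data columns, the DSN, and the mount point are all made
up for the example. The interesting part is unlink(), where a
constraint violation in the database becomes a filesystem error.

    import errno
    import stat
    import psycopg2
    from fuse import FUSE, FuseOSError, Operations

    class DBBackedFS(Operations):
        """Exposes rows of a hypothetical files(name text, data bytea)
        table as read-only files; deletes go through the database."""

        def __init__(self, dsn):
            self.conn = psycopg2.connect(dsn)

        def getattr(self, path, fh=None):
            if path == '/':
                return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
            with self.conn.cursor() as cur:
                cur.execute("SELECT length(data) FROM files WHERE name = %s",
                            (path.lstrip('/'),))
                row = cur.fetchone()
            if row is None:
                raise FuseOSError(errno.ENOENT)
            return dict(st_mode=stat.S_IFREG | 0o644, st_nlink=1,
                        st_size=row[0])

        def readdir(self, path, fh):
            with self.conn.cursor() as cur:
                cur.execute("SELECT name FROM files")
                names = [r[0] for r in cur.fetchall()]
            return ['.', '..'] + names

        def read(self, path, size, offset, fh):
            # Let the database slice out just the requested byte range.
            with self.conn.cursor() as cur:
                cur.execute("SELECT substring(data FROM %s FOR %s) "
                            "FROM files WHERE name = %s",
                            (offset + 1, size, path.lstrip('/')))
                row = cur.fetchone()
            if row is None:
                raise FuseOSError(errno.ENOENT)
            return bytes(row[0])

        def unlink(self, path):
            # The database gets to veto the operation: a constraint
            # violation (e.g. a foreign key still referencing this row)
            # surfaces here and is turned into a filesystem error.
            try:
                with self.conn.cursor() as cur:
                    cur.execute("DELETE FROM files WHERE name = %s",
                                (path.lstrip('/'),))
                self.conn.commit()
            except psycopg2.IntegrityError:
                self.conn.rollback()
                raise FuseOSError(errno.EPERM)

    if __name__ == '__main__':
        FUSE(DBBackedFS('dbname=images'), '/mnt/dbfs', foreground=True)

With something like that mounted, the web server can read images
straight off /mnt/dbfs, while deletes still have to satisfy whatever
constraints the database enforces on the files table.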
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.