Re: Large objects performance - Mailing list pgsql-performance

From Ulrich Cech
Subject Re: Large objects performance
Date
Msg-id 4629BCC7.1090900@cech-privat.de
Whole thread Raw
In response to Large objects performance ("Alexandre Vasconcelos" <alex.vasconcelos@gmail.com>)
List pgsql-performance
Hello Alexandre,

<We have an application supposed to sign documents and store them somewhere.>

I developed a relatively simple "file archive" with PostgreSQL (a web application with JSF for the user interface). The main structure is one table with some "key word" fields and 3 blob fields (because exactly 3 files belong to one record). I have to deal with millions of files (95% are about 2-5 KB, 5% are larger than 1 MB).
The great advantage is that I don't have to "communicate" with the file system (try to open a directory with 300,000 files on a Windows system... it's horrible, even on the command line).
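
Just as a sketch of the idea (the table and column names here are invented, and I show bytea columns; whether you use bytea or large objects is a separate decision), the structure is roughly:

    CREATE TABLE archive (
        id        serial PRIMARY KEY,
        keyword1  text,    -- the "key word" fields used for searching
        keyword2  text,
        keyword3  text,
        file1     bytea,   -- exactly 3 files belong to one record
        file2     bytea,
        file3     bytea
    );

    -- plain b-tree indexes on the key word fields keep the searches fast
    CREATE INDEX archive_kw1_idx ON archive (keyword1);
    CREATE INDEX archive_kw2_idx ON archive (keyword2);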

The database is now 12 GB, but searching through the web interface takes at most 5 seconds (most searches are faster). The one disadvantage is the backup (I use pg_dump once a week, which takes about 10 hours). For now this is acceptable for me, but I want to look at Slony or port everything to a Linux machine.
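
The searches themselves are nothing special, just indexed lookups on the key word fields, for example (again with invented names):

    SELECT id, keyword1, keyword2
    FROM archive
    WHERE keyword1 = 'some search term'
    LIMIT 100;

The blob columns are left out of the result list and only fetched when a single record is opened, so the search queries stay small and fast.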

Ulrich
