On Sat, 2008-10-11 at 18:26 +0200, Tomas Vondra wrote:
>
> Is there any other way to solve storing of large files in PostgreSQL?
No, not until there are functions that let you fopen() on the bytea
column.
Also, your "... || more_column" solution will generate large numbers of
dead rows and require frequent vacuuming: under MVCC each such UPDATE
writes a complete new copy of the row, and the old copy only goes away
once VACUUM gets to it.
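As a rough sketch (the table and column names here are hypothetical,
not from the original post), the append-by-UPDATE pattern looks like
this:

    -- Hypothetical table, for illustration only:
    CREATE TABLE documents (
        id       SERIAL PRIMARY KEY,
        filename TEXT,
        data     BYTEA
    );

    -- Each appending UPDATE rewrites the entire row; the previous
    -- version is left behind as a dead row for VACUUM to reclaim.
    UPDATE documents
       SET data = data || $1    -- $1 is the next chunk of the file
     WHERE id = 42;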
> - Optimization is a serious criterion, as is reliability.
If you're using tables with very large columns, make sure you index on
every other column you're going to access it by. If PostgreSQL has to
resort to full-table scans on this table, and especially with a low
memory constraint, you could easily end up with it doing an on-disk sort
on a copy of the data.
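For example, assuming the hypothetical documents table above is
normally looked up by filename:

    -- Index the lookup column so the planner can use an index scan
    -- instead of a sequential scan over the wide rows.
    CREATE INDEX documents_filename_idx ON documents (filename);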
If you *have* to store it in a table column (and it really isn't the
most efficient way of doing it) then create a separate table for it
which is just SERIAL + data.
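A minimal sketch of that layout, again with hypothetical names:

    -- Narrow table holding only the raw bytes, keyed by a SERIAL id:
    CREATE TABLE document_content (
        id   SERIAL PRIMARY KEY,
        data BYTEA
    );

    -- The metadata table then stores a reference (e.g. a content_id
    -- column pointing at document_content.id) instead of carrying
    -- the bytes itself.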
Cheers,
Andrew McMillan.
------------------------------------------------------------------------
Andrew @ McMillan .Net .NZ Porirua, New Zealand
http://andrew.mcmillan.net.nz/ Phone: +64(272)DEBIAN
It is often easier to tame a wild idea than to breathe life into a
dull one. -- Alex Osborn
------------------------------------------------------------------------