On Sat, 2008-10-11 at 23:41 +0200, Tomas Vondra wrote:
> >
> > If you're using tables with very large columns, make sure you index on
> > every other column you're going to access them by. If PostgreSQL has to
> > resort to full-table scans on this table, and especially with a low
> > memory constraint, you could easily end up with it doing an on-disk sort
> > on a copy of the data.
>
> But I'm not sure what you mean by 'low memory constraint' - the memory
> limit I've been talking about is purely a PHP feature, so it's related to
> inserting / reading and escaping / unescaping data.
In this case I'm not referring to PHP memory, but to PostgreSQL memory.
If you're on a memory-constrained shared system then it's not just PHP
which will be configured for a smaller memory footprint...
> I *want* to store it in a table column
Yes, that's certainly what you seem to be saying. Personally I would
steer clear of storing many megabytes in a bytea column on a
memory-constrained system, but you're closer to the application and will
make your own decision.
> If you know a better way to store binary data, please describe it here.
> The only other way I'm aware of is LOB - it solves the problem of
> inserting data (by streaming), but has other disadvantages (no
> referential integrity, etc.)
Yes, your trade-off is essentially efficiency vs. referential integrity.
This is a common trade-off, and if you have tight control over how rows
will be inserted/deleted from your table then referential integrity is
merely a nice-to-have.
If people will be creating / deleting these things all over the
application, without the benefit of an API to do so, then referential
integrity obviously becomes much more important.
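For the streaming side of that trade-off, here is a minimal Python sketch of the chunked-copy pattern that the large-object interface (lo_read/lo_write) makes possible; `stream_copy` and the 64 KiB chunk size are my own illustrative choices, and io.BytesIO stands in for real large-object handles:

```python
import io

CHUNK = 64 * 1024  # illustrative chunk size


def stream_copy(src, dst, chunk_size=CHUNK):
    """Copy a blob one chunk at a time, so peak memory stays at one
    chunk no matter how large the blob is."""
    total = 0
    while True:
        buf = src.read(chunk_size)
        if not buf:
            break
        dst.write(buf)
        total += len(buf)
    return total


# Simulate a 1 MiB blob with in-memory streams.
blob = io.BytesIO(b"\x00" * (1024 * 1024))
out = io.BytesIO()
copied = stream_copy(blob, out)
print(copied)  # 1048576
```

With bytea you give up this bounded-memory streaming but keep ordinary foreign keys and cascade deletes; with large objects it is the other way around, which is exactly the trade-off above.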
Cheers,
Andrew.
------------------------------------------------------------------------
andrew (AT) morphoss (DOT) com +64(272)DEBIAN
Writing is turning one's worst moments into money.
-- J.P. Donleavy
------------------------------------------------------------------------