Hi, everyone. I'm working with someone who has a database
application currently running under PostgreSQL 8.3. Among other
things, there is a main table that is referenced by a number of
other tables via a foreign key. One of those tables has a field
of type "oid," which then points to a large object.  When a
record in the main table is deleted, there is a rule (yes a rule
-- not a trigger) in the referencing table that performs a
lo_unlink on the associated object.
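To make that concrete, the setup is roughly like this (table and
column names are made up, and I've simplified the schema, but this
is the shape of it):

    CREATE TABLE main_table (
        id   serial PRIMARY KEY,
        name text
    );

    CREATE TABLE attachments (
        id      serial PRIMARY KEY,
        main_id integer NOT NULL
                REFERENCES main_table (id) ON DELETE CASCADE,
        blob    oid   -- points to a large object
    );

    -- The rule: whenever a row is deleted from attachments
    -- (including via the cascade), unlink its large object.
    CREATE RULE unlink_blob AS
        ON DELETE TO attachments
        DO ALSO SELECT lo_unlink(OLD.blob);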
This means that for every DELETE we perform on the main table,
we're doing an lo_unlink on the large objects. This also means
that if we do a mass delete from that main table, we're executing
lo_unlink once for every deleted row in the main table, which is
taking a heckuva long time. I ran EXPLAIN ANALYZE, and a good
40-50 percent of our time spent deleting is in the execution of
this rule.
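In other words, a single mass delete like this one (the WHERE
clause here is made up, but you get the idea):

    EXPLAIN ANALYZE
    DELETE FROM main_table WHERE name = 'obsolete';

turns into one lo_unlink call per cascaded row in the referencing
table, and that's where the time is going.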
I just want to check that my intuition is correct: Wouldn't it be
way faster and more efficient for us to use BYTEA columns to store
the data (which can get into the 20-50 MB range), and to just
depend on ON DELETE CASCADE, rather than a rule? Or are we going
to encounter performance issues regardless of which technique we
use, such that we need to find a way to delete these large pieces
of data in the background? Or should we keep using large objects,
and find a way other than a rule to deal with deleting them on
this sort of scale? Or (of course) am I missing another good
option?
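That is, the alternative I have in mind looks something like this
(again with made-up names), where deleting from main_table cleans
everything up in one shot with no rule at all:

    CREATE TABLE attachments (
        id      serial PRIMARY KEY,
        main_id integer NOT NULL
                REFERENCES main_table (id) ON DELETE CASCADE,
        data    bytea   -- contents stored inline, no oid to unlink
    );

    -- A single statement now removes the rows and their data:
    DELETE FROM main_table WHERE name = 'obsolete';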
Thanks for any and all advice, as usual!
Reuven
--
Reuven M. Lerner -- Web development, consulting, and training
Mobile: +972-54-496-8405 * US phone: 847-230-9795
Skype/AIM: reuvenlerner