Mike Cianflone writes:
> After someone's comment that UPDATEs are the reason vacuuming is
> needed (sorry, I didn't know that), I looked into a stored
> procedure that gets triggered during an insert. The stored procedure does an
> UPDATE on another table, for every insert. So inserting 100,000 items into
> the table causes an update on 100,000 items in another table. I noticed that
> the other table's file size gets very large (right now it's over a megabyte
> and only 10% complete inserting), even though there are only about 5 items
> in that table. Since that table has the UPDATE happening to it, it's getting
> large. A vacuum chops it down to 8K.
An UPDATE basically acts as "append a new record and mark the old one
obsolete". This is the so-called non-overwriting storage manager, which
keeps all data, even deleted data, indefinitely until you run VACUUM
(hence the name).
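To make the growth pattern concrete, here is a toy sketch in Python of the append-and-mark-obsolete behavior. This is an illustration only, not PostgreSQL's actual code; the class and method names are invented for the example.

```python
# Toy model of a non-overwriting storage manager: UPDATE never
# overwrites in place, so every update leaves an extra tuple version
# on disk until a vacuum pass reclaims the obsolete ones.

class Table:
    def __init__(self):
        self.tuples = []  # every version ever written stays here

    def insert(self, key, value):
        self.tuples.append({"key": key, "value": value, "obsolete": False})

    def update(self, key, value):
        # Mark the live version obsolete and append the new version;
        # the "file" grows by one tuple per update.
        for t in self.tuples:
            if t["key"] == key and not t["obsolete"]:
                t["obsolete"] = True
        self.tuples.append({"key": key, "value": value, "obsolete": False})

    def vacuum(self):
        # Reclaim the space held by obsolete versions.
        self.tuples = [t for t in self.tuples if not t["obsolete"]]

    def size(self):
        return len(self.tuples)

t = Table()
t.insert("a", 0)
for i in range(100):
    t.update("a", i)
print(t.size())   # 101: one live version plus 100 obsolete ones
t.vacuum()
print(t.size())   # 1: only the live version remains
```

This is why a table holding only a handful of rows can balloon after 100,000 updates, and why a vacuum shrinks it back to a few pages.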
> I tried increasing the buffer size, and that made the 100,000
> inserts (with the corresponding update) go longer before hitting the barrier
> and slowing down tremendously (until another vacuum is done).
Increasing the buffer size just keeps the data in memory longer before
it is pushed out to disk. The net result is the same, though: the
obsolete tuples still accumulate until you vacuum.
--
Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter