On Wed, 30 Jun 1999, Bruce Momjian wrote:
> > Would it be easy to come up with a scheme for the vacuum function to
> > defrag a set number of pages and such, release its locks if there is
> > another process blocked and waiting, then resume after that process
> > is finished?
>
> That is a very nice idea. We could just release and reacquire the
> lock, knowing that if there is someone waiting, they would get the
> lock. Maybe someone can comment on this?
My first thought is "doesn't this still require the 'page-reusing'
functionality to exist?"...which, on its own, would virtually eliminate
the problem...
If not, then why can't something be done to make this transparent
altogether? Have some sort of mechanism that keeps track of "dead
space"...a trigger that says after X tuples have been deleted, do an
automatic vacuum of the database?
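
Something like a per-table counter that gets bumped whenever a tuple
goes dead, say. A toy sketch of the shape of it (every name here is
invented for illustration, none of it is real backend code):

    #include <stdio.h>

    #define AUTO_VACUUM_AFTER 1000L   /* the "X tuples deleted" threshold */

    typedef struct
    {
        long dead_tuples;             /* bumped by the delete/update path */
    } ToyStats;

    static void auto_vacuum(ToyStats *s)
    {
        printf("auto-vacuum: reclaiming %ld dead tuples\n", s->dead_tuples);
        s->dead_tuples = 0;
    }

    /* Hook this into wherever a tuple goes dead. */
    static void note_dead_tuple(ToyStats *s)
    {
        if (++s->dead_tuples >= AUTO_VACUUM_AFTER)
            auto_vacuum(s);
    }

    int main(void)
    {
        ToyStats s = { 0 };

        for (int i = 0; i < 2500; i++)
            note_dead_tuple(&s);      /* fires auto_vacuum() twice */
        return 0;
    }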
The automatic vacuum would be done in a way similar to Michael's
suggestion above...scan through for the first 'dead space', lock the table
for a short period of time and "move records up". How many tuples could
you move in a very short period of time, such that it is virtually
transparent to end-users?
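
To make that concrete, here is a toy, compilable sketch of the
batch-then-yield loop; every name in it (lock_table(), compact_batch()
and friends) is a made-up placeholder, not anything from the backend:

    #include <stdbool.h>
    #include <stdio.h>

    #define BATCH_PAGES 32            /* pages to compact per lock hold */

    typedef struct
    {
        int  pages_left;              /* pages still needing compaction */
        bool waiter;                  /* stand-in for "someone is blocked" */
    } ToyTable;

    static void lock_table(ToyTable *t)       { (void) t; }
    static void unlock_table(ToyTable *t)     { (void) t; }
    static bool lock_has_waiters(ToyTable *t) { return t->waiter; }

    static void compact_batch(ToyTable *t, int n)
    {
        t->pages_left -= n;
        printf("compacted %d pages, %d left\n", n, t->pages_left);
    }

    /*
     * Vacuum in small batches. Between batches, if another backend is
     * blocked on our exclusive lock, release it so that backend can
     * run, then queue up again and resume where we left off.
     */
    static void incremental_vacuum(ToyTable *t)
    {
        lock_table(t);
        while (t->pages_left > 0)
        {
            int n = t->pages_left < BATCH_PAGES ? t->pages_left : BATCH_PAGES;

            compact_batch(t, n);
            if (lock_has_waiters(t))
            {
                unlock_table(t);      /* the blocked backend gets the lock */
                lock_table(t);        /* we wait our turn, then continue */
            }
        }
        unlock_table(t);
    }

    int main(void)
    {
        ToyTable t = { 100, true };

        incremental_vacuum(&t);
        return 0;
    }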
As a table gets larger and larger, a few 'dead tuples' aren't going to
make much of a difference in performance, so make the threshold some
percentage of the size of the table, so that as it grows, the number of
'dead tuples' has to be larger...
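
In code terms that just changes the threshold test in the counter
sketch above from a constant to a fraction of the table's size
(reusing the invented ToyStats and auto_vacuum() from up there):

    #define DEAD_FRACTION 0.05    /* vacuum once ~5% of the table is dead */

    /* Variant of note_dead_tuple() where the threshold scales with the
     * table: a big table has to accumulate proportionally more dead
     * tuples before the automatic vacuum fires. */
    static void note_dead_tuple_scaled(ToyStats *s, long live_tuples)
    {
        if (++s->dead_tuples >= (long) (live_tuples * DEAD_FRACTION))
            auto_vacuum(s);
    }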
And leave out the truncate at the end...
The 'manual vacuum' would still need to be run periodically, for the
truncate and for stats...
Just a thought...:)
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org