* Thomas Lockhart <lockhart@alumni.caltech.edu> [000511 09:55] wrote:
> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> > 1,000,000) then hit vacuum, the vacuum will run literally forever.
> > ...before I finally killed the vacuum process, manually removed the
> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> > Will this be fixed?
>
> Patches? ;)
>
> Just thinking here: could we add an option to vacuum so that it would
> drop and recreate indices "automatically"?
I'm hoping "automatically" means driven by some algorithm: rebuild
when heap + N < index, i.e. only when it's really needed.
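Something like this sketch, say (the table and index names are made
up, and the real check would live inside vacuum rather than in SQL,
so this is only the shape of it):

    -- compare heap pages to index pages using the counts
    -- that pg_class already keeps (they may be stale until
    -- the next vacuum updates them)
    SELECT h.relpages AS heap_pages, i.relpages AS index_pages
      FROM pg_class h, pg_class i
     WHERE h.relname = 'bigtable'
       AND i.relname = 'bigtable_idx';
    -- rebuild only when index_pages > heap_pages + N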
> We already have the ability
> to chain multiple internal commands together, so that would just
> require snarfing the names and properties of indices in the parser
> backend and then doing the drops and creates on the fly.
>
> A real problem with this is that those commands are currently not
> rollback-able, so if something quits in the middle (or someone kills
> the vacuum process; I've heard of this happening ;) then you are left
> without indices in sort of a hidden way.
>
> Not sure what the prospects are of making these DDL statements
> transactionally secure, though I know we've had some discussions of
> this on -hackers.
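For reference, the ordering under discussion is basically the manual
workaround from the original report, sketched with made-up names:

    DROP INDEX bigtable_idx;          -- a crash after this is bad
    VACUUM bigtable;                  -- fast now, no index to scan
    CREATE INDEX bigtable_idx ON bigtable (col);

Die anywhere after the DROP and you're silently left with no index.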
One could do it in the opposite direction: rename the old index,
create a new index, then drop the old one. If the worst happens you
are left with two indexes rather than none; perhaps the database
could warn about this somehow. In fact, one could have a system table
listing things to be deleted at startup: put the name of the old
index into it, and at startup the database nukes any index named
there. It's pretty hackish, but it would work well enough.
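Roughly like this (a sketch only; I'm not sure the backend can rename
an index today, so treat the RENAME spelling as hypothetical, and
pg_startup_delete is a name I just made up):

    ALTER TABLE bigtable_idx RENAME TO bigtable_idx_old;  -- hypothetical
    CREATE INDEX bigtable_idx ON bigtable (col);
    -- instead of dropping directly, defer it:
    INSERT INTO pg_startup_delete VALUES ('bigtable_idx_old');
    -- ...and the postmaster drops whatever is listed there at boot

A crash at any point leaves you with two indexes, never zero.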
It does seem possible to have two indexes on a single column.
--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."