Thomas Lockhart wrote:
>
> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> > 1,000,000) then hit vacuum, the vacuum will run literally forever.
> > ...before I finally killed the vacuum process, manually removed the
> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> > Will this be fixed?
>
> Patches? ;)
Hehehe - I say the same thing when someone complains about SourceForge.
You know I'm a huge postgres hugger, but PHP is my strength, and you
wouldn't like any C patches I'd submit anyway.
> Just thinking here: could we add an option to vacuum so that it would
> drop and recreate indices "automatically"? We already have the ability
> to chain multiple internal commands together, so that would just
> require snarfing the names and properties of indices in the parser
> backend and then doing the drops and creates on the fly.
Dropping and recreating indices automatically seems like a hack to me
personally. Can someone figure out why vacuum runs forever in this case
and fix the underlying problem? It's probably a logic flaw somewhere.
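For reference, the manual workaround described above boils down to
something like this (the table, index, and column names are just
placeholders, not anyone's actual schema):

    -- hypothetical names; substitute your own table and index
    DROP INDEX big_table_idx;
    VACUUM VERBOSE big_table;
    CREATE INDEX big_table_idx ON big_table (id);

which is presumably all the proposed "automatic" vacuum option would be
doing behind the scenes anyway.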
Tim
--
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems
408-542-5723