Re: Eternal vacuuming.... - Mailing list pgsql-hackers

From Thomas Lockhart
Subject Re: Eternal vacuuming....
Msg-id 391ADD26.B7522589@alumni.caltech.edu
In response to Eternal vacuuming....  (Tim Perdue <tperdue@valinux.com>)
Responses Re: Eternal vacuuming....  (Alfred Perlstein <bright@wintelcom.net>)
Re: Eternal vacuuming....  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
> In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> 1,000,000) then hit vacuum, the vacuum will run literally forever.
> ...before I finally killed the vacuum process, manually removed the
> pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> Will this be fixed?

Patches? ;)

Just thinking here: could we add an option to vacuum so that it would
drop and recreate indices "automatically"? We already have the ability
to chain multiple internal commands together, so that would just
require snarfing the names and properties of indices in the parser
backend and then doing the drops and creates on the fly.
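For reference, the sequence such an option would automate is roughly the manual workaround described above (the table and index names here are hypothetical, and the exact index definition would of course have to be snarfed from the catalog):

```sql
-- Drop the index, vacuum the bare table, then rebuild the index.
DROP INDEX big_table_idx;
VACUUM ANALYZE big_table;
CREATE INDEX big_table_idx ON big_table (id);
```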

A real problem with this is that those commands are currently not
rollback-able, so if something quits in the middle (or someone kills
the vacuum process; I've heard of this happening ;) then you are left
without indices, and in a somewhat hidden way.

Not sure what the prospects are of making these DDL statements
transactionally secure, though I know we've had some discussions of
this on -hackers.
                      - Thomas

-- 
Thomas Lockhart                lockhart@alumni.caltech.edu
South Pasadena, California

