"Michael Goldner" <MGoldner@agmednet.com> writes:
> Am I stuck in a loop, or is this happening because the size of the relation
> is so large that postgres is operating on smaller chunks?
It's removing as many dead rows per pass as it can track in memory.
The arithmetic suggests that you've got maintenance_work_mem set to
64MB: at 6 bytes per dead-tuple TID, that's enough room to process
11184810 rows per index-scanning cycle.
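For reference, a quick sketch of that arithmetic, and how you could
raise the setting for the current session before the next run (the
512MB figure is just an illustration, not a recommendation):

    -- Each dead-tuple TID costs 6 bytes of maintenance_work_mem:
    --   64MB = 67108864 bytes; 67108864 / 6 = 11184810 rows per pass.
    SHOW maintenance_work_mem;
    SET maintenance_work_mem = '512MB';  -- fewer index-scan cycles per VACUUM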
What I'd be worrying about is the fact that there are so many dead
large objects in the first place. Does that count square with what
you think you've removed, or does it suggest you've got a large
object leak? Do you use contrib/lo and/or contrib/vacuumlo to manage
them?
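If you're not already doing so, here's a minimal sketch of the
contrib/lo approach (the table and column names are made up for
illustration). The lo_manage trigger unlinks a large object
automatically whenever the row referencing it is updated or deleted,
so orphans never accumulate:

    -- Hypothetical table using contrib/lo's managed large-object type
    CREATE TABLE image (title text, raster lo);
    -- lo_manage unlinks the old large object on UPDATE or DELETE
    CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
        FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);

contrib/vacuumlo, by contrast, works after the fact: it scans for
large objects no longer referenced from any column and removes them.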
The numbers also suggest that you might be removing all or nearly
all of the rows in pg_largeobject. If so, a CLUSTER on it might
be more effective than VACUUM as a one-shot cleanup method.
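Something along these lines (a sketch, not a recipe: CLUSTER takes an
ACCESS EXCLUSIVE lock on the table for the duration, and rewriting a
system catalog requires superuser privileges):

    -- Rewrite pg_largeobject and its index in one pass,
    -- discarding all the dead rows as a side effect
    CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;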
regards, tom lane