Re: Resumable vacuum proposal and design overview - Mailing list pgsql-hackers

From Gregory Stark
Subject Re: Resumable vacuum proposal and design overview
Date
Msg-id 873b4qpgg9.fsf@stark.xeocode.com
In response to Re: Resumable vacuum proposal and design overview  ("Simon Riggs" <simon@2ndquadrant.com>)
Responses Re: Resumable vacuum proposal and design overview  (Heikki Linnakangas <heikki@enterprisedb.com>)
List pgsql-hackers
"Simon Riggs" <simon@2ndquadrant.com> writes:

> How much memory would it save during VACUUM on a 1 billion row table
> with 200 million dead rows? Would that reduce the number of cycles a
> normal non-interrupted VACUUM would perform?

It would depend on how many dead tuples you have per page. If you have a very
large table with only one dead tuple per page then it might even be larger.
But in the usual case it would be smaller.

Also note that it would have to be non-lossy: to remove index entries we need
to know exactly which TIDs are dead, so a bitmap that degrades to
page-granularity under memory pressure wouldn't work here.

My only objection to this idea, and it's not really an objection at all, is
that I think we want to head in the direction of making indexes cheaper to
scan and doing the index scan phase more often. That reduces the need for
multiple concurrent vacuums and makes the problem of busy tables getting
starved less of a concern.

That doesn't mean there's any downside to making the dead tuple list take less
memory, but I think the upside is limited. As we optimize our index
representations with GIT (Grouped Index Tuples) and bitmapped indexes,
scanning them gets easier and easier anyway. And you don't really want to wait
too long before you get the benefit of the recovered space in the table.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

