Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD - Mailing list pgsql-hackers

From Greg Stark
Subject Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD
Msg-id BANLkTinnGO9jSqB4FEBYXeXBQM_bxjgxwQ@mail.gmail.com
In response to Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD  (Robert Haas <robertmhaas@gmail.com>)
Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD  (Pavan Deolasee <pavan.deolasee@gmail.com>)
List pgsql-hackers
On Fri, May 27, 2011 at 7:11 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> Well, as with normal queries, it's usually faster to just seqscan the whole
> table if you need to access more than a few percent of the pages, because
> sequential I/O is so much faster than random I/O.

Well, it's not strictly random access: you're still reading
sequentially, you're just skipping some pages. It will never be slower
than a sequential scan; it just might not be any faster. In my testing,
reading every 8th page took exactly as long as reading every page,
which makes sense, as the drive still has to seek to every track
exactly as if you were reading sequentially. IIRC, a speedup only
appeared once we read less than every 8th page.
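A toy cost model makes the observation above concrete. All the constants here are illustrative assumptions, not measured drive parameters (PAGES_PER_TRACK is deliberately set to 8 so the model's crossover matches the observation; real geometry differs):

```python
# Toy model of a spinning disk: reading anything on a track costs a full
# rotation, so skipping pages only helps once whole tracks can be skipped.
ROTATION_MS = 8.3        # one rotation at 7200 rpm (assumption)
PAGES_PER_TRACK = 8      # chosen to match the observed crossover (assumption)

def scan_time_ms(total_pages, read_every_nth):
    tracks = total_pages / PAGES_PER_TRACK
    if read_every_nth <= PAGES_PER_TRACK:
        # at least one wanted page per track: every track still costs a rotation
        return tracks * ROTATION_MS
    # sparse enough that entire tracks hold no wanted page and can be skipped
    touched = total_pages / read_every_nth
    return touched * ROTATION_MS
```

Under these assumptions, reading every 8th page costs exactly as much as reading every page, and only sparser reads get cheaper.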

>> Do we have any numbers to prove what we have today is good ? Sorry, I
>> may not have followed the discussions very closely in the past and not
>> sure if this has been debated/tested already.
>
> I think that number was chosen quite arbitrarily. When you consider
> updating relfrozenxid, it's a bit difficult to decide what the optimal
> value would be; if you decide to skip pages, you might have to perform an
> extra anti-wraparound vacuum somewhere down the line.

It would be nice if the VM had a bit for "all-frozen", but that
wouldn't help much except in the case of truly cold data. We could
perhaps keep the frozen state per segment or per VM page (each of which
covers a large section of the table), which would at least mean a
fixed amount of data becomes vacuum-dirty when a tuple is updated,
rather than a whole table, which could be arbitrarily large.
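A minimal sketch of the per-VM-page idea, as a hypothetical structure rather than anything in PostgreSQL. Tracking one all-frozen flag per VM page means an update invalidates only the bounded range that VM page covers (the page-count constant assumes an 8 kB VM page spending 2 bits per heap page, purely for illustration):

```python
# Hypothetical per-VM-page frozen tracking (not PostgreSQL code).
# One 8 kB VM page at 2 bits per heap page covers 32768 heap pages (assumption).
HEAP_PAGES_PER_VM_PAGE = 8192 * 8 // 2

class FrozenMap:
    def __init__(self, heap_pages):
        n = (heap_pages + HEAP_PAGES_PER_VM_PAGE - 1) // HEAP_PAGES_PER_VM_PAGE
        self.all_frozen = [True] * n   # one flag per VM page

    def note_update(self, heap_page):
        # an update un-freezes only the VM page covering this heap page,
        # not the whole table
        self.all_frozen[heap_page // HEAP_PAGES_PER_VM_PAGE] = False

    def ranges_needing_freeze_vacuum(self):
        return [i for i, frozen in enumerate(self.all_frozen) if not frozen]
```

The point of the design is the bound: however large the table, one tuple update dirties at most one fixed-size range for freezing purposes.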

Separately, it's a bit strange that we actually have to visit the
pages. We have all the information we need in the VM to determine
whether there's a run of 32 vacuum-clean pages. Why can't we look at
the next 32 pages in the VM and, if they're all vacuum-clean, skip
looking at the heap for them entirely? What we do now is run the
regular vacuum algorithm and only realize, after we've processed 32
clean pages in a row, that it was a waste of effort.
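The look-ahead idea above can be sketched as follows. This is my own illustrative code, not vacuumlazy.c; it only models which pages get visited, with `vm_bits[i]` standing in for "heap page i is all-visible" and the 32-page threshold taken from SKIP_PAGES_THRESHOLD:

```python
SKIP_PAGES_THRESHOLD = 32

def pages_to_visit(vm_bits):
    """Return the heap pages vacuum would visit if it consulted the VM
    up front: any all-visible run of at least SKIP_PAGES_THRESHOLD pages
    is skipped without touching the heap, instead of being discovered
    as a waste only after its pages were already processed."""
    visit, i, n = [], 0, len(vm_bits)
    while i < n:
        if vm_bits[i]:
            j = i
            while j < n and vm_bits[j]:
                j += 1            # measure the all-visible run in the VM
            if j - i >= SKIP_PAGES_THRESHOLD:
                i = j             # whole run skipped, heap never read
                continue
        visit.append(i)           # short run or dirty page: visit it
        i += 1
    return visit
```

For example, a 40-page clean run followed by one dirty page and a 5-page clean run would visit only pages 40 through 45.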



-- 
greg

