Re: Block level parallel vacuum WIP - Mailing list pgsql-hackers

From Alvaro Herrera
Subject Re: Block level parallel vacuum WIP
Msg-id 20160823151747.GA166843@alvherre.pgsql
In response to Re: Block level parallel vacuum WIP  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Block level parallel vacuum WIP  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Robert Haas wrote:

> 2. When you finish the heap scan, or when the array of dead tuple IDs
> is full (or very nearly full?), perform a cycle of index vacuuming.
> For now, have each worker process a separate index; extra workers just
> wait.  Perhaps use the condition variable patch that I posted
> previously to make the workers wait.  Then resume the parallel heap
> scan, if not yet done.

At least btrees should easily be scannable in parallel, given that we
process them in physical order rather than logically walking the tree.  So
if there are more workers than indexes, it's possible to put more than
one worker on the same index by instructing each worker to stop at a
predetermined index page number.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


