Re: [HACKERS] Parallel bitmap heap scan - Mailing list pgsql-hackers

From Robert Haas
Subject Re: [HACKERS] Parallel bitmap heap scan
Date
Msg-id CA+TgmoZVubg6gUbykxyuPe8HvaPzhr+=uOyo=TqipDSE9HdurA@mail.gmail.com
In response to Re: [HACKERS] Parallel bitmap heap scan  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses Re: [HACKERS] Parallel bitmap heap scan  (Dilip Kumar <dilipbalaut@gmail.com>)
List pgsql-hackers
On Tue, Mar 7, 2017 at 11:27 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> On Tue, Mar 7, 2017 at 9:44 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I mean, IIUC, the call to PrefetchBuffer() is not done under any lock.
>> And that's the slow part.  The tiny amount of time we spend updating
>> the prefetch information under the mutex should be insignificant
>> compared to the cost of actually reading the buffer.  Unless I'm
>> missing something.
>
> Okay, but IIUC, PrefetchBuffer is an asynchronous call that loads the
> buffer if it's not already in shared buffers.  So if, instead of one
> process issuing multiple async calls to PrefetchBuffer, we spread those
> calls across multiple processes, will it be any faster?  Or are you
> thinking that we can at least parallelize the BufTableLookup call,
> because that one is not asynchronous?

It's not about speed.  It's about not forgetting to prefetch.  Suppose
that worker 1 becomes the prefetch worker but then doesn't return to
the Bitmap Heap Scan node for a long time because it's busy in some
other part of the plan tree.  Now you just stop prefetching; that's
bad.  You want prefetching to continue regardless of which workers are
busy doing what; as long as SOME worker is executing the parallel
bitmap heap scan, prefetching should continue as needed.
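To illustrate the shape of what's being described here (this is a minimal
sketch, not the actual patch): the prefetch bookkeeping lives in shared
state protected by a mutex, and whichever worker happens to be executing
the scan node advances it, so prefetching never depends on one designated
worker.  In the sketch below, "ParallelBitmapState" and
"next_prefetch_block()" are hypothetical names; PrefetchBuffer(),
SpinLockAcquire()/SpinLockRelease(), MAIN_FORKNUM, InvalidBlockNumber and
BlockNumberIsValid() are real PostgreSQL primitives.

/*
 * Hypothetical shared state for a parallel bitmap heap scan's prefetching.
 * Only the cheap bookkeeping is done under the mutex; the expensive
 * PrefetchBuffer() call is issued after the lock is released.
 */
typedef struct ParallelBitmapState
{
    slock_t     mutex;           /* protects the fields below */
    int         prefetch_pages;  /* pages prefetched but not yet read;
                                  * decremented elsewhere (not shown) when a
                                  * page is actually read */
    int         prefetch_target; /* how far ahead we want to stay */
} ParallelBitmapState;

/*
 * Called by whichever worker is currently executing the scan node.
 */
static void
prefetch_more_pages(BitmapHeapScanState *node, ParallelBitmapState *pstate)
{
    while (true)
    {
        BlockNumber blockno = InvalidBlockNumber;

        SpinLockAcquire(&pstate->mutex);
        if (pstate->prefetch_pages < pstate->prefetch_target)
        {
            /* hypothetical: advance the shared prefetch iterator */
            blockno = next_prefetch_block(node);
            if (BlockNumberIsValid(blockno))
                pstate->prefetch_pages++;
        }
        SpinLockRelease(&pstate->mutex);

        if (!BlockNumberIsValid(blockno))
            break;              /* caught up, or iterator exhausted */

        /* The slow part: issue the asynchronous read hint, unlocked. */
        PrefetchBuffer(node->ss.ss_currentRelation, MAIN_FORKNUM, blockno);
    }
}

The point of this shape is that any worker arriving at the node can top up
the prefetch window, which matches the argument above: the mutex only
covers the trivial bookkeeping, while the costly prefetch itself runs
outside the lock, and no single worker being busy elsewhere in the plan
tree can stall prefetching for everyone.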

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


