
From: Amit Kapila
Subject: Re: Parallel Seq Scan
Msg-id: CAA4eK1K9BDMMUU+6w0rEg0=W6G=mLL=o-QabrYN1Ed7a=px4BQ@mail.gmail.com
In response to: Re: Parallel Seq Scan (Thom Brown <thom@linux.com>)
List: pgsql-hackers
On Thu, Nov 12, 2015 at 9:05 PM, Thom Brown <thom@linux.com> wrote:
>
> On 12 November 2015 at 15:23, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > On Wed, Nov 11, 2015 at 11:29 PM, Pavel Stehule <pavel.stehule@gmail.com>
> > wrote:
> >>
> >> Hi
> >>
> >> I have a first query
> >>
> >> I looked at the EXPLAIN ANALYZE output and the numbers of filtered rows
> >> are different.
> >>
> >
> > Thanks for the report.  The reason for this problem is that instrumentation
> > information from the workers is being aggregated multiple times.  In
> > ExecShutdownGatherWorkers(), we call ExecParallelFinish(), which waits for
> > the workers to finish and then accumulates their stats.
> > Now, ExecShutdownGatherWorkers() can be called multiple times (once after
> > we have read all tuples from the workers, and again at the end of the node),
> > and repeated calls must not redo the work done by the first call.
> > The same is ensured for tuple queues, but not for the parallel executor info.
> > I think we can safely assume that ExecParallelFinish() needs to be called
> > only when workers were started by the Gather node, so the attached patch,
> > written along those lines, should fix the problem.
>
> That fixes the count issue for me, although not the number of buffers
> hit,
>

The number of shared buffers hit can differ across runs because the read
sequence of the parallel workers is not guaranteed; I don't think it is
guaranteed even for a plain Seq Scan node, since other concurrent activity
can lead to a different number.  The actual problem, however, is that in one
of the plans you showed [1], the buffers hit at the Gather node (175288) is
lower than the buffers hit at the Parallel Seq Scan node (241201).
After applying the above patch, do you still see the Gather node reporting
fewer buffers hit than the Parallel Seq Scan node?

[1]
 # explain (analyse, buffers, timing, verbose, costs) select count(*) from js where content->'tags'->>'title' like '%design%';
                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=132489.34..132489.35 rows=1 width=0) (actual time=382.987..382.987 rows=1 loops=1)
   Output: count(*)
   Buffers: shared hit=175288
   ->  Gather  (cost=1000.00..132488.34 rows=401 width=0) (actual time=382.983..382.983 rows=0 loops=1)
         Output: content
         Number of Workers: 1
         Buffers: shared hit=175288
         ->  Parallel Seq Scan on public.js  (cost=0.00..131448.24 rows=401 width=0) (actual time=379.407..1141.437 rows=0 loops=1)
               Output: content
               Filter: (((js.content -> 'tags'::text) ->> 'title'::text) ~~ '%design%'::text)
               Rows Removed by Filter: 1724810
               Buffers: shared hit=241201
 Planning time: 0.104 ms
 Execution time: 403.045 ms
(14 rows)

Time: 403.596 ms

> or the actual time taken.
>

Exactly which time are you referring to here, the Execution Time or the
actual time shown on the Parallel Seq Scan node, and what problem do you see
with the reported time?
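
Coming back to the instrumentation issue above, the shape of the fix is to
make the finish step a no-op after the first call.  Below is only a rough
sketch, not the attached patch; the "finished" flag is a field I am assuming
on ParallelExecutorInfo for illustration, and the rest follows my reading of
execParallel.c.

/*
 * Illustrative sketch only (the attached patch may differ): make the finish
 * step idempotent so that repeated calls from ExecShutdownGatherWorkers()
 * fold worker statistics into the leader's counters exactly once.  The
 * "finished" flag is an assumed field, added here purely for illustration.
 */
void
ExecParallelFinish(ParallelExecutorInfo *pei)
{
    int         i;

    if (pei->finished)
        return;                 /* stats were already accumulated once */

    /* First, wait for the workers to finish. */
    WaitForParallelWorkersToFinish(pei->pcxt);

    /* Next, fold each worker's buffer usage into the leader's totals. */
    for (i = 0; i < pei->pcxt->nworkers; ++i)
        InstrAccumParallelQuery(&pei->buffer_usage[i]);

    /* Finally, accumulate the per-node instrumentation, if any. */
    if (pei->instrumentation)
        ExecParallelRetrieveInstrumentation(pei->planstate,
                                            pei->instrumentation);

    pei->finished = true;
}

With the accumulation guarded this way, reaching the shutdown path both after
the last tuple and again at node end no longer double-counts the workers'
rows and buffer hits.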


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
