Thread: [HACKERS] Parallel Append implementation

[HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Currently an Append plan node does not execute its subplans in
parallel. There is no distribution of workers across its subplans. The
second subplan starts running only after the first subplan finishes,
although the individual subplans may be running parallel scans.

Secondly, we create a partial Append path for an appendrel, but we do
that only if all of its member subpaths are partial paths. If one or
more of the subplans is a non-parallel path, there will be only a
non-parallel Append. So whatever node is sitting on top of Append is
not going to do a parallel plan; for example, a select count(*) won't
divide it into partial aggregates if the underlying Append is not
partial.

The attached patch removes both of the above restrictions.  There has
already been a mail thread [1] that discusses an approach suggested by
Robert Haas for implementing this feature. This patch uses this same
approach.

Attached is pgbench_create_partition.sql (derived from the one
included in the above thread) that distributes pgbench_accounts table
data into 3 partitions pgbench_accounts_[1-3]. The queries below use
this schema.

Consider a query such as :
select count(*) from pgbench_accounts;

Now suppose two of the partitions do not allow a parallel scan :
alter table pgbench_accounts_1 set (parallel_workers=0);
alter table pgbench_accounts_2 set (parallel_workers=0);

On HEAD, because some of the partitions have only non-parallel scans, the
whole Append is non-parallel :

 Aggregate
   ->  Append
         ->  Index Only Scan using pgbench_accounts_pkey on pgbench_accounts
         ->  Seq Scan on pgbench_accounts_1
         ->  Seq Scan on pgbench_accounts_2
         ->  Seq Scan on pgbench_accounts_3

Whereas, with the patch, the Append looks like this :

 Finalize Aggregate
   ->  Gather
         Workers Planned: 6
         ->  Partial Aggregate
               ->  Parallel Append
                     ->  Parallel Seq Scan on pgbench_accounts
                     ->  Seq Scan on pgbench_accounts_1
                     ->  Seq Scan on pgbench_accounts_2
                     ->  Parallel Seq Scan on pgbench_accounts_3

Above, Parallel Append is generated, and it executes all these
subplans in parallel, with 1 worker executing each of the sequential
scans, and multiple workers executing each of the parallel subplans.


======= Implementation details ========

------- Adding parallel-awareness -------

In a given worker, this Append plan node will execute just like the
usual partial Append node: it will run a subplan until completion. The
subplan may or may not be a partial parallel-aware plan such as a
Parallel Seq Scan. After the subplan is done, Append will choose the
next subplan. It is here that it differs from the current partial Append
plan: it is parallel-aware. The Append nodes in the workers will be
aware that there are other Append nodes running in parallel, and will
have to coordinate with them while choosing the next subplan.

------- Distribution of workers --------

The coordination info is stored in a shared array, each element of
which describes the per-subplan info. This info contains the number of
workers currently executing the subplan, and the maximum number of
workers that should be executing it at the same time. For non-partial
subplans, max workers would always be 1. To choose the next subplan, the
Append executor takes a spinlock on the array and sequentially iterates
over it to find a subplan that currently has the fewest workers
executing it AND is not already being executed by the maximum number of
workers assigned for it. Once it finds one, it increments that subplan's
current worker count and releases the spinlock, so that other workers
waiting to choose their next subplan can proceed.

This way, workers are distributed fairly across subplans.
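To make that concrete, below is a simplified sketch of the
choose-next-subplan step. The structure names (ParallelAppendDescData,
pa_info, pa_num_workers) match what the patch uses elsewhere, but the
max-workers field name and the function body are only illustrative, not
the actual patch code:

/* Per-subplan state in the shared array (sketch; field names approximate). */
typedef struct ParallelAppendInfo
{
    int     pa_num_workers;     /* workers currently running this subplan */
    int     pa_max_workers;     /* 1 for a non-partial subplan */
} ParallelAppendInfo;

typedef struct ParallelAppendDescData
{
    slock_t             pa_mutex;   /* protects the array below */
    ParallelAppendInfo  pa_info[FLEXIBLE_ARRAY_MEMBER];
} ParallelAppendDescData;

/* Pick the least-loaded subplan that still has room; -1 means nothing left. */
static int
choose_next_subplan(ParallelAppendDescData *padesc, int nplans)
{
    int     i;
    int     min_whichplan = -1;

    SpinLockAcquire(&padesc->pa_mutex);
    for (i = 0; i < nplans; i++)
    {
        ParallelAppendInfo *painfo = &padesc->pa_info[i];

        /* Skip subplans already running with their maximum worker count. */
        if (painfo->pa_num_workers >= painfo->pa_max_workers)
            continue;

        /* Remember the least-loaded eligible subplan seen so far. */
        if (min_whichplan == -1 ||
            painfo->pa_num_workers <
            padesc->pa_info[min_whichplan].pa_num_workers)
            min_whichplan = i;
    }

    /* Claim the chosen subplan before letting other workers in. */
    if (min_whichplan != -1)
        padesc->pa_info[min_whichplan].pa_num_workers++;
    SpinLockRelease(&padesc->pa_mutex);

    return min_whichplan;
}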

The shared array needs to be initialized and made available to the
workers. For this, we can do exactly what sequential scan does to be
parallel-aware : use a function ExecAppendInitializeDSM(), similar to
ExecSeqScanInitializeDSM(), in the backend to allocate the array, and
similarly an ExecAppendInitializeWorker() for the workers to retrieve
the shared array.
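For reference, a rough sketch of what that could look like, modeled on
nodeSeqscan.c. The bodies are illustrative only, and the matching
ExecAppendEstimate step that reserves space in the toc is omitted here:

void
ExecAppendInitializeDSM(AppendState *node, ParallelContext *pcxt)
{
    ParallelAppendDescData *padesc;
    Size        size;

    /* One ParallelAppendInfo element per subplan, plus the header. */
    size = add_size(offsetof(ParallelAppendDescData, pa_info),
                    mul_size(sizeof(ParallelAppendInfo), node->as_nplans));

    padesc = shm_toc_allocate(pcxt->toc, size);
    SpinLockInit(&padesc->pa_mutex);
    memset(padesc->pa_info, 0,
           mul_size(sizeof(ParallelAppendInfo), node->as_nplans));
    /* The per-subplan max worker counts would be filled in here as well:
     * 1 for a non-partial subplan, its parallel_workers otherwise. */

    /* Key the shared state by this plan node's id, as other nodes do. */
    shm_toc_insert(pcxt->toc, node->ps.plan->plan_node_id, padesc);
    node->as_padesc = padesc;
}

void
ExecAppendInitializeWorker(AppendState *node, shm_toc *toc)
{
    /* Workers just attach to the array allocated by the leader. */
    node->as_padesc = shm_toc_lookup(toc, node->ps.plan->plan_node_id);
}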


-------- Generating Partial Append plan having non-partial subplans --------

In set_append_rel_pathlist(), while generating a partial path for
Append, also include the non-partial child subpaths besides the partial
subpaths. This way, the partial Append path can contain a mix of partial
and non-partial child paths. But for a given child, its path will be
either the cheapest partial path or the cheapest non-partial path.

A non-partial child path is included only if it is parallel-safe. If a
child has no parallel-safe path, a partial Append path is not generated
at all. This behaviour also automatically prevents paths that have a
Gather node beneath them.

Finally, when it comes to creating a partial Append path using these
child paths, we also need to store a bitmapset indicating which of the
child paths are non-partial paths. For this, add a new Bitmapset field :
Append->partial_subplans. At execution time, this is used to set the
maximum workers for a non-partial subpath to 1.
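The shape of the path-generation change, roughly. This is illustrative
pseudo-code rather than the patch itself; in particular, it simply
checks whether the cheapest total path is parallel-safe, whereas the
patch may instead search for the cheapest parallel-safe one:

List       *partial_subpaths = NIL;
Bitmapset  *partial_subplans = NULL;    /* which children are partial */
bool        ok = true;
int         i = 0;
ListCell   *lc;

foreach(lc, live_childrels)
{
    RelOptInfo *childrel = (RelOptInfo *) lfirst(lc);

    if (childrel->partial_pathlist != NIL)
    {
        /* Use the cheapest partial path and remember that it is partial. */
        partial_subpaths = lappend(partial_subpaths,
                                   linitial(childrel->partial_pathlist));
        partial_subplans = bms_add_member(partial_subplans, i);
    }
    else if (childrel->cheapest_total_path->parallel_safe)
    {
        /* Fall back to a parallel-safe non-partial path for this child. */
        partial_subpaths = lappend(partial_subpaths,
                                   childrel->cheapest_total_path);
    }
    else
    {
        /* No parallel-safe path: give up on the partial Append path. */
        ok = false;
        break;
    }
    i++;
}
/* If ok, build the partial Append path from partial_subpaths, passing
 * partial_subplans along so the executor knows the non-partial ones. */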


-------- Costing -------

For calculating the per-worker cost of a parallel Append path, it first
calculates a total of the child subplan costs considering all of their
workers, and then divides it by the Append node's parallel_divisor,
similar to how a parallel scan uses the "parallel_divisor".

For the startup cost, it is assumed that Append will start returning
tuples as soon as the child with the lowest startup cost is done setting
up. So the Append startup cost is the minimum of the child startup
costs.
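In other words, roughly the following. This is only an illustration of
the shape of the calculation; get_parallel_divisor() here is the patch's
variant that takes a worker count:

Cost        startup = -1;
double      total = 0.0;
ListCell   *lc;

foreach(lc, subpaths)
{
    Path   *subpath = (Path *) lfirst(lc);
    int     nworkers = Max(subpath->parallel_workers, 1);

    /* A partial path's cost is per worker, so scale it back up first. */
    total += subpath->total_cost * nworkers;

    /* Append starts returning tuples once the cheapest-to-start child does. */
    if (startup < 0 || subpath->startup_cost < startup)
        startup = subpath->startup_cost;
}

apath->path.startup_cost = startup;
apath->path.total_cost =
    total / get_parallel_divisor(apath->path.parallel_workers);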


-------- Scope --------

There are two different code paths where an Append path is generated.
1. One is where an append rel is generated : inheritance tables, and the
UNION ALL clause.
2. The second code path is in prepunion.c. This gets executed for UNION
(without ALL) and INTERSECT/EXCEPT [ALL]. The patch does not support
Parallel Append in this scenario. It can be taken up later as an
extension, once this patch is reviewed.


======= Performance =======

There is a clear benefit from Parallel Append in scenarios where one or
more subplans don't have partial paths, because in such cases HEAD does
not generate a partial Append at all. For example, the query below took
around 30 secs with the patch (max_parallel_workers_per_gather set to 3
or more), whereas it took 74 secs on HEAD.

explain analyze select avg(aid) from (
select aid from pgbench_accounts_1 inner join bid_tab b using (bid)
UNION ALL
select aid from pgbench_accounts_2 inner join bid_tab using (bid)
UNION ALL
select aid from pgbench_accounts_3 inner join bid_tab using (bid)
) p;

--- With HEAD ---

QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=6415493.67..6415493.67 rows=1 width=32) (actual time=74135.821..74135.822 rows=1 loops=1)
   ->  Append  (cost=1541552.36..6390743.54 rows=9900047 width=4) (actual time=73829.985..74125.048 rows=100000 loops=1)
         ->  Hash Join  (cost=1541552.36..2097249.67 rows=3300039 width=4) (actual time=25758.592..25758.592 rows=0 loops=1)
               Hash Cond: (pgbench_accounts_1.bid = b.bid)
               ->  Seq Scan on pgbench_accounts_1  (cost=0.00..87099.39 rows=3300039 width=8) (actual time=0.118..778.097 rows=3300000 loops=1)
               ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=24426.433..24426.433 rows=49999902 loops=1)
                     Buckets: 131072  Batches: 1024  Memory Usage: 2744kB
                     ->  Seq Scan on bid_tab b  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.105..10112.995 rows=49999902 loops=1)
         ->  Hash Join  (cost=1541552.36..2097249.67 rows=3300039 width=4) (actual time=24063.761..24063.761 rows=0 loops=1)
               Hash Cond: (pgbench_accounts_2.bid = bid_tab.bid)
               ->  Seq Scan on pgbench_accounts_2  (cost=0.00..87099.39 rows=3300039 width=8) (actual time=0.065..779.498 rows=3300000 loops=1)
               ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=22708.377..22708.377 rows=49999902 loops=1)
                     Buckets: 131072  Batches: 1024  Memory Usage: 2744kB
                     ->  Seq Scan on bid_tab  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.024..9513.032 rows=49999902 loops=1)
         ->  Hash Join  (cost=1541552.36..2097243.73 rows=3299969 width=4) (actual time=24007.628..24297.067 rows=100000 loops=1)
               Hash Cond: (pgbench_accounts_3.bid = bid_tab_1.bid)
               ->  Seq Scan on pgbench_accounts_3  (cost=0.00..87098.69 rows=3299969 width=8) (actual time=0.049..782.230 rows=3300000 loops=1)
               ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=22943.413..22943.413 rows=49999902 loops=1)
                     Buckets: 131072  Batches: 1024  Memory Usage: 2744kB
                     ->  Seq Scan on bid_tab bid_tab_1  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.022..9601.753 rows=49999902 loops=1)
 Planning time: 0.366 ms
 Execution time: 74138.043 ms
(22 rows)


--- With Patch ---

       QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=2139493.66..2139493.67 rows=1 width=32) (actual time=29658.825..29658.825 rows=1 loops=1)
   ->  Gather  (cost=2139493.34..2139493.65 rows=3 width=32) (actual time=29568.957..29658.804 rows=4 loops=1)
         Workers Planned: 3
         Workers Launched: 3
         ->  Partial Aggregate  (cost=2138493.34..2138493.35 rows=1 width=32) (actual time=22086.324..22086.325 rows=1 loops=4)
               ->  Parallel Append  (cost=0.00..2130243.42 rows=3299969 width=4) (actual time=22008.945..22083.536 rows=25000 loops=4)
                     ->  Hash Join  (cost=1541552.36..2097243.73 rows=3299969 width=4) (actual time=29568.605..29568.605 rows=0 loops=1)
                           Hash Cond: (pgbench_accounts_1.bid = b.bid)
                           ->  Seq Scan on pgbench_accounts_1  (cost=0.00..87098.69 rows=3299969 width=8) (actual time=0.024..841.598 rows=3300000 loops=1)
                           ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=28134.596..28134.596 rows=49999902 loops=1)
                                 Buckets: 131072  Batches: 1024  Memory Usage: 2744kB
                                 ->  Seq Scan on bid_tab b  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.076..11598.097 rows=49999902 loops=1)
                     ->  Hash Join  (cost=1541552.36..2097243.73 rows=3299969 width=4) (actual time=29127.085..29127.085 rows=0 loops=1)
                           Hash Cond: (pgbench_accounts_2.bid = bid_tab.bid)
                           ->  Seq Scan on pgbench_accounts_2  (cost=0.00..87098.69 rows=3299969 width=8) (actual time=0.022..837.027 rows=3300000 loops=1)
                           ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=27658.276..27658.276 rows=49999902 loops=1)
                                 ->  Seq Scan on bid_tab  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.022..11561.530 rows=49999902 loops=1)
                     ->  Hash Join  (cost=1541552.36..2097243.73 rows=3299969 width=4) (actual time=29340.081..29632.180 rows=100000 loops=1)
                           Hash Cond: (pgbench_accounts_3.bid = bid_tab_1.bid)
                           ->  Seq Scan on pgbench_accounts_3  (cost=0.00..87098.69 rows=3299969 width=8) (actual time=0.027..821.875 rows=3300000 loops=1)
                           ->  Hash  (cost=721239.16..721239.16 rows=50000016 width=4) (actual time=28186.009..28186.009 rows=49999902 loops=1)
                                 ->  Seq Scan on bid_tab bid_tab_1  (cost=0.00..721239.16 rows=50000016 width=4) (actual time=0.019..11594.461 rows=49999902 loops=1)
 Planning time: 0.493 ms
 Execution time: 29662.791 ms
(24 rows)

Thanks to Robert Haas and Rushabh Lathia for their valuable inputs
while working on this feature.

[1] Old mail thread :

https://www.postgresql.org/message-id/flat/9A28C8860F777E439AA12E8AEA7694F80115DEB8%40BPXM15GP.gisp.nec.co.jp#9A28C8860F777E439AA12E8AEA7694F80115DEB8@BPXM15GP.gisp.nec.co.jp

Thanks
-Amit Khandekar



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Fri, Dec 23, 2016 at 10:51 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Currently an Append plan node does not execute its subplans in
> parallel. There is no distribution of workers across its subplans. The
> second subplan starts running only after the first subplan finishes,
> although the individual subplans may be running parallel scans.
>
> Secondly, we create a partial Append path for an appendrel, but we do
> that only if all of its member subpaths are partial paths. If one or
> more of the subplans is a non-parallel path, there will be only a
> non-parallel Append. So whatever node is sitting on top of Append is
> not going to do a parallel plan; for example, a select count(*) won't
> divide it into partial aggregates if the underlying Append is not
> partial.
>
> The attached patch removes both of the above restrictions.  There has
> already been a mail thread [1] that discusses an approach suggested by
> Robert Haas for implementing this feature. This patch uses this same
> approach.

The first goal requires some kind of synchronization which will allow workers
to be distributed across the subplans. The second goal requires some kind of
synchronization to prevent multiple workers from executing non-parallel
subplans. The patch uses different mechanisms to achieve the goals. If
we create two different patches addressing each goal, those may be
easier to handle.

We may want to think about a third goal: preventing too many workers
from executing the same plan. As per comment in get_parallel_divisor()
we do not see any benefit if more than 4 workers execute the same
node. So, an append node can distribute more than 4 worker nodes
equally among the available subplans. It might be better to do that as
a separate patch.

>
> Attached is pgbench_create_partition.sql (derived from the one
> included in the above thread) that distributes pgbench_accounts table
> data into 3 partitions pgbench_account_[1-3]. The below queries use
> this schema.
>
> Consider a query such as :
> select count(*) from pgbench_accounts;
>
> Now suppose, these two partitions do not allow parallel scan :
> alter table pgbench_accounts_1 set (parallel_workers=0);
> alter table pgbench_accounts_2 set (parallel_workers=0);
>
> On HEAD, due to some of the partitions having non-parallel scans, the
> whole Append would be a sequential scan :
>
>  Aggregate
>    ->  Append
>          ->  Index Only Scan using pgbench_accounts_pkey on pgbench_accounts
>          ->  Seq Scan on pgbench_accounts_1
>          ->  Seq Scan on pgbench_accounts_2
>          ->  Seq Scan on pgbench_accounts_3
>
> Whereas, with the patch, the Append looks like this :
>
>  Finalize Aggregate
>    ->  Gather
>          Workers Planned: 6
>          ->  Partial Aggregate
>                ->  Parallel Append
>                      ->  Parallel Seq Scan on pgbench_accounts
>                      ->  Seq Scan on pgbench_accounts_1
>                      ->  Seq Scan on pgbench_accounts_2
>                      ->  Parallel Seq Scan on pgbench_accounts_3
>
> Above, Parallel Append is generated, and it executes all these
> subplans in parallel, with 1 worker executing each of the sequential
> scans, and multiple workers executing each of the parallel subplans.
>
>
> ======= Implementation details ========
>
> ------- Adding parallel-awareness -------
>
> In a given worker, this Append plan node will be executing just like
> the usual partial Append node. It will run a subplan until completion.
> The subplan may or may not be a partial parallel-aware plan like
> parallelScan. After the subplan is done, Append will choose the next
> subplan. It is here where it will be different than the current
> partial Append plan: it is parallel-aware. The Append nodes in the
> workers will be aware that there are other Append nodes running in
> parallel. The partial Append will have to coordinate with other Append
> nodes while choosing the next subplan.
>
> ------- Distribution of workers --------
>
> The coordination info is stored in a shared array, each element of
> which describes the per-subplan info. This info contains the number of
> workers currently executing the subplan, and the maximum number of
> workers that should be executing it at the same time. For non-partial
> sublans, max workers would always be 1. For choosing the next subplan,
> the Append executor will sequentially iterate over the array to find a
> subplan having the least number of workers currently being executed,
> AND which is not already being executed by the maximum number of
> workers assigned for the subplan. Once it gets one, it increments
> current_workers, and releases the Spinlock, so that other workers can
> choose their next subplan if they are waiting.
>
> This way, workers would be fairly distributed across subplans.
>
> The shared array needs to be initialized and made available to
> workers. For this, we can do exactly what sequential scan does for
> being parallel-aware : Using function ExecAppendInitializeDSM()
> similar to ExecSeqScanInitializeDSM() in the backend to allocate the
> array. Similarly, for workers, have ExecAppendInitializeWorker() to
> retrieve the shared array.
>
>
> -------- Generating Partial Append plan having non-partial subplans --------
>
> In set_append_rel_pathlist(), while generating a partial path for
> Append, also include the non-partial child subpaths, besides the
> partial subpaths. This way, it can contain a mix of partial and
> non-partial children paths. But for a given child, its path would be
> either the cheapest partial path or the cheapest non-partial path.
>
> For a non-partial child path, it will only be included if it is
> parallel-safe. If there is no parallel-safe path, a partial Append
> path would not be generated. This behaviour also automatically
> prevents paths that have a Gather node beneath.
>
> Finally when it comes to create a partial append path using these
> child paths, we also need to store a bitmap set indicating which of
> the child paths are non-partial paths. For this, have a new BitmapSet
> field : Append->partial_subplans. At execution time, this will be used
> to set the maximum workers for a non-partial subpath to 1.
>

We may be able to eliminate this field. Please check comment 6 below.

>
> -------- Costing -------
>
> For calculating per-worker parallel Append path cost, it first
> calculates a total of child subplan costs considering all of their
> workers, and then divides it by the Append node's parallel_divisor,
> similar to how parallel scan uses this "parallel_divisor".
>
> For startup cost, it is assumed that Append would start returning
> tuples when the child node having the lowest startup cost is done
> setting up. So Append startup cost is equal to startup cost of the
> child with minimum startup cost.
>
>
> -------- Scope --------
>
> There are two different code paths where Append path is generated.
> 1. One is where append rel is generated : Inheritance table, and UNION
> ALL clause.
> 2. Second codepath is in prepunion.c. This gets executed for UNION
> (without ALL) and INTERSECT/EXCEPT [ALL]. The patch does not support
> Parallel Append in this scenario. It can be later taken up as
> extension, once this patch is reviewed.
>
>

Here are some review comments

1. struct ParallelAppendDescData is being used at other places. The declaration
style doesn't seem to be very common in the code or in the directory where the
file is located.
+struct ParallelAppendDescData
+{
+    slock_t        pa_mutex;        /* mutual exclusion to choose next subplan */
+    parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
+};
Defining it like
typedef struct ParallelAppendDescData
{
    slock_t        pa_mutex;        /* mutual exclusion to choose next subplan */
    parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
} ParallelAppendDescData;
will make its use handy. Instead of struct ParallelAppendDescData, you will
need to use just ParallelAppendDescData. Maybe we want to rename
parallel_append_info as ParallelAppendInfo and change the style to match other
declarations.

2. The comment below refers to the constant which it describes, which looks
odd. Maybe it should be worded as "A special value of
AppendState::as_whichplan, to indicate no plans left to be executed." Also,
using INVALID for "no plans left ..." seems to be a misnomer.
/*
 * For Parallel Append, AppendState::as_whichplan can have PA_INVALID_PLAN
 * value, which indicates there are no plans left to be executed.
 */
#define PA_INVALID_PLAN -1

3. The sentence "We have got NULL" looks odd. Probably we don't need it, as
it's clear from the code above that this code deals with the case when the
current subplan didn't return any row.
    /*
     * We have got NULL. There might be other workers still processing the
     * last chunk of rows for this same node, but there's no point for new
     * workers to run this node, so mark this node as finished.
     */
 
4. In the same comment, I guess the word "node" refers to the subnode and not
the node pointed to by the variable "node". Maybe you want to use the word
"subplan" here.

4. set_finished()'s prologue has different indentation compared to other
functions in the file.

5. A multi-line comment should start with an empty line (i.e. "/*" on its own line).
+        /* Keep track of the node with the least workers so far. For the very

6. By looking at the parallel_workers field of a path, we can say whether it's
partial or not. We probably do not need to maintain a bitmap for that in the
Append path. The bitmap can be constructed, if required, at the time of
creating the partial Append plan. The reasons to take this small step are: 1. we
want to minimize our work at the time of creating paths; 2. while freeing a
path in add_path, we don't free the internal structures, in this case the
Bitmap, which will waste memory if the path is not chosen while planning.

7. If we consider 6, we don't need concat_append_subpaths(), but still, here
are some comments about that function. Instead of accepting two separate
arguments, childpaths and child_partial_subpaths_set, which need to be in
sync, we can just pass the path which contains both of those. Also, the
following code may be optimized by adding a utility function to Bitmapset
which advances all members by a given offset, and using that function with
bms_union() to merge the bitmapsets, e.g.
bms_union(*partial_subpaths_set,
          bms_advance_members(bms_copy(child_partial_subpaths_set),
                              append_subpath_len));

    if (partial_subpaths_set)
    {
        for (i = 0; i < list_length(childpaths); i++)
        {
            /*
             * The child paths themselves may or may not be partial paths. The
             * bitmapset numbers of these paths will need to be set considering
             * that these are to be appended onto the partial_subpaths_set.
             */
            if (!child_partial_subpaths_set ||
                bms_is_member(i, child_partial_subpaths_set))
            {
                *partial_subpaths_set = bms_add_member(*partial_subpaths_set,
                                                       append_subpath_len + i);
            }
        }
    }

8.
-            parallel_workers = Max(parallel_workers, path->parallel_workers);
+            /*
+             * partial_subpaths can have non-partial subpaths so
+             * path->parallel_workers can be 0. For such paths, allocate one
+             * worker.
+             */
+            parallel_workers +=
+                (path->parallel_workers > 0 ? path->parallel_workers : 1);

This looks odd. Earlier code was choosing maximum of all parallel workers,
whereas new code adds them all. E.g. if parallel_workers for subpaths is 3, 4,
3, without your change, it will pick up 4. But with your change it will pick
10. I think, you intend to write this as
parallel_workers = Max(parallel_workers, path->parallel_workers ?
path->parallel_workers : 1);

If you do that, you probably don't need the clamp below, since
parallel_workers is never set to more than max_parallel_workers_per_gather.
+        /* In no case use more than max_parallel_workers_per_gather. */
+        parallel_workers = Min(parallel_workers,
+                               max_parallel_workers_per_gather);
+

9. Shouldn't this function return double?
int
get_parallel_divisor(int parallel_workers)

9. In get_parallel_divisor(), if parallel_workers is 0, i.e. it's a partial
path, the return value will be 2, which isn't true. This function is being
called for all the subpaths to get the original number of rows and costs of
partial paths. I think we either don't need to call this function on subpaths
which are not partial paths, or should make it work for parallel_workers = 0.

10. We should probably move the parallel_safe calculation out of cost_append().
+            path->parallel_safe = path->parallel_safe &&
+                                  subpath->parallel_safe;

11. This check shouldn't be part of cost_append().
+            /* All child paths must have same parameterization */
+            Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer));

12. cost_append() essentially adds costs of all the subpaths and then divides
by parallel_divisor. This might work if all the subpaths are partial paths. But
for the subpaths which are not partial, a single worker will incur the whole
cost of that subpath. Hence just dividing all the total cost doesn't seem the
right thing to do. We should apply different logic for costing non-partial
subpaths and partial subpaths.

13. No braces required for single line block
+    /* Increment worker count for the chosen node, if at all we found one. */
+    if (min_whichplan != PA_INVALID_PLAN)
+    {
+        padesc->pa_info[min_whichplan].pa_num_workers++;
+    }

14. exec_append_scan_first() is a one-liner function, should we just inline it?

15. This patch replaces exec_append_initialize_next() with
exec_append_scan_first(). The earlier function was handling backward and
forward scans separately, but the latter doesn't do that. Why?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Thanks Ashutosh for the feedback.

On 6 January 2017 at 17:04, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Fri, Dec 23, 2016 at 10:51 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> Currently an Append plan node does not execute its subplans in
>> parallel. There is no distribution of workers across its subplans. The
>> second subplan starts running only after the first subplan finishes,
>> although the individual subplans may be running parallel scans.
>>
>> Secondly, we create a partial Append path for an appendrel, but we do
>> that only if all of its member subpaths are partial paths. If one or
>> more of the subplans is a non-parallel path, there will be only a
>> non-parallel Append. So whatever node is sitting on top of Append is
>> not going to do a parallel plan; for example, a select count(*) won't
>> divide it into partial aggregates if the underlying Append is not
>> partial.
>>
>> The attached patch removes both of the above restrictions.  There has
>> already been a mail thread [1] that discusses an approach suggested by
>> Robert Haas for implementing this feature. This patch uses this same
>> approach.
>
> The first goal requires some kind of synchronization which will allow workers
> to be distributed across the subplans. The second goal requires some kind of
> synchronization to prevent multiple workers from executing non-parallel
> subplans. The patch uses different mechanisms to achieve the goals. If
> we create two different patches addressing each goal, those may be
> easier to handle.

Goal A : Allow non-partial subpaths in Partial Append.
Goal B : Distribute workers across the Append subplans.
Both of these require some kind of synchronization while choosing the
next subplan. So, goal B is achieved by doing all the synchronization
stuff. And implementation of goal A requires that goal B is
implemented. So there is a dependency between these two goals. While
implementing goal B, we should keep in mind that it should also work
for goal A; it does not make sense later changing the synchronization
logic in goal A.

I am ok with splitting the patch into 2 patches :
a) changes required for goal A
b) changes required for goal B.
But I think we should split it only when we are ready to commit them
(commit for B, immediately followed by commit for A). Until then, we
should consider both of these together because they are interconnected
as explained above.

>
> We may want to think about a third goal: preventing too many workers
> from executing the same plan. As per comment in get_parallel_divisor()
> we do not see any benefit if more than 4 workers execute the same
> node. So, an append node can distribute more than 4 worker nodes
> equally among the available subplans. It might be better to do that as
> a separate patch.

I think that comment is for calculating leader contribution. It does
not say that 4 workers is too many workers in general.

But yes, I agree, and I have it in mind as the next improvement.
Basically, it does not make sense to give more than 3 workers to a
subplan when parallel_workers for that subplan is 3. For example, if the
Gather max workers is 10, and we have 2 Append subplans s1 and s2 with
parallel workers 3 and 5 respectively, then with the current patch it
will distribute 4 workers to each of these subplans. What we should do
is : once both of the subplans get 3 workers each, we should give the
7th and 8th workers to s2.

Now that I think of that, I think for implementing above, we need to
keep track of per-subplan max_workers in the Append path; and with
that, the bitmap will be redundant. Instead, it can be replaced with
max_workers. Let me check if it is easy to do that. We don't want to
have the bitmap if we are sure it would be replaced by some other data
structure.


> Here are some review comments
I will handle the other comments, but first, just a quick response to
some important ones :

> 6. By looking at parallel_worker field of a path, we can say whether it's
> partial or not. We probably do not require to maintain a bitmap for that at in
> the Append path. The bitmap can be constructed, if required, at the time of
> creating the partial append plan. The reason to take this small step is 1. we
> want to minimize our work at the time of creating paths, 2. while freeing a
> path in add_path, we don't free the internal structures, in this case the
> Bitmap, which will waste memory if the path is not chosen while planning.

Let me try keeping the per-subplan max_worker info in Append path
itself, like I mentioned above. If that works, the bitmap will be
replaced by max_worker field. In case of non-partial subpath,
max_worker will be 1. (this is the same info kept in AppendState node
in the patch, but now we might need to keep it in Append path node as
well).

>
> 7. If we consider 6, we don't need concat_append_subpaths(), but still here are
> some comments about that function. Instead of accepting two separate arguments
> childpaths and child_partial_subpaths_set, which need to be in sync, we can
> just pass the path which contains both of those. In the same following code may
> be optimized by adding a utility function to Bitmapset, which advances
> all members
> by given offset and using that function with bms_union() to merge the
> bitmapset e.g.
> bms_union(*partial_subpaths_set,
> bms_advance_members(bms_copy(child_partial_subpaths_set), append_subpath_len));
>     if (partial_subpaths_set)
>     {
>         for (i = 0; i < list_length(childpaths); i++)
>         {
>             /*
>              * The child paths themselves may or may not be partial paths. The
>              * bitmapset numbers of these paths will need to be set considering
>              * that these are to be appended onto the partial_subpaths_set.
>              */
>             if (!child_partial_subpaths_set ||
>                 bms_is_member(i, child_partial_subpaths_set))
>             {
>                 *partial_subpaths_set = bms_add_member(*partial_subpaths_set,
>                                                        append_subpath_len + i);
>             }
>         }
>     }

Again, for the reason mentioned above, we will defer this point for now.

>
> 8.
> -            parallel_workers = Max(parallel_workers, path->parallel_workers);
> +            /*
> +             * partial_subpaths can have non-partial subpaths so
> +             * path->parallel_workers can be 0. For such paths, allocate one
> +             * worker.
> +             */
> +            parallel_workers +=
> +                (path->parallel_workers > 0 ? path->parallel_workers : 1);
>
> This looks odd. Earlier code was choosing maximum of all parallel workers,
> whereas new code adds them all. E.g. if parallel_workers for subpaths is 3, 4,
> 3, without your change, it will pick up 4. But with your change it will pick
> 10. I think, you intend to write this as
> parallel_workers = Max(parallel_workers, path->parallel_workers ?
> path->parallel_workers : 1);
The intention is to add all the workers, because a parallel-aware Append
is going to need them in order to run the subplans at their full
capacity in parallel. So with subpaths having 3, 4, and 3 workers, the
Append path will need 10 workers. If it allocates only 4 workers, that
is not sufficient : each subplan would get only 1 worker, or at most 2.
In the existing code, 4 is correct, because all the workers are going to
execute the same subplan node at a time.


>
> 9. Shouldn't this funciton return double?
> int
> get_parallel_divisor(int parallel_workers)
Yes, right, I will do that.

>
> 9. In get_parallel_divisor(), if parallel_worker is 0 i.e. it's a partial path
> the return value will be 2, which isn't true. This function is being called for
> all the subpaths to get the original number of rows and costs of partial paths.
> I think we don't need to call this function on subpaths which are not partial
> paths or make it work parallel_workers = 0.
I didn't understand this. I checked again get_parallel_divisor()
function code. I think it will return 1 if parallel_workers is 0. But
I may be missing something.


> 12. cost_append() essentially adds costs of all the subpaths and then divides
> by parallel_divisor. This might work if all the subpaths are partial paths. But
> for the subpaths which are not partial, a single worker will incur the whole
> cost of that subpath. Hence just dividing all the total cost doesn't seem the
> right thing to do. We should apply different logic for costing non-partial
> subpaths and partial subpaths.

With the current partial path costing infrastructure, it is assumed
that a partial path node should return the average per-worker cost.
Hence, I thought it would be best to do it in a similar way for
Append. But let me think if we can do something. With the current
parallelism costing infrastructure, I am not sure though.

>
> 13. No braces required for single line block
> +    /* Increment worker count for the chosen node, if at all we found one. */
> +    if (min_whichplan != PA_INVALID_PLAN)
> +    {
> +        padesc->pa_info[min_whichplan].pa_num_workers++;
> +    }
>
> 14. exec_append_scan_first() is a one-liner function, should we just inline it?
>
> 15. This patch replaces exec_append_initialize_next() with
> exec_append_scan_first(). The earlier function was handling backward and
> forward scans separately, but the later function doesn't do that. Why?

I will come to these and some other ones later.

>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Mon, Jan 16, 2017 at 9:49 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Thanks Ashutosh for the feedback.
>
> On 6 January 2017 at 17:04, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> On Fri, Dec 23, 2016 at 10:51 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> Currently an Append plan node does not execute its subplans in
>>> parallel. There is no distribution of workers across its subplans. The
>>> second subplan starts running only after the first subplan finishes,
>>> although the individual subplans may be running parallel scans.
>>>
>>> Secondly, we create a partial Append path for an appendrel, but we do
>>> that only if all of its member subpaths are partial paths. If one or
>>> more of the subplans is a non-parallel path, there will be only a
>>> non-parallel Append. So whatever node is sitting on top of Append is
>>> not going to do a parallel plan; for example, a select count(*) won't
>>> divide it into partial aggregates if the underlying Append is not
>>> partial.
>>>
>>> The attached patch removes both of the above restrictions.  There has
>>> already been a mail thread [1] that discusses an approach suggested by
>>> Robert Haas for implementing this feature. This patch uses this same
>>> approach.
>>
>> The first goal requires some kind of synchronization which will allow workers
>> to be distributed across the subplans. The second goal requires some kind of
>> synchronization to prevent multiple workers from executing non-parallel
>> subplans. The patch uses different mechanisms to achieve the goals. If
>> we create two different patches addressing each goal, those may be
>> easier to handle.
>
> Goal A : Allow non-partial subpaths in Partial Append.
> Goal B : Distribute workers across the Append subplans.
> Both of these require some kind of synchronization while choosing the
> next subplan. So, goal B is achieved by doing all the synchronization
> stuff. And implementation of goal A requires that goal B is
> implemented. So there is a dependency between these two goals. While
> implementing goal B, we should keep in mind that it should also work
> for goal A; it does not make sense later changing the synchronization
> logic in goal A.
>
> I am ok with splitting the patch into 2 patches :
> a) changes required for goal A
> b) changes required for goal B.
> But I think we should split it only when we are ready to commit them
> (commit for B, immediately followed by commit for A). Until then, we
> should consider both of these together because they are interconnected
> as explained above.

For B, we need to know, how much gain that brings and in which cases.
If that gain is not worth the complexity added, we may have to defer
Goal B. Goal A would certainly be useful since it will improve
performance of the targetted cases. The synchronization required for
Goal A is simpler than that of B and thus if we choose to implement
only A, we can live with a simpler synchronization.

BTW, right now, the patch does not consider non-partial paths for a
child which has partial paths. Do we know, for sure, that a path
containing partial paths for a child which has them is always going to
be cheaper than the one which includes the non-partial path? If not,
should we build another path which contains non-partial paths for all
child relations? This sounds like a 0/1 knapsack problem.

>
>
>> Here are some review comments
> I will handle the other comments, but first, just a quick response to
> some important ones :
>
>> 6. By looking at parallel_worker field of a path, we can say whether it's
>> partial or not. We probably do not require to maintain a bitmap for that at in
>> the Append path. The bitmap can be constructed, if required, at the time of
>> creating the partial append plan. The reason to take this small step is 1. we
>> want to minimize our work at the time of creating paths, 2. while freeing a
>> path in add_path, we don't free the internal structures, in this case the
>> Bitmap, which will waste memory if the path is not chosen while planning.
>
> Let me try keeping the per-subplan max_worker info in Append path
> itself, like I mentioned above. If that works, the bitmap will be
> replaced by max_worker field. In case of non-partial subpath,
> max_worker will be 1. (this is the same info kept in AppendState node
> in the patch, but now we might need to keep it in Append path node as
> well).

It will be better if we can fetch that information from each subpath
when creating the plan. As I have explained before, a path is a minimal
structure, which should be easily disposable when throwing away the
path.

>
>>
>> 7. If we consider 6, we don't need concat_append_subpaths(), but still here are
>> some comments about that function. Instead of accepting two separate arguments
>> childpaths and child_partial_subpaths_set, which need to be in sync, we can
>> just pass the path which contains both of those. In the same following code may
>> be optimized by adding a utility function to Bitmapset, which advances
>> all members
>> by given offset and using that function with bms_union() to merge the
>> bitmapset e.g.
>> bms_union(*partial_subpaths_set,
>> bms_advance_members(bms_copy(child_partial_subpaths_set), append_subpath_len));
>>     if (partial_subpaths_set)
>>     {
>>         for (i = 0; i < list_length(childpaths); i++)
>>         {
>>             /*
>>              * The child paths themselves may or may not be partial paths. The
>>              * bitmapset numbers of these paths will need to be set considering
>>              * that these are to be appended onto the partial_subpaths_set.
>>              */
>>             if (!child_partial_subpaths_set ||
>>                 bms_is_member(i, child_partial_subpaths_set))
>>             {
>>                 *partial_subpaths_set = bms_add_member(*partial_subpaths_set,
>>                                                        append_subpath_len + i);
>>             }
>>         }
>>     }
>
> Again, for the reason mentioned above, we will defer this point for now.

Ok.

>
>>
>> 8.
>> -            parallel_workers = Max(parallel_workers, path->parallel_workers);
>> +            /*
>> +             * partial_subpaths can have non-partial subpaths so
>> +             * path->parallel_workers can be 0. For such paths, allocate one
>> +             * worker.
>> +             */
>> +            parallel_workers +=
>> +                (path->parallel_workers > 0 ? path->parallel_workers : 1);
>>
>> This looks odd. Earlier code was choosing maximum of all parallel workers,
>> whereas new code adds them all. E.g. if parallel_workers for subpaths is 3, 4,
>> 3, without your change, it will pick up 4. But with your change it will pick
>> 10. I think, you intend to write this as
>> parallel_workers = Max(parallel_workers, path->parallel_workers ?
>> path->parallel_workers : 1);
> The intention is to add all workers, because a parallel-aware Append
> is going to need them in order to make the subplans run with their
> full capacity in parallel. So with subpaths with 3, 4, and 3 workers,
> the Append path will need 10 workers. If it allocates 4 workers, its
> not sufficient : Each of them would get only 1 worker, or max 2. In
> the existing code, 4 is correct, because all the workers are going to
> execute the same subplan node at a time.
>

Ok, makes sense if we take up Goal B.


>>
>> 9. In get_parallel_divisor(), if parallel_worker is 0 i.e. it's a partial path
>> the return value will be 2, which isn't true. This function is being called for
>> all the subpaths to get the original number of rows and costs of partial paths.
>> I think we don't need to call this function on subpaths which are not partial
>> paths or make it work parallel_workers = 0.
> I didn't understand this. I checked again get_parallel_divisor()
> function code. I think it will return 1 if parallel_workers is 0. But
> I may be missing something.

Sorry, I also don't understand why I had that comment. For some
reason, I thought we were sending 1 when parallel_workers = 0 to
get_parallel_divisor(). But I don't understand why I thought so.
Anyway, I will provide a better explanation the next time I bump
against this.

>
>
>> 12. cost_append() essentially adds costs of all the subpaths and then divides
>> by parallel_divisor. This might work if all the subpaths are partial paths. But
>> for the subpaths which are not partial, a single worker will incur the whole
>> cost of that subpath. Hence just dividing all the total cost doesn't seem the
>> right thing to do. We should apply different logic for costing non-partial
>> subpaths and partial subpaths.
>
> WIth the current partial path costing infrastructure, it is assumed
> that a partial path node should return the average per-worker cost.
> Hence, I thought it would be best to do it in a similar way for
> Append. But let me think if we can do something. With the current
> parallelism costing infrastructure, I am not sure though.

The current parallel mechanism is in sync with that costing. Each
worker is supposed to take the same burden, hence the same (average)
cost. But it will change when a single worker has to scan an entire
child relation and different child relations have different sizes.

Thanks for working on the comments.
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Langote
Date:
Hi Amit,

On 2016/12/23 14:21, Amit Khandekar wrote:
> Currently an Append plan node does not execute its subplans in
> parallel. There is no distribution of workers across its subplans. The
> second subplan starts running only after the first subplan finishes,
> although the individual subplans may be running parallel scans.
> 
> Secondly, we create a partial Append path for an appendrel, but we do
> that only if all of its member subpaths are partial paths. If one or
> more of the subplans is a non-parallel path, there will be only a
> non-parallel Append. So whatever node is sitting on top of Append is
> not going to do a parallel plan; for example, a select count(*) won't
> divide it into partial aggregates if the underlying Append is not
> partial.
> 
> The attached patch removes both of the above restrictions.  There has
> already been a mail thread [1] that discusses an approach suggested by
> Robert Haas for implementing this feature. This patch uses this same
> approach.

I was looking at the executor portion of this patch and I noticed that in
exec_append_initialize_next():
    if (appendstate->as_padesc)
        return parallel_append_next(appendstate);

    /*
     * Not parallel-aware. Fine, just go on to the next subplan in the
     * appropriate direction.
     */
    if (ScanDirectionIsForward(appendstate->ps.state->es_direction))
        appendstate->as_whichplan++;
    else
        appendstate->as_whichplan--;

which seems to mean that executing Append in parallel mode disregards the
scan direction.  I am not immediately sure what implications that has, so
I checked what heap scan does when executing in parallel mode, and found
this in heapgettup():
    else if (backward)
    {
        /* backward parallel scan not supported */
        Assert(scan->rs_parallel == NULL);

Perhaps, AppendState.as_padesc would not have been set if scan direction
is backward, because parallel mode would be disabled for the whole query
in that case (PlannerGlobal.parallelModeOK = false).  Maybe add an
Assert() similar to one in heapgettup().

Thanks,
Amit





Re: [HACKERS] Parallel Append implementation

From
Michael Paquier
Date:
On Tue, Jan 17, 2017 at 2:40 PM, Amit Langote
<Langote_Amit_f8@lab.ntt.co.jp> wrote:
> Hi Amit,
>
> On 2016/12/23 14:21, Amit Khandekar wrote:
>> Currently an Append plan node does not execute its subplans in
>> parallel. There is no distribution of workers across its subplans. The
>> second subplan starts running only after the first subplan finishes,
>> although the individual subplans may be running parallel scans.
>>
>> Secondly, we create a partial Append path for an appendrel, but we do
>> that only if all of its member subpaths are partial paths. If one or
>> more of the subplans is a non-parallel path, there will be only a
>> non-parallel Append. So whatever node is sitting on top of Append is
>> not going to do a parallel plan; for example, a select count(*) won't
>> divide it into partial aggregates if the underlying Append is not
>> partial.
>>
>> The attached patch removes both of the above restrictions.  There has
>> already been a mail thread [1] that discusses an approach suggested by
>> Robert Haas for implementing this feature. This patch uses this same
>> approach.
>
> I was looking at the executor portion of this patch and I noticed that in
> exec_append_initialize_next():
>
>     if (appendstate->as_padesc)
>         return parallel_append_next(appendstate);
>
>     /*
>      * Not parallel-aware. Fine, just go on to the next subplan in the
>      * appropriate direction.
>      */
>     if (ScanDirectionIsForward(appendstate->ps.state->es_direction))
>         appendstate->as_whichplan++;
>     else
>         appendstate->as_whichplan--;
>
> which seems to mean that executing Append in parallel mode disregards the
> scan direction.  I am not immediately sure what implications that has, so
> I checked what heap scan does when executing in parallel mode, and found
> this in heapgettup():
>
>     else if (backward)
>     {
>         /* backward parallel scan not supported */
>         Assert(scan->rs_parallel == NULL);
>
> Perhaps, AppendState.as_padesc would not have been set if scan direction
> is backward, because parallel mode would be disabled for the whole query
> in that case (PlannerGlobal.parallelModeOK = false).  Maybe add an
> Assert() similar to one in heapgettup().

There have been some reviews, but the patch has not been updated in
two weeks. Marking as "returned with feedback".
-- 
Michael



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:
>> We may want to think about a third goal: preventing too many workers
>> from executing the same plan. As per comment in get_parallel_divisor()
>> we do not see any benefit if more than 4 workers execute the same
>> node. So, an append node can distribute more than 4 worker nodes
>> equally among the available subplans. It might be better to do that as
>> a separate patch.
>
> I think that comment is for calculating leader contribution. It does
> not say that 4 workers is too many workers in general.
>
> But yes, I agree, and I have it in mind as the next improvement.
> Basically, it does not make sense to give more than 3 workers to a
> subplan when parallel_workers for that subplan are 3. For e.g., if
> gather max workers is 10, and we have 2 Append subplans s1 and s2 with
> parallel workers 3 and 5 respectively. Then, with the current patch,
> it will distribute 4 workers to each of these workers. What we should
> do is : once both of the subplans get 3 workers each, we should give
> the 7th and 8th worker to s2.
>
> Now that I think of that, I think for implementing above, we need to
> keep track of per-subplan max_workers in the Append path; and with
> that, the bitmap will be redundant. Instead, it can be replaced with
> max_workers. Let me check if it is easy to do that. We don't want to
> have the bitmap if we are sure it would be replaced by some other data
> structure.

Attached is the v2 patch, which implements the above. Now the Append
plan node stores a list of per-subplan max worker counts, rather than
the Bitmapset. But the Bitmapset still turned out to be necessary for
AppendPath. More details are in the subsequent comments.


>> Goal A : Allow non-partial subpaths in Partial Append.
>> Goal B : Distribute workers across the Append subplans.
>> Both of these require some kind of synchronization while choosing the
>> next subplan. So, goal B is achieved by doing all the synchronization
>> stuff. And implementation of goal A requires that goal B is
>> implemented. So there is a dependency between these two goals. While
>> implementing goal B, we should keep in mind that it should also work
>> for goal A; it does not make sense later changing the synchronization
>> logic in goal A.
>>
>> I am ok with splitting the patch into 2 patches :
>> a) changes required for goal A
>> b) changes required for goal B.
>> But I think we should split it only when we are ready to commit them
>> (commit for B, immediately followed by commit for A). Until then, we
>> should consider both of these together because they are interconnected
>> as explained above.
>
> For B, we need to know, how much gain that brings and in which cases.
> If that gain is not worth the complexity added, we may have to defer
> Goal B. Goal A would certainly be useful since it will improve
> performance of the targetted cases. The synchronization required for
> Goal A is simpler than that of B and thus if we choose to implement
> only A, we can live with a simpler synchronization.

For Goal A, the logic for a worker synchronously choosing a subplan will be :
Go to the next subplan. If that subplan has not already been assigned its
max workers, choose it; otherwise go to the next subplan, and so on.
For Goal B, the logic will be :
Among the subplans which are yet to reach their max workers, choose the
subplan with the minimum number of workers currently assigned.
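To make the comparison concrete, the two selection rules would look
roughly like this (sketch only; both assume the caller holds the lock on
the shared array and bumps pa_num_workers for the returned subplan):

/* Goal A alone: take the first subplan that still has room for a worker. */
static int
choose_next_subplan_goal_a(ParallelAppendInfo *pa_info, int nplans)
{
    int     i;

    for (i = 0; i < nplans; i++)
        if (pa_info[i].pa_num_workers < pa_info[i].pa_max_workers)
            return i;
    return -1;                  /* nothing left to run */
}

/* Goal B: among the subplans that still have room, take the least loaded. */
static int
choose_next_subplan_goal_b(ParallelAppendInfo *pa_info, int nplans)
{
    int     i;
    int     best = -1;

    for (i = 0; i < nplans; i++)
        if (pa_info[i].pa_num_workers < pa_info[i].pa_max_workers &&
            (best == -1 ||
             pa_info[i].pa_num_workers < pa_info[best].pa_num_workers))
            best = i;
    return best;                /* -1 if nothing left to run */
}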

I don't think there is a significant difference between the complexity
of the above two algorithms, so complexity does not look like a factor
on which we can choose one particular logic. We should choose the logic
which has more potential for benefits. The logic for goal B will work
for goal A as well. And secondly, if the subplans are using their own
different system resources, the resource contention might be less. One
case is : all subplans using different disks. Another case is : some of
the subplans may be using a foreign scan, so it would start using
foreign server resources sooner. These benefits apply when the Gather
max workers count is not sufficient for running all the subplans at
their full capacity. If it is sufficient, then the workers will be
distributed over the subplans under either logic; just the order of
assignment of workers to subplans will be different.

Also, I don't see a disadvantage if we follow the logic of Goal B.

>
> BTW, Right now, the patch does not consider non-partial paths for a
> child which has partial paths. Do we know, for sure, that a path
> containing partial paths for a child, which has it, is always going to
> be cheaper than the one which includes non-partial path. If not,
> should we build another paths which contains non-partial paths for all
> child relations. This sounds like a 0/1 knapsack problem.

I didn't quite get this. We do create a non-partial Append path using
non-partial child paths anyways.

>
>>
>>
>>> Here are some review comments
>> I will handle the other comments, but first, just a quick response to
>> some important ones :
>>
>>> 6. By looking at parallel_worker field of a path, we can say whether it's
>>> partial or not. We probably do not require to maintain a bitmap for that at in
>>> the Append path. The bitmap can be constructed, if required, at the time of
>>> creating the partial append plan. The reason to take this small step is 1. we
>>> want to minimize our work at the time of creating paths, 2. while freeing a
>>> path in add_path, we don't free the internal structures, in this case the
>>> Bitmap, which will waste memory if the path is not chosen while planning.
>>
>> Let me try keeping the per-subplan max_worker info in Append path
>> itself, like I mentioned above. If that works, the bitmap will be
>> replaced by max_worker field. In case of non-partial subpath,
>> max_worker will be 1. (this is the same info kept in AppendState node
>> in the patch, but now we might need to keep it in Append path node as
>> well).
>
> It will be better if we can fetch that information from each subpath
> when creating the plan. As I have explained before, a path is minimal
> structure, which should be easily disposable, when throwing away the
> path.

Now in the v2 patch, we store the per-subplan worker count. But we still
cannot use path->parallel_workers to determine whether a path is partial,
because even for a non-partial path, parallel_workers can be non-zero. For
example, create_subqueryscan_path() sets path->parallel_workers to
subpath->parallel_workers, yet that path is added as a non-partial path. So
we need separate information about which of the subpaths in the Append path
are partial subpaths. Hence, in the v2 patch I continued to use a Bitmapset
in AppendPath; in the Append plan node, the number of workers is calculated
using this bitmapset. Check the new function get_append_num_workers().
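
To give an idea of how the bitmapset gets used there, a rough sketch along
the lines of get_append_num_workers() (variable names are illustrative, not
the exact patch code):

    /* Partial subpaths contribute their own workers, non-partial ones get 1 */
    static int
    get_append_num_workers(List *subpaths, Bitmapset *partial_subpaths)
    {
        int         num_workers = 0;
        int         i = 0;
        ListCell   *lc;

        foreach(lc, subpaths)
        {
            Path   *subpath = (Path *) lfirst(lc);

            if (bms_is_member(i, partial_subpaths))
                num_workers += subpath->parallel_workers;
            else
                num_workers++;
            i++;
        }

        return num_workers;
    }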

>>> 7. If we consider 6, we don't need concat_append_subpaths(),
As explained above, I have kept the Bitmapset for AppendPath.

>>> but still here are
>>> some comments about that function. Instead of accepting two separate arguments
>>> childpaths and child_partial_subpaths_set, which need to be in sync, we can
>>> just pass the path which contains both of those. In the same following code may
>>> be optimized by adding a utility function to Bitmapset, which advances
>>> all members
>>> by given offset and using that function with bms_union() to merge the
>>> bitmapset e.g.
>>> bms_union(*partial_subpaths_set,
>>> bms_advance_members(bms_copy(child_partial_subpaths_set), append_subpath_len));
>>>     if (partial_subpaths_set)

I will get back on this after more thought.

>
>>
>>> 12. cost_append() essentially adds costs of all the subpaths and then divides
>>> by parallel_divisor. This might work if all the subpaths are partial paths. But
>>> for the subpaths which are not partial, a single worker will incur the whole
>>> cost of that subpath. Hence just dividing all the total cost doesn't seem the
>>> right thing to do. We should apply different logic for costing non-partial
>>> subpaths and partial subpaths.
>>
>> WIth the current partial path costing infrastructure, it is assumed
>> that a partial path node should return the average per-worker cost.
>> Hence, I thought it would be best to do it in a similar way for
>> Append. But let me think if we can do something. With the current
>> parallelism costing infrastructure, I am not sure though.
>
> The current parallel mechanism is in sync with that costing. Each
> worker is supposed to take the same burden, hence the same (average)
> cost. But it will change when a single worker has to scan an entire
> child relation and different child relations have different sizes.

I gave this more thought. Considering that each subplan can have a
different number of workers, I think it still makes sense to calculate an
average per-worker cost even for parallel Append. In case of a non-partial
subplan, a single worker will execute it, but that worker will then pick up
another subplan. So, on average, each worker is going to process the same
number of rows and spend the same amount of CPU. That CPU and row cost
should therefore be calculated by taking the totals and dividing them by
the number of workers (the parallel_divisor, actually).
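
A rough sketch of what that means inside cost_append(), assuming it sits in
costsize.c next to get_parallel_divisor() and that the child totals have
been accumulated into the variables below:

    /* Sketch only: average the summed child estimates over the workers */
    double      parallel_divisor = get_parallel_divisor(&apath->path);

    apath->path.rows = clamp_row_est(total_child_rows / parallel_divisor);
    apath->path.total_cost = total_child_total_cost / parallel_divisor;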


> Here are some review comments
>
> 1. struct ParallelAppendDescData is being used at other places. The declaration
> style doesn't seem to be very common in the code or in the directory where the
> file is located.
> +struct ParallelAppendDescData
> +{
> +    slock_t        pa_mutex;        /* mutual exclusion to choose next subplan */
> +    parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
> +};
> Defining it like
> typedef struct ParallelAppendDescData
> {
>     slock_t        pa_mutex;        /* mutual exclusion to choose next subplan */
>     parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
> } ParallelAppendDescData;
> will make its use handy. Instead of struct ParallelAppendDescData, you will
> need to use just ParallelAppendDescData. Maybe we want to rename
> parallel_append_info as ParallelAppendInfo and change the style to match other
> declarations.
>
> 2. The comment below refers to the constant which it describes, which looks
> odd. May be it should be worded as "A special value of
> AppendState::as_whichplan, to indicate no plans left to be executed.". Also
> using INVALID for "no plans left ..." seems to be a misnomer.
> /*
>  * For Parallel Append, AppendState::as_whichplan can have PA_INVALID_PLAN
>  * value, which indicates there are no plans left to be executed.
>  */
> #define PA_INVALID_PLAN -1
>
> 3. The sentence "We have got NULL", looks odd. Probably we don't need it as
> it's clear from the code above that this code deals with the case when the
> current subplan didn't return any row.
>         /*
>          * We have got NULL. There might be other workers still processing the
>          * last chunk of rows for this same node, but there's no point for new
>          * workers to run this node, so mark this node as finished.
>          */
> 4. In the same comment, I guess, the word "node" refers to "subnode" and not
> the node pointed by variable "node". May be you want to use word "subplan"
> here.
>
> 4. set_finished()'s prologue has different indentation compared to other
> functions in the file.
>
> 5. Multilevel comment starts with an empty line.
> +        /* Keep track of the node with the least workers so far. For the very
>
Done 1. to 5. above, as per your suggestions.

> 9. Shouldn't this funciton return double?
> int
> get_parallel_divisor(int parallel_workers)

v2 patch is rebased on latest master branch, which already contains
this function returning double.


> 10. We should probably move the parallel_safe calculation out of cost_append().
> +            path->parallel_safe = path->parallel_safe &&
> +                                  subpath->parallel_safe;
>
> 11. This check shouldn't be part of cost_append().
> +            /* All child paths must have same parameterization */
> +            Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer));
>
Yet to handle the above ones.

Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
The v2 patch was not rebased over the latest master branch commits. Please
refer to the attached ParallelAppend_v3.patch instead.

On 6 February 2017 at 11:06, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:
>>> We may want to think about a third goal: preventing too many workers
>>> from executing the same plan. As per comment in get_parallel_divisor()
>>> we do not see any benefit if more than 4 workers execute the same
>>> node. So, an append node can distribute more than 4 worker nodes
>>> equally among the available subplans. It might be better to do that as
>>> a separate patch.
>>
>> I think that comment is for calculating leader contribution. It does
>> not say that 4 workers is too many workers in general.
>>
>> But yes, I agree, and I have it in mind as the next improvement.
>> Basically, it does not make sense to give more than 3 workers to a
>> subplan when parallel_workers for that subplan are 3. For e.g., if
>> gather max workers is 10, and we have 2 Append subplans s1 and s2 with
>> parallel workers 3 and 5 respectively. Then, with the current patch,
>> it will distribute 4 workers to each of these workers. What we should
>> do is : once both of the subplans get 3 workers each, we should give
>> the 7th and 8th worker to s2.
>>
>> Now that I think of that, I think for implementing above, we need to
>> keep track of per-subplan max_workers in the Append path; and with
>> that, the bitmap will be redundant. Instead, it can be replaced with
>> max_workers. Let me check if it is easy to do that. We don't want to
>> have the bitmap if we are sure it would be replaced by some other data
>> structure.
>
> Attached is v2 patch, which implements above. Now Append plan node
> stores a list of per-subplan max worker count, rather than the
> Bitmapset. But still Bitmapset turned out to be necessary for
> AppendPath. More details are in the subsequent comments.
>
>
>>> Goal A : Allow non-partial subpaths in Partial Append.
>>> Goal B : Distribute workers across the Append subplans.
>>> Both of these require some kind of synchronization while choosing the
>>> next subplan. So, goal B is achieved by doing all the synchronization
>>> stuff. And implementation of goal A requires that goal B is
>>> implemented. So there is a dependency between these two goals. While
>>> implementing goal B, we should keep in mind that it should also work
>>> for goal A; it does not make sense later changing the synchronization
>>> logic in goal A.
>>>
>>> I am ok with splitting the patch into 2 patches :
>>> a) changes required for goal A
>>> b) changes required for goal B.
>>> But I think we should split it only when we are ready to commit them
>>> (commit for B, immediately followed by commit for A). Until then, we
>>> should consider both of these together because they are interconnected
>>> as explained above.
>>
>> For B, we need to know, how much gain that brings and in which cases.
>> If that gain is not worth the complexity added, we may have to defer
>> Goal B. Goal A would certainly be useful since it will improve
>> performance of the targetted cases. The synchronization required for
>> Goal A is simpler than that of B and thus if we choose to implement
>> only A, we can live with a simpler synchronization.
>
> For Goal A , the logic for a worker synchronously choosing a subplan will be :
> Go the next subplan. If that subplan has not already assigned max
> workers, choose this subplan, otherwise, go the next subplan, and so
> on.
> For Goal B , the logic will be :
> Among the subplans which are yet to achieve max workers, choose the
> subplan with the minimum number of workers currently assigned.
>
> I don't think there is a significant difference between the complexity
> of the above two algorithms. So I think here the complexity does not
> look like a factor based on which we can choose the particular logic.
> We should choose the logic which has more potential for benefits. The
> logic for goal B will work for goal A as well. And secondly, if the
> subplans are using their own different system resources, the resource
> contention might be less. One case is : all subplans using different
> disks. Second case is : some of the subplans may be using a foreign
> scan, so it would start using foreign server resources sooner. These
> benefits apply when the Gather max workers count is not sufficient for
> running all the subplans in their full capacity. If they are
> sufficient, then the workers will be distributed over the subplans
> using both the logics. Just the order of assignments of workers to
> subplans will be different.
>
> Also, I don't see a disadvantage if we follow the logic of Goal B.
>
>>
>> BTW, Right now, the patch does not consider non-partial paths for a
>> child which has partial paths. Do we know, for sure, that a path
>> containing partial paths for a child, which has it, is always going to
>> be cheaper than the one which includes non-partial path. If not,
>> should we build another paths which contains non-partial paths for all
>> child relations. This sounds like a 0/1 knapsack problem.
>
> I didn't quite get this. We do create a non-partial Append path using
> non-partial child paths anyways.
>
>>
>>>
>>>
>>>> Here are some review comments
>>> I will handle the other comments, but first, just a quick response to
>>> some important ones :
>>>
>>>> 6. By looking at parallel_worker field of a path, we can say whether it's
>>>> partial or not. We probably do not require to maintain a bitmap for that at in
>>>> the Append path. The bitmap can be constructed, if required, at the time of
>>>> creating the partial append plan. The reason to take this small step is 1. we
>>>> want to minimize our work at the time of creating paths, 2. while freeing a
>>>> path in add_path, we don't free the internal structures, in this case the
>>>> Bitmap, which will waste memory if the path is not chosen while planning.
>>>
>>> Let me try keeping the per-subplan max_worker info in Append path
>>> itself, like I mentioned above. If that works, the bitmap will be
>>> replaced by max_worker field. In case of non-partial subpath,
>>> max_worker will be 1. (this is the same info kept in AppendState node
>>> in the patch, but now we might need to keep it in Append path node as
>>> well).
>>
>> It will be better if we can fetch that information from each subpath
>> when creating the plan. As I have explained before, a path is minimal
>> structure, which should be easily disposable, when throwing away the
>> path.
>
> Now in the v2 patch, we store per-subplan worker count. But still, we
> cannot use the path->parallel_workers to determine whether it's a
> partial path. This is because even for a non-partial path, it seems
> the parallel_workers can be non-zero. For e.g., in
> create_subqueryscan_path(), it sets path->parallel_workers to
> subpath->parallel_workers. But this path is added as a non-partial
> path. So we need a separate info as to which of the subpaths in Append
> path are partial subpaths. So in the v2 patch, I continued to use
> Bitmapset in AppendPath. But in Append plan node, number of workers is
> calculated using this bitmapset. Check the new function
> get_append_num_workers().
>
>>>> 7. If we consider 6, we don't need concat_append_subpaths(),
> As explained above, I have kept the BitmapSet for AppendPath.
>
>>>> but still here are
>>>> some comments about that function. Instead of accepting two separate arguments
>>>> childpaths and child_partial_subpaths_set, which need to be in sync, we can
>>>> just pass the path which contains both of those. In the same following code may
>>>> be optimized by adding a utility function to Bitmapset, which advances
>>>> all members
>>>> by given offset and using that function with bms_union() to merge the
>>>> bitmapset e.g.
>>>> bms_union(*partial_subpaths_set,
>>>> bms_advance_members(bms_copy(child_partial_subpaths_set), append_subpath_len));
>>>>     if (partial_subpaths_set)
>
> I will get back on this after more thought.
>
>>
>>>
>>>> 12. cost_append() essentially adds costs of all the subpaths and then divides
>>>> by parallel_divisor. This might work if all the subpaths are partial paths. But
>>>> for the subpaths which are not partial, a single worker will incur the whole
>>>> cost of that subpath. Hence just dividing all the total cost doesn't seem the
>>>> right thing to do. We should apply different logic for costing non-partial
>>>> subpaths and partial subpaths.
>>>
>>> WIth the current partial path costing infrastructure, it is assumed
>>> that a partial path node should return the average per-worker cost.
>>> Hence, I thought it would be best to do it in a similar way for
>>> Append. But let me think if we can do something. With the current
>>> parallelism costing infrastructure, I am not sure though.
>>
>> The current parallel mechanism is in sync with that costing. Each
>> worker is supposed to take the same burden, hence the same (average)
>> cost. But it will change when a single worker has to scan an entire
>> child relation and different child relations have different sizes.
>
> I gave more thought on this. Considering each subplan has different
> number of workers, I think it makes sense to calculate average
> per-worker cost even in parallel Append. In case of non-partial
> subplan, a single worker will execute it, but it will next choose
> another subplan. So on average each worker is going to process the
> same number of rows, and also the same amount of CPU. And that amount
> of CPU cost and rows cost should be calculated by taking the total
> count and dividing it by number of workers (parallel_divsor actually).
>
>
>> Here are some review comments
>>
>> 1. struct ParallelAppendDescData is being used at other places. The declaration
>> style doesn't seem to be very common in the code or in the directory where the
>> file is located.
>> +struct ParallelAppendDescData
>> +{
>> +    slock_t        pa_mutex;        /* mutual exclusion to choose
>> next subplan */
>> +    parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
>> +};
>> Defining it like
>> typdef struct ParallelAppendDescData
>> {
>>     slock_t        pa_mutex;        /* mutual exclusion to choose next
>> subplan */
>>     parallel_append_info pa_info[FLEXIBLE_ARRAY_MEMBER];
>> };
>> will make its use handy. Instead of struct ParallelAppendDescData, you will
>> need to use just ParallelAppendDescData. May be we want to rename
>> parallel_append_info as ParallelAppendInfo and change the style to match other
>> declarations.
>>
>> 2. The comment below refers to the constant which it describes, which looks
>> odd. May be it should be worded as "A special value of
>> AppendState::as_whichplan, to indicate no plans left to be executed.". Also
>> using INVALID for "no plans left ..." seems to be a misnomer.
>> /*
>>  * For Parallel Append, AppendState::as_whichplan can have PA_INVALID_PLAN
>>  * value, which indicates there are no plans left to be executed.
>>  */
>> #define PA_INVALID_PLAN -1
>>
>> 3. The sentence "We have got NULL", looks odd. Probably we don't need it as
>> it's clear from the code above that this code deals with the case when the
>> current subplan didn't return any row.
>>         /*
>>          * We have got NULL. There might be other workers still processing the
>>          * last chunk of rows for this same node, but there's no point for new
>>          * workers to run this node, so mark this node as finished.
>>          */
>> 4. In the same comment, I guess, the word "node" refers to "subnode" and not
>> the node pointed by variable "node". May be you want to use word "subplan"
>> here.
>>
>> 4. set_finished()'s prologue has different indentation compared to other
>> functions in the file.
>>
>> 5. Multilevel comment starts with an empty line.
>> +        /* Keep track of the node with the least workers so far. For the very
>>
> Done 1. to 5. above, as per your suggestions.
>
>> 9. Shouldn't this funciton return double?
>> int
>> get_parallel_divisor(int parallel_workers)
>
> v2 patch is rebased on latest master branch, which already contains
> this function returning double.
>
>
>> 10. We should probably move the parallel_safe calculation out of cost_append().
>> +            path->parallel_safe = path->parallel_safe &&
>> +                                  subpath->parallel_safe;
>>
>> 11. This check shouldn't be part of cost_append().
>> +            /* All child paths must have same parameterization */
>> +            Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer));
>>
> Yet to handle the above ones.



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company

Attachment

Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Mon, Feb 6, 2017 at 11:06 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:
>>> We may want to think about a third goal: preventing too many workers
>>> from executing the same plan. As per comment in get_parallel_divisor()
>>> we do not see any benefit if more than 4 workers execute the same
>>> node. So, an append node can distribute more than 4 worker nodes
>>> equally among the available subplans. It might be better to do that as
>>> a separate patch.
>>
>> I think that comment is for calculating leader contribution. It does
>> not say that 4 workers is too many workers in general.
>>
>> But yes, I agree, and I have it in mind as the next improvement.
>> Basically, it does not make sense to give more than 3 workers to a
>> subplan when parallel_workers for that subplan are 3. For e.g., if
>> gather max workers is 10, and we have 2 Append subplans s1 and s2 with
>> parallel workers 3 and 5 respectively. Then, with the current patch,
>> it will distribute 4 workers to each of these workers. What we should
>> do is : once both of the subplans get 3 workers each, we should give
>> the 7th and 8th worker to s2.
>>
>> Now that I think of that, I think for implementing above, we need to
>> keep track of per-subplan max_workers in the Append path; and with
>> that, the bitmap will be redundant. Instead, it can be replaced with
>> max_workers. Let me check if it is easy to do that. We don't want to
>> have the bitmap if we are sure it would be replaced by some other data
>> structure.
>
> Attached is v2 patch, which implements above. Now Append plan node
> stores a list of per-subplan max worker count, rather than the
> Bitmapset. But still Bitmapset turned out to be necessary for
> AppendPath. More details are in the subsequent comments.
>
>
>>> Goal A : Allow non-partial subpaths in Partial Append.
>>> Goal B : Distribute workers across the Append subplans.
>>> Both of these require some kind of synchronization while choosing the
>>> next subplan. So, goal B is achieved by doing all the synchronization
>>> stuff. And implementation of goal A requires that goal B is
>>> implemented. So there is a dependency between these two goals. While
>>> implementing goal B, we should keep in mind that it should also work
>>> for goal A; it does not make sense later changing the synchronization
>>> logic in goal A.
>>>
>>> I am ok with splitting the patch into 2 patches :
>>> a) changes required for goal A
>>> b) changes required for goal B.
>>> But I think we should split it only when we are ready to commit them
>>> (commit for B, immediately followed by commit for A). Until then, we
>>> should consider both of these together because they are interconnected
>>> as explained above.
>>
>> For B, we need to know, how much gain that brings and in which cases.
>> If that gain is not worth the complexity added, we may have to defer
>> Goal B. Goal A would certainly be useful since it will improve
>> performance of the targetted cases. The synchronization required for
>> Goal A is simpler than that of B and thus if we choose to implement
>> only A, we can live with a simpler synchronization.
>
> For Goal A , the logic for a worker synchronously choosing a subplan will be :
> Go the next subplan. If that subplan has not already assigned max
> workers, choose this subplan, otherwise, go the next subplan, and so
> on.

Right; at a given time, we have to remember only the next plan to assign a
worker to. That's simpler than remembering the number of workers for each
subplan and updating those counts concurrently. That's why I am saying the
synchronization for A is simpler than that for B.

> For Goal B , the logic will be :
> Among the subplans which are yet to achieve max workers, choose the
> subplan with the minimum number of workers currently assigned.
>
> I don't think there is a significant difference between the complexity
> of the above two algorithms. So I think here the complexity does not
> look like a factor based on which we can choose the particular logic.
> We should choose the logic which has more potential for benefits. The
> logic for goal B will work for goal A as well. And secondly, if the
> subplans are using their own different system resources, the resource
> contention might be less. One case is : all subplans using different
> disks. Second case is : some of the subplans may be using a foreign
> scan, so it would start using foreign server resources sooner. These
> benefits apply when the Gather max workers count is not sufficient for
> running all the subplans in their full capacity. If they are
> sufficient, then the workers will be distributed over the subplans
> using both the logics. Just the order of assignments of workers to
> subplans will be different.
>
> Also, I don't see a disadvantage if we follow the logic of Goal B.

Do we have any performance measurements showing that Goal B performs
better than Goal A in such a situation? Do we have any measurements
comparing these two approaches in other situations? If the implementation
for Goal B always beats that of Goal A, we can certainly implement it
directly. But it may not. Also, separating the patches for Goal A and Goal
B might make reviews easier.

>
>>
>> BTW, Right now, the patch does not consider non-partial paths for a
>> child which has partial paths. Do we know, for sure, that a path
>> containing partial paths for a child, which has it, is always going to
>> be cheaper than the one which includes non-partial path. If not,
>> should we build another paths which contains non-partial paths for all
>> child relations. This sounds like a 0/1 knapsack problem.
>
> I didn't quite get this. We do create a non-partial Append path using
> non-partial child paths anyways.

Let's say a given child relation has both partial and non-partial paths;
your approach would always pick a partial path. But now that Parallel
Append can handle non-partial paths as well, it may happen that picking the
non-partial path instead of the partial one, when both are available, gives
better overall performance. Have we ruled out that possibility?

>
>>
>>>
>>>
>>>> Here are some review comments
>>> I will handle the other comments, but first, just a quick response to
>>> some important ones :
>>>
>>>> 6. By looking at parallel_worker field of a path, we can say whether it's
>>>> partial or not. We probably do not require to maintain a bitmap for that at in
>>>> the Append path. The bitmap can be constructed, if required, at the time of
>>>> creating the partial append plan. The reason to take this small step is 1. we
>>>> want to minimize our work at the time of creating paths, 2. while freeing a
>>>> path in add_path, we don't free the internal structures, in this case the
>>>> Bitmap, which will waste memory if the path is not chosen while planning.
>>>
>>> Let me try keeping the per-subplan max_worker info in Append path
>>> itself, like I mentioned above. If that works, the bitmap will be
>>> replaced by max_worker field. In case of non-partial subpath,
>>> max_worker will be 1. (this is the same info kept in AppendState node
>>> in the patch, but now we might need to keep it in Append path node as
>>> well).
>>
>> It will be better if we can fetch that information from each subpath
>> when creating the plan. As I have explained before, a path is minimal
>> structure, which should be easily disposable, when throwing away the
>> path.
>
> Now in the v2 patch, we store per-subplan worker count. But still, we
> cannot use the path->parallel_workers to determine whether it's a
> partial path. This is because even for a non-partial path, it seems
> the parallel_workers can be non-zero. For e.g., in
> create_subqueryscan_path(), it sets path->parallel_workers to
> subpath->parallel_workers. But this path is added as a non-partial
> path. So we need a separate info as to which of the subpaths in Append
> path are partial subpaths. So in the v2 patch, I continued to use
> Bitmapset in AppendPath. But in Append plan node, number of workers is
> calculated using this bitmapset. Check the new function
> get_append_num_workers().

If the subpath comes from childrel->partial_pathlist, we set the
corresponding bit in the bitmap. But we can infer the same thing for any
path by checking whether it appears in path->parent->partial_pathlist.
Since the code always chooses the first partial path, the search in
partial_pathlist should not affect performance. So we can avoid maintaining
a bitmap in the path and instead accumulate that information when
collapsing append paths.
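
A minimal sketch of that membership check, usable wherever the plan is
created or the append paths are collapsed:

    /* Is this child path one of its relation's partial paths? */
    static bool
    is_partial_subpath(Path *subpath)
    {
        return list_member_ptr(subpath->parent->partial_pathlist, subpath);
    }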

>
>>>> 7. If we consider 6, we don't need concat_append_subpaths(),
> As explained above, I have kept the BitmapSet for AppendPath.
>
>>>> but still here are
>>>> some comments about that function. Instead of accepting two separate arguments
>>>> childpaths and child_partial_subpaths_set, which need to be in sync, we can
>>>> just pass the path which contains both of those. In the same following code may
>>>> be optimized by adding a utility function to Bitmapset, which advances
>>>> all members
>>>> by given offset and using that function with bms_union() to merge the
>>>> bitmapset e.g.
>>>> bms_union(*partial_subpaths_set,
>>>> bms_advance_members(bms_copy(child_partial_subpaths_set), append_subpath_len));
>>>>     if (partial_subpaths_set)
>
> I will get back on this after more thought.

Another possibility: you could use a loop like the one in
offset_relid_set(), based on bms_next_member(). That way we could combine
the for loop and the bms_is_member() call into a single loop over
bms_next_member().
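
Something along these lines, perhaps (an untested sketch of the suggested
helper):

    /* Return a new set whose members are those of 'src' shifted by 'offset' */
    static Bitmapset *
    bms_advance_members(const Bitmapset *src, int offset)
    {
        Bitmapset  *result = NULL;
        int         member = -1;

        while ((member = bms_next_member(src, member)) >= 0)
            result = bms_add_member(result, member + offset);

        return result;
    }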

>
>>
>>>
>>>> 12. cost_append() essentially adds costs of all the subpaths and then divides
>>>> by parallel_divisor. This might work if all the subpaths are partial paths. But
>>>> for the subpaths which are not partial, a single worker will incur the whole
>>>> cost of that subpath. Hence just dividing all the total cost doesn't seem the
>>>> right thing to do. We should apply different logic for costing non-partial
>>>> subpaths and partial subpaths.
>>>
>>> WIth the current partial path costing infrastructure, it is assumed
>>> that a partial path node should return the average per-worker cost.
>>> Hence, I thought it would be best to do it in a similar way for
>>> Append. But let me think if we can do something. With the current
>>> parallelism costing infrastructure, I am not sure though.
>>
>> The current parallel mechanism is in sync with that costing. Each
>> worker is supposed to take the same burden, hence the same (average)
>> cost. But it will change when a single worker has to scan an entire
>> child relation and different child relations have different sizes.
>
> I gave more thought on this. Considering each subplan has different
> number of workers, I think it makes sense to calculate average
> per-worker cost even in parallel Append. In case of non-partial
> subplan, a single worker will execute it, but it will next choose
> another subplan. So on average each worker is going to process the
> same number of rows, and also the same amount of CPU. And that amount
> of CPU cost and rows cost should be calculated by taking the total
> count and dividing it by number of workers (parallel_divsor actually).
>

That's not entirely true. Consider N child relations whose chosen paths
have costs C1, C2, ... CN that are very different. If there are N workers,
the total cost should correspond to the highest of the subpath costs, since
no worker will execute more than one plan. The unfortunate worker which
executes the costliest path will take the longest time, and the cost of
Parallel Append should reflect that. The patch does not make any attempt to
distribute workers based on the actual load, so such skews should be
accounted for in the costing. I don't think we can do anything about the
condition I explained.
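
To illustrate: with three children costing 100, 10 and 10 and three
workers, dividing the total by the parallel divisor gives roughly 40 per
worker, yet the Append cannot finish before the worker running the cost-100
child does, so its cost is closer to 100 than to 40.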

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Feb 6, 2017 at 12:36 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> Now that I think of that, I think for implementing above, we need to
>> keep track of per-subplan max_workers in the Append path; and with
>> that, the bitmap will be redundant. Instead, it can be replaced with
>> max_workers. Let me check if it is easy to do that. We don't want to
>> have the bitmap if we are sure it would be replaced by some other data
>> structure.
>
> Attached is v2 patch, which implements above. Now Append plan node
> stores a list of per-subplan max worker count, rather than the
> Bitmapset. But still Bitmapset turned out to be necessary for
> AppendPath. More details are in the subsequent comments.

Keep in mind that, for a non-partial path, the cap of 1 worker for
that subplan is a hard limit.  Anything more will break the world.
But for a partial plan, the limit -- whether 1 or otherwise -- is a
soft limit.  It may not help much to route more workers to that node,
and conceivably it could even hurt, but it shouldn't yield any
incorrect result.  I'm not sure it's a good idea to conflate those two
things.  For example, suppose that I have a scan of two children, one
of which has parallel_workers of 4, and the other of which has
parallel_workers of 3.  If I pick parallel_workers of 7 for the
Parallel Append, that's probably too high.  Had those two tables been
a single unpartitioned table, I would have picked 4 or 5 workers, not
7.  On the other hand, if I pick parallel_workers of 4 or 5 for the
Parallel Append, and I finish with the larger table first, I think I
might as well throw all 4 of those workers at the smaller table even
though it would normally have only used 3 workers.  Having the extra
1-2 workers exist does not seem better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Tue, Feb 14, 2017 at 12:05 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Having the extra
> 1-2 workers exist does not seem better.

Err, exit, not exist.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Feb 6, 2017 at 12:36 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> Now that I think of that, I think for implementing above, we need to
>>> keep track of per-subplan max_workers in the Append path; and with
>>> that, the bitmap will be redundant. Instead, it can be replaced with
>>> max_workers. Let me check if it is easy to do that. We don't want to
>>> have the bitmap if we are sure it would be replaced by some other data
>>> structure.
>>
>> Attached is v2 patch, which implements above. Now Append plan node
>> stores a list of per-subplan max worker count, rather than the
>> Bitmapset. But still Bitmapset turned out to be necessary for
>> AppendPath. More details are in the subsequent comments.
>
> Keep in mind that, for a non-partial path, the cap of 1 worker for
> that subplan is a hard limit.  Anything more will break the world.
> But for a partial plan, the limit -- whether 1 or otherwise -- is a
> soft limit.  It may not help much to route more workers to that node,
> and conceivably it could even hurt, but it shouldn't yield any
> incorrect result.  I'm not sure it's a good idea to conflate those two
> things.

Yes, the logic that I used in the patch assumes that "the
Path->parallel_workers field not only suggests how many workers to
allocate, but also prevents allocation of too many workers for that path".
For a seqscan path, this field is calculated based on the relation's page
count. I believe the theory is that too many workers might even slow down
the parallel scan, and the same theory would apply when calculating workers
for other low-level path types, like index scans.

The only reason I combined the soft limit and the hard limit is that it is
not necessary to have two different fields. But of course this is again
under the assumption that allocating more than parallel_workers would never
improve the speed; in fact, it could even slow things down.

Do we have such a case currently where the actual number of workers
launched turns out to be *more* than Path->parallel_workers?

> For example, suppose that I have a scan of two children, one
> of which has parallel_workers of 4, and the other of which has
> parallel_workers of 3.  If I pick parallel_workers of 7 for the
> Parallel Append, that's probably too high.  Had those two tables been
> a single unpartitioned table, I would have picked 4 or 5 workers, not
> 7.  On the other hand, if I pick parallel_workers of 4 or 5 for the
> Parallel Append, and I finish with the larger table first, I think I
> might as well throw all 4 of those workers at the smaller table even
> though it would normally have only used 3 workers.

> Having the extra 1-2 workers exit does not seem better.

It is here where I didn't understand exactly why we would want to assign
these extra workers to a subplan which tells us that it is already being
run by 'parallel_workers' number of workers.


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
> On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
>> For example, suppose that I have a scan of two children, one
>> of which has parallel_workers of 4, and the other of which has
>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>> Parallel Append, that's probably too high.

In the patch, in such a case, 7 workers are indeed selected for the
Parallel Append path, so that both subplans are able to execute in parallel
at their full worker capacity. Are you suggesting that we should not?



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Feb 15, 2017 at 2:33 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> The only reason I combined the soft limit and the hard limit is
> because it is not necessary to have two different fields. But of
> course this is again under the assumption that allocating more than
> parallel_workers would never improve the speed, in fact it can even
> slow it down.

That could be true in extreme cases, but in general I think it's probably false.

> Do we have such a case currently where the actual number of workers
> launched turns out to be *more* than Path->parallel_workers ?

No.

>> For example, suppose that I have a scan of two children, one
>> of which has parallel_workers of 4, and the other of which has
>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>> Parallel Append, that's probably too high.  Had those two tables been
>> a single unpartitioned table, I would have picked 4 or 5 workers, not
>> 7.  On the other hand, if I pick parallel_workers of 4 or 5 for the
>> Parallel Append, and I finish with the larger table first, I think I
>> might as well throw all 4 of those workers at the smaller table even
>> though it would normally have only used 3 workers.
>
>> Having the extra 1-2 workers exit does not seem better.
>
> It is here, where I didn't understand exactly why would we want to
> assign these extra workers to a subplan which tells use that it is
> already being run by 'parallel_workers' number of workers.

The decision to use fewer workers for a smaller scan isn't really
because we think that using more workers will cause a regression.
It's because we think it may not help very much, and because it's not
worth firing up a ton of workers for a relatively small scan given
that workers are a limited resource.  I think once we've got a bunch
of workers started, we might as well try to use them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Feb 15, 2017 at 4:43 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
>>> For example, suppose that I have a scan of two children, one
>>> of which has parallel_workers of 4, and the other of which has
>>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>>> Parallel Append, that's probably too high.
>
> In the patch, in such case, 7 workers are indeed selected for Parallel
> Append path, so that both the subplans are able to execute in parallel
> with their full worker capacity. Are you suggesting that we should not
> ?

Absolutely.  I think that's going to be way too many workers.  Imagine
that there are 100 child tables and each one is big enough to qualify
for 2 or 3 workers.  No matter what value the user has selected for
max_parallel_workers_per_gather, they should not get a scan involving
200 workers.

What I was thinking about is something like this:

1. First, take the maximum parallel_workers value from among all the children.

2. Second, compute log2(num_children)+1 and round up.  So, for 1
child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
for 9-16 children, 5, and so on.

3. Use as the number of parallel workers for the children the maximum
of the value computed in step 1 and the value computed in step 2.

With this approach, a plan with 100 children qualifies for 8 parallel
workers (unless one of the children individually qualifies for some
larger number, or unless max_parallel_workers_per_gather is set to a
smaller value).  That seems fairly reasonable to me.
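
A sketch of that computation, just to make the proposal concrete (the
function name and the exact place where max_parallel_workers_per_gather
gets applied are assumptions):

    #include <math.h>

    /* Workers for a Parallel Append over num_children subplans */
    static int
    append_parallel_workers(int num_children, int max_child_workers)
    {
        /* log2(num_children) + 1, rounded up */
        int     log2_children = (int) ceil(log2(num_children)) + 1;
        int     parallel_workers = Max(max_child_workers, log2_children);

        return Min(parallel_workers, max_parallel_workers_per_gather);
    }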

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Wed, Feb 15, 2017 at 6:40 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Feb 15, 2017 at 4:43 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
>>>> For example, suppose that I have a scan of two children, one
>>>> of which has parallel_workers of 4, and the other of which has
>>>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>>>> Parallel Append, that's probably too high.
>>
>> In the patch, in such case, 7 workers are indeed selected for Parallel
>> Append path, so that both the subplans are able to execute in parallel
>> with their full worker capacity. Are you suggesting that we should not
>> ?
>
> Absolutely.  I think that's going to be way too many workers.  Imagine
> that there are 100 child tables and each one is big enough to qualify
> for 2 or 3 workers.  No matter what value the user has selected for
> max_parallel_workers_per_gather, they should not get a scan involving
> 200 workers.

If the user is ready to throw 200 workers at the query, and if the subplans
can use them to speed it up 200 times (obviously I am exaggerating), why
not use them? When the user sets max_parallel_workers_per_gather that high,
he means it to be used by a Gather, and that's what we should be doing.

>
> What I was thinking about is something like this:
>
> 1. First, take the maximum parallel_workers value from among all the children.
>
> 2. Second, compute log2(num_children)+1 and round up.  So, for 1
> child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
> for 9-16 children, 5, and so on.

Can you please explain the rationale behind this maths?

>
> 3. Use as the number of parallel workers for the children the maximum
> of the value computed in step 1 and the value computed in step 2.
>
> With this approach, a plan with 100 children qualifies for 8 parallel
> workers (unless one of the children individually qualifies for some
> larger number, or unless max_parallel_workers_per_gather is set to a
> smaller value).  That seems fairly reasonable to me.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 15 February 2017 at 18:40, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Feb 15, 2017 at 4:43 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
>>>> For example, suppose that I have a scan of two children, one
>>>> of which has parallel_workers of 4, and the other of which has
>>>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>>>> Parallel Append, that's probably too high.
>>
>> In the patch, in such case, 7 workers are indeed selected for Parallel
>> Append path, so that both the subplans are able to execute in parallel
>> with their full worker capacity. Are you suggesting that we should not
>> ?
>
> Absolutely.  I think that's going to be way too many workers.  Imagine
> that there are 100 child tables and each one is big enough to qualify
> for 2 or 3 workers.  No matter what value the user has selected for
> max_parallel_workers_per_gather, they should not get a scan involving
> 200 workers.
>
> What I was thinking about is something like this:
>
> 1. First, take the maximum parallel_workers value from among all the children.
>
> 2. Second, compute log2(num_children)+1 and round up.  So, for 1
> child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
> for 9-16 children, 5, and so on.
>
> 3. Use as the number of parallel workers for the children the maximum
> of the value computed in step 1 and the value computed in step 2.

Ah, now that I closely look at compute_parallel_worker(), I see what
you are getting at.

For a plain unpartitioned table, parallel_workers is calculated as roughly
equal to log(num_pages) (actually log3). So if the table size is n, the
workers will be log(n). If it is partitioned into p partitions of size n/p
each, the number of workers should still be log(n). Whereas in the patch it
is calculated as the total of all the child workers, i.e. p * log(n/p) for
this case. But log(n) != p * log(n/p). For example, log(1000) is much less
than log(333) + log(333) + log(333).

That means the way it is calculated in the patch turns out to be much
larger than if it were calculated using log(total of sizes of all
children). So I think for step 2 above, a log(total_rel_size) formula seems
appropriate. What do you think? (For compute_parallel_worker(), it is
actually log3, by the way.)

BTW this formula is just an extension of how parallel_workers is
calculated for an unpartitioned table.
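
To put rough numbers on it, with the default 8MB threshold that triples for
each additional worker: a 1GB table gets about 5 workers, whereas splitting
it into four 256MB partitions gives each child about 4, so summing the
children the way the patch does yields around 16 workers for the same data.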

>>> For example, suppose that I have a scan of two children, one
>>> of which has parallel_workers of 4, and the other of which has
>>> parallel_workers of 3.  If I pick parallel_workers of 7 for the
>>> Parallel Append, that's probably too high.  Had those two tables been
>>> a single unpartitioned table, I would have picked 4 or 5 workers, not
>>> 7.  On the other hand, if I pick parallel_workers of 4 or 5 for the
>>> Parallel Append, and I finish with the larger table first, I think I
>>> might as well throw all 4 of those workers at the smaller table even
>>> though it would normally have only used 3 workers.
>>
>>> Having the extra 1-2 workers exit does not seem better.
>>
>> It is here, where I didn't understand exactly why would we want to
>> assign these extra workers to a subplan which tells use that it is
>> already being run by 'parallel_workers' number of workers.
>
> The decision to use fewer workers for a smaller scan isn't really
> because we think that using more workers will cause a regression.
> It's because we think it may not help very much, and because it's not
> worth firing up a ton of workers for a relatively small scan given
> that workers are a limited resource.  I think once we've got a bunch
> of workers started, we might as well try to use them.

One possible side-effect I see due to this: other sessions might not get a
fair share of workers. But again, there is a counter-argument that, because
Append is now focusing all the workers on the last subplan, it may finish
faster and release *all* of its workers earlier.

BTW, there is going to be some logic change in the choose-next-subplan
algorithm if we consider giving extra workers to subplans.



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Feb 15, 2017 at 11:15 PM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> If the user is ready throw 200 workers and if the subplans can use
> them to speed up the query 200 times (obviously I am exaggerating),
> why not to use those? When the user set
> max_parallel_workers_per_gather to that high a number, he meant it to
> be used by a gather, and that's what we should be doing.

The reason is because of what Amit Khandekar wrote in his email -- you
get a result with a partitioned table that is wildly inconsistent with
the result you get for an unpartitioned table.  You could equally well
argue that if the user sets max_parallel_workers_per_gather to 200,
and there's a parallel sequential scan of an 8MB table to be
performed, we ought to use all 200 workers for that.  But the planner
in fact estimates a much lesser number of workers, because using 200
workers for that task wastes a lot of resources for no real
performance benefit.  If you partition that 8MB table into 100 tables
that are each 80kB, that shouldn't radically increase the number of
workers that get used.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Feb 16, 2017 at 1:34 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> What I was thinking about is something like this:
>>
>> 1. First, take the maximum parallel_workers value from among all the children.
>>
>> 2. Second, compute log2(num_children)+1 and round up.  So, for 1
>> child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
>> for 9-16 children, 5, and so on.
>>
>> 3. Use as the number of parallel workers for the children the maximum
>> of the value computed in step 1 and the value computed in step 2.
>
> Ah, now that I closely look at compute_parallel_worker(), I see what
> you are getting at.
>
> For plain unpartitioned table, parallel_workers is calculated as
> roughly equal to log(num_pages) (actually it is log3). So if the table
> size is n, the workers will be log(n). So if it is partitioned into p
> partitions of size n/p each, still the number of workers should be
> log(n). Whereas, in the patch, it is calculated as (total of all the
> child workers) i.e. n * log(n/p) for this case. But log(n) != p *
> log(x/p). For e.g. log(1000) is much less than log(300) + log(300) +
> log(300).
>
> That means, the way it is calculated in the patch turns out to be much
> larger than if it were calculated using log(total of sizes of all
> children). So I think for the step 2 above, log(total_rel_size)
> formula seems to be appropriate. What do you think ? For
> compute_parallel_worker(), it is actually log3 by the way.
>
> BTW this formula is just an extension of how parallel_workers is
> calculated for an unpartitioned table.

log(total_rel_size) would be a reasonable way to estimate workers when
we're scanning an inheritance hierarchy, but I'm hoping Parallel
Append is also going to apply to UNION ALL queries, where there's no
concept of the total rel size.  For that we need something else, which
is why the algorithm that I proposed upthread doesn't rely on it.

>> The decision to use fewer workers for a smaller scan isn't really
>> because we think that using more workers will cause a regression.
>> It's because we think it may not help very much, and because it's not
>> worth firing up a ton of workers for a relatively small scan given
>> that workers are a limited resource.  I think once we've got a bunch
>> of workers started, we might as well try to use them.
>
> One possible side-effect I see due to this is : Other sessions might
> not get a fair share of workers due to this. But again, there might be
> counter argument that, because Append is now focussing all the workers
> on a last subplan, it may finish faster, and release *all* of its
> workers earlier.

Right.  I think in general it's pretty clear that there are possible
fairness problems with parallel query.  The first process that comes
along seizes however many workers it thinks it should use, and
everybody else can use whatever (if anything) is left.  In the long
run, I think it would be cool to have a system where workers can leave
one parallel query in progress and join a different one (or exit and
spawn a new worker to join a different one), automatically rebalancing
as the number of parallel queries in flight fluctuates.  But that's
clearly way beyond anything we can do right now.  I think we should
assume that any parallel workers our process has obtained are ours to
use for the duration of the query, and use them as best we can.  Note
that even if the Parallel Append tells one of the workers that there
are no more tuples and it should go away, some higher level of the
query plan could make a different choice anyway; there might be
another Append elsewhere in the plan tree.

> BTW, there is going to be some logic change in the choose-next-subplan
> algorithm if we consider giving extra workers to subplans.

I'm not sure that it's going to be useful to make this logic very
complicated.  I think the most important thing is to give 1 worker to
each plan before we give a second worker to any plan.  In general I
think it's sufficient to assign a worker that becomes available to the
subplan with the fewest number of workers (or one of them, if there's
a tie) without worrying too much about the target number of workers
for that subplan.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Thu, Feb 16, 2017 at 8:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Feb 15, 2017 at 11:15 PM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> If the user is ready throw 200 workers and if the subplans can use
>> them to speed up the query 200 times (obviously I am exaggerating),
>> why not to use those? When the user set
>> max_parallel_workers_per_gather to that high a number, he meant it to
>> be used by a gather, and that's what we should be doing.
>
> The reason is because of what Amit Khandekar wrote in his email -- you
> get a result with a partitioned table that is wildly inconsistent with
> the result you get for an unpartitioned table.  You could equally well
> argue that if the user sets max_parallel_workers_per_gather to 200,
> and there's a parallel sequential scan of an 8MB table to be
> performed, we ought to use all 200 workers for that.  But the planner
> in fact estimates a much lesser number of workers, because using 200
> workers for that task wastes a lot of resources for no real
> performance benefit.  If you partition that 8MB table into 100 tables
> that are each 80kB, that shouldn't radically increase the number of
> workers that get used.

That's true for a partitioned table, but not necessarily for every append
relation; Amit's patch is generic for all append relations. If the child
plans are joins or subquery segments of set operations, I doubt the same
logic works. It may be better to throw as many workers as those subplans
specify (or some function "summing" those up). I guess we have to use
different logic for append relations which are base relations and append
relations which are not.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 February 2017 at 20:37, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Feb 16, 2017 at 1:34 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> What I was thinking about is something like this:
>>>
>>> 1. First, take the maximum parallel_workers value from among all the children.
>>>
>>> 2. Second, compute log2(num_children)+1 and round up.  So, for 1
>>> child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
>>> for 9-16 children, 5, and so on.
>>>
>>> 3. Use as the number of parallel workers for the children the maximum
>>> of the value computed in step 1 and the value computed in step 2.
>>
>> Ah, now that I closely look at compute_parallel_worker(), I see what
>> you are getting at.
>>
>> For plain unpartitioned table, parallel_workers is calculated as
>> roughly equal to log(num_pages) (actually it is log3). So if the table
>> size is n, the workers will be log(n). So if it is partitioned into p
>> partitions of size n/p each, still the number of workers should be
>> log(n). Whereas, in the patch, it is calculated as (total of all the
>> child workers) i.e. n * log(n/p) for this case. But log(n) != p *
>> log(x/p). For e.g. log(1000) is much less than log(300) + log(300) +
>> log(300).
>>
>> That means, the way it is calculated in the patch turns out to be much
>> larger than if it were calculated using log(total of sizes of all
>> children). So I think for the step 2 above, log(total_rel_size)
>> formula seems to be appropriate. What do you think ? For
>> compute_parallel_worker(), it is actually log3 by the way.
>>
>> BTW this formula is just an extension of how parallel_workers is
>> calculated for an unpartitioned table.
>
> log(total_rel_size) would be a reasonable way to estimate workers when
> we're scanning an inheritance hierarchy, but I'm hoping Parallel
> Append is also going to apply to UNION ALL queries, where there's no
> concept of the total rel size.
Yes, Parallel Append also gets used for UNION ALL.

> For that we need something else, which
> is why the algorithm that I proposed upthread doesn't rely on it.

The log2(num_children)+1 formula which you proposed does not take into
account the number of workers for each of the subplans, that's why I
am a bit more inclined to look for some other logic. May be, treat the
children as if they belong to partitions, and accordingly calculate
the final number of workers. So for 2 children with 4 and 5 workers
respectively, Append parallel_workers would be : log3(3^4 + 3^5) .
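
To spell out what I mean, here is a rough sketch (illustrative only; the
function name is made up, and rounding up is just my interpretation):

/*
 * Sketch (not patch code): treat each child as if it were a partition
 * contributing 3^parallel_workers "units" of work, and take log3 of the
 * total to get the Append's parallel_workers.
 */
#include <math.h>

static int
append_workers_sketch(const int *child_workers, int nchildren)
{
    double      total = 0.0;
    int         i;

    for (i = 0; i < nchildren; i++)
        total += pow(3.0, (double) child_workers[i]);

    /* log3(total), rounded up */
    return (int) ceil(log(total) / log(3.0));
}

For the 2 children with 4 and 5 workers, this gives
ceil(log3(3^4 + 3^5)) = 6.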

>
>>> The decision to use fewer workers for a smaller scan isn't really
>>> because we think that using more workers will cause a regression.
>>> It's because we think it may not help very much, and because it's not
>>> worth firing up a ton of workers for a relatively small scan given
>>> that workers are a limited resource.  I think once we've got a bunch
>>> of workers started, we might as well try to use them.
>>
>> One possible side-effect I see due to this is : Other sessions might
>> not get a fair share of workers due to this. But again, there might be
>> counter argument that, because Append is now focussing all the workers
>> on a last subplan, it may finish faster, and release *all* of its
>> workers earlier.
>
> Right.  I think in general it's pretty clear that there are possible
> fairness problems with parallel query.  The first process that comes
> along seizes however many workers it thinks it should use, and
> everybody else can use whatever (if anything) is left.  In the long
> run, I think it would be cool to have a system where workers can leave
> one parallel query in progress and join a different one (or exit and
> spawn a new worker to join a different one), automatically rebalancing
> as the number of parallel queries in flight fluctuates.  But that's
> clearly way beyond anything we can do right now.  I think we should
> assume that any parallel workers our process has obtained are ours to
> use for the duration of the query, and use them as best we can.

> Note that even if the Parallel Append tells one of the workers that there
> are no more tuples and it should go away, some higher level of the
> query plan could make a different choice anyway; there might be
> another Append elsewhere in the plan tree.
Yeah, that looks like good enough justification for not losing the workers.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:
> Do we have any performance measurements where we see that Goal B
> performs better than Goal A, in such a situation? Do we have any
> performance measurement comparing these two approaches in other
> situations. If implementation for Goal B beats that of Goal A always,
> we can certainly implement it directly. But it may not.

I will get back with some performance numbers.

> Also, separating patches for Goal A and Goal B might make reviews easier.

Do you want the patch in its current state to be split anyway?
Right now, I am not sure exactly how you would want me to split it.

>
>>
>>>
>>> BTW, Right now, the patch does not consider non-partial paths for a
>>> child which has partial paths. Do we know, for sure, that a path
>>> containing partial paths for a child, which has it, is always going to
>>> be cheaper than the one which includes non-partial path. If not,
>>> should we build another paths which contains non-partial paths for all
>>> child relations. This sounds like a 0/1 knapsack problem.
>>
>> I didn't quite get this. We do create a non-partial Append path using
>> non-partial child paths anyways.
>
> Let's say a given child-relation has both partial and non-partial
> paths, your approach would always pick up a partial path. But now that
> parallel append can handle non-partial paths as well, it may happen
> that picking up non-partial path instead of partial one when both are
> available gives an overall better performance. Have we ruled out that
> possibility.

Yes, one Append path can contain a child c1 with a partial path,
another Append path can contain child c1 with a non-partial path, and
each of these combinations can have two more combinations for child2,
and so on, leading to too many Append paths. I think that's what you
referred to as the 0/1 knapsack problem. Right, this does not seem
worth it.

I had earlier considered adding a partial Append path containing only
non-partial paths, but for some reason I had concluded that it's not
worth having this path, as its cost is most likely going to be higher
due to the presence of all single-worker paths *and* a Gather above
them. I should have documented the reason. Let me give this some
thought.

>>>> Let me try keeping the per-subplan max_worker info in Append path
>>>> itself, like I mentioned above. If that works, the bitmap will be
>>>> replaced by max_worker field. In case of non-partial subpath,
>>>> max_worker will be 1. (this is the same info kept in AppendState node
>>>> in the patch, but now we might need to keep it in Append path node as
>>>> well).
>>>
>>> It will be better if we can fetch that information from each subpath
>>> when creating the plan. As I have explained before, a path is minimal
>>> structure, which should be easily disposable, when throwing away the
>>> path.
>>
>> Now in the v2 patch, we store per-subplan worker count. But still, we
>> cannot use the path->parallel_workers to determine whether it's a
>> partial path. This is because even for a non-partial path, it seems
>> the parallel_workers can be non-zero. For e.g., in
>> create_subqueryscan_path(), it sets path->parallel_workers to
>> subpath->parallel_workers. But this path is added as a non-partial
>> path. So we need a separate info as to which of the subpaths in Append
>> path are partial subpaths. So in the v2 patch, I continued to use
>> Bitmapset in AppendPath. But in Append plan node, number of workers is
>> calculated using this bitmapset. Check the new function
>> get_append_num_workers().
>
> If the subpath from childrel->partial_pathlist, then we set the
> corresponding bit in the bitmap. Now we can infer that for any path if
> that path is found in path->parent->partial_pathlist. Since the code
> always chooses the first partial path, the search in partial_pathlist
> should not affect performance. So, we can avoid maintaining a bitmap
> in the path and keep accumulating it when collapsing append paths.

Thanks. I have made these changes accordingly in the attached v4
patch. get_append_num_workers() now uses
linitial(path->parent->partial_pathlist) to determine whether the
subpath is a partial or a non-partial path. The bitmapset field has
been removed from AppendPath.
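
For reference, the check it now does boils down to something like this
(just a sketch of the idea, not the exact patch code):

#include "postgres.h"
#include "nodes/relation.h"

/*
 * Sketch: treat a subpath as partial iff it is the first (i.e. cheapest)
 * path in its parent rel's partial_pathlist; otherwise assume it is a
 * non-partial path and give it at most one worker.
 */
static bool
append_subpath_is_partial(Path *subpath)
{
    RelOptInfo *rel = subpath->parent;

    return (rel->partial_pathlist != NIL &&
            (Path *) linitial(rel->partial_pathlist) == subpath);
}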

>>>>
>>>>> 12. cost_append() essentially adds costs of all the subpaths and then divides
>>>>> by parallel_divisor. This might work if all the subpaths are partial paths. But
>>>>> for the subpaths which are not partial, a single worker will incur the whole
>>>>> cost of that subpath. Hence just dividing all the total cost doesn't seem the
>>>>> right thing to do. We should apply different logic for costing non-partial
>>>>> subpaths and partial subpaths.
>>>>
>>>> WIth the current partial path costing infrastructure, it is assumed
>>>> that a partial path node should return the average per-worker cost.
>>>> Hence, I thought it would be best to do it in a similar way for
>>>> Append. But let me think if we can do something. With the current
>>>> parallelism costing infrastructure, I am not sure though.
>>>
>>> The current parallel mechanism is in sync with that costing. Each
>>> worker is supposed to take the same burden, hence the same (average)
>>> cost. But it will change when a single worker has to scan an entire
>>> child relation and different child relations have different sizes.
>>
>> I gave more thought on this. Considering each subplan has different
>> number of workers, I think it makes sense to calculate average
>> per-worker cost even in parallel Append. In case of non-partial
>> subplan, a single worker will execute it, but it will next choose
>> another subplan. So on average each worker is going to process the
>> same number of rows, and also the same amount of CPU. And that amount
>> of CPU cost and rows cost should be calculated by taking the total
>> count and dividing it by number of workers (parallel_divsor actually).
>>
>
> That's not entirely true. Consider N child relations with chosen paths
> with costs C1, C2, ... CN which are very very different. If there are
> N workers, the total cost should correspond to the highest of the
> costs of subpaths, since no worker will execute more than one plan.
> The unfortunate worker which executes the costliest path would take
> the longest time.

Yeah, there seems to be no straightforward way to compute the total
cost as the maximum of all the subplans' total costs. So the assumption
is that the workers will be distributed roughly equally.

In the new patch, there is a test case output modification for
inherit.sql, because that test case started failing on account of
getting a Parallel Append plan instead of a Merge Append plan for an
inheritance table where seqscan was disabled.


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 February 2017 at 20:37, Robert Haas <robertmhaas@gmail.com> wrote:

> I'm not sure that it's going to be useful to make this logic very
> complicated.  I think the most important thing is to give 1 worker to
> each plan before we give a second worker to any plan.  In general I
> think it's sufficient to assign a worker that becomes available to the
> subplan with the fewest number of workers (or one of them, if there's
> a tie)

> without worrying too much about the target number of workers for that subplan.

The reason I have considered per-subplan workers is, for instance, so
that we can respect the parallel_workers reloption set by the user for
different tables. Or, for example, if subquery1 is a big hash join
needing more workers and subquery2 is a small table requiring far fewer
workers, it seems to make sense to give more workers to subquery1.



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> That's true for a partitioned table, but not necessarily for every
> append relation. Amit's patch is generic for all append relations. If
> the child plans are joins or subquery segments of set operations, I
> doubt if the same logic works. It may be better if we throw as many
> workers (or some function "summing" those up) as specified by those
> subplans. I guess, we have to use different logic for append relations
> which are base relations and append relations which are not base
> relations.

Well, I for one do not believe that if somebody writes a UNION ALL
with 100 branches, they should get 100 (or 99) workers.  Generally
speaking, the sweet spot for parallel workers on queries we've tested
so far has been between 1 and 4.  It's straining credulity to believe
that the number that's correct for parallel append is more than an
order of magnitude larger.  Since increasing resource commitment by
the logarithm of the problem size has worked reasonably well for table
scans, I believe we should pursue a similar approach here.  I'm
willing to negotiate on the details of what the formula looks like,
but I'm not going to commit something that lets an Append relation try
to grab massively more resources than we'd use for some other plan
shape.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Feb 17, 2017 at 2:56 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> The log2(num_children)+1 formula which you proposed does not take into
> account the number of workers for each of the subplans, that's why I
> am a bit more inclined to look for some other logic. May be, treat the
> children as if they belong to partitions, and accordingly calculate
> the final number of workers. So for 2 children with 4 and 5 workers
> respectively, Append parallel_workers would be : log3(3^4 + 3^5) .

In general this will give an answer not different by more than 1 or 2
from my answer, and often exactly the same.  In the case you mention,
whether we get the same answer depends on which way you round:
log3(3^4+3^5) is 5 if you round down, 6 if you round up.

My formula is more aggressive when there are many subplans that are
not parallel or take only 1 worker, because I'll always use at least 5
workers for an append that has 9-16 children, whereas you might use
only 2 if you do log3(3^0+3^0+3^0+3^0+3^0+3^0+3^0+3^0+3^0).  In that
case I like my formula better. With lots of separate children, the
chances of being able to use as many as 5 workers seem good.  (Note
that using 9 workers as Ashutosh seems to be proposing would be a
waste if the different children have very unequal execution times,
because the workers that run children with short execution times can
be reused to run additional subplans while the long ones are still
running.  Running a separate worker for each child only works out if
the shortest runtime is more than 50% of the longest runtime, which
may sometimes be true but doesn't seem like a good bet in general.)

Your formula is more aggressive when you have 3 children that all use
the same number of workers; it'll always decide on <number of workers
per child>+1, whereas mine won't add the extra worker in that case.
Possibly your formula is better than mine in that case, but I'm not
sure.  If you have as many as 9 children that all want N workers, your
formula will decide on N+2 workers, but since my formula guarantees a
minimum of 5 workers in such cases, I'll probably be within 1 of
whatever answer you were getting.

Basically, I don't believe that the log3(n) thing is anything very
special or magical.  The fact that I settled on that formula for
parallel sequential scan doesn't mean that it's exactly right for
every other case.  I do think it's likely that increasing workers
logarithmically is a fairly decent strategy here, but I wouldn't get
hung up on using log3(n) in every case or making all of the answers
100% consistent according to some grand principle.  I'm not even sure
log3(n) is right for parallel sequential scan, so insisting that
Parallel Append has to work that way when I had no better reason than
gut instinct for picking that for Parallel Sequential Scan seems to me
to be a little unprincipled.  We're still in the early stages of this
parallel query experiment, and a decent number of these algorithms are
likely to change as we get more sophisticated.  For now at least, it's
more important to pick things that work well pragmatically than to be
theoretically optimal.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Sun, Feb 19, 2017 at 2:33 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> That's true for a partitioned table, but not necessarily for every
>> append relation. Amit's patch is generic for all append relations. If
>> the child plans are joins or subquery segments of set operations, I
>> doubt if the same logic works. It may be better if we throw as many
>> workers (or some function "summing" those up) as specified by those
>> subplans. I guess, we have to use different logic for append relations
>> which are base relations and append relations which are not base
>> relations.
>
> Well, I for one do not believe that if somebody writes a UNION ALL
> with 100 branches, they should get 100 (or 99) workers.  Generally
> speaking, the sweet spot for parallel workers on queries we've tested
> so far has been between 1 and 4.  It's straining credulity to believe
> that the number that's correct for parallel append is more than an
> order of magnitude larger.  Since increasing resource commitment by
> the logarithm of the problem size has worked reasonably well for table
> scans, I believe we should pursue a similar approach here.

Thanks for that explanation. It makes sense. So, something like this
would work: total number of workers = some function of log(sum of
sizes of relations). The number of workers allotted to each segment
is restricted to the number of workers chosen by the planner while
planning that segment. The patch takes care of that limit right now.
It needs to incorporate the calculation of the total number of
workers for the append.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Feb 20, 2017 at 10:54 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Sun, Feb 19, 2017 at 2:33 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
>> <ashutosh.bapat@enterprisedb.com> wrote:
>>> That's true for a partitioned table, but not necessarily for every
>>> append relation. Amit's patch is generic for all append relations. If
>>> the child plans are joins or subquery segments of set operations, I
>>> doubt if the same logic works. It may be better if we throw as many
>>> workers (or some function "summing" those up) as specified by those
>>> subplans. I guess, we have to use different logic for append relations
>>> which are base relations and append relations which are not base
>>> relations.
>>
>> Well, I for one do not believe that if somebody writes a UNION ALL
>> with 100 branches, they should get 100 (or 99) workers.  Generally
>> speaking, the sweet spot for parallel workers on queries we've tested
>> so far has been between 1 and 4.  It's straining credulity to believe
>> that the number that's correct for parallel append is more than an
>> order of magnitude larger.  Since increasing resource commitment by
>> the logarithm of the problem size has worked reasonably well for table
>> scans, I believe we should pursue a similar approach here.
>
> Thanks for that explanation. It makes sense. So, something like this
> would work: total number of workers = some function of log(sum of
> sizes of relations). The number of workers allotted to each segment
> is restricted to the number of workers chosen by the planner while
> planning that segment. The patch takes care of that limit right now.
> It needs to incorporate the calculation of the total number of
> workers for the append.

log(sum of sizes of relations) isn't well-defined for a UNION ALL query.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 19 February 2017 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Feb 17, 2017 at 2:56 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> The log2(num_children)+1 formula which you proposed does not take into
>> account the number of workers for each of the subplans, that's why I
>> am a bit more inclined to look for some other logic. May be, treat the
>> children as if they belong to partitions, and accordingly calculate
>> the final number of workers. So for 2 children with 4 and 5 workers
>> respectively, Append parallel_workers would be : log3(3^4 + 3^5) .
>
> In general this will give an answer not different by more than 1 or 2
> from my answer, and often exactly the same.  In the case you mention,
> whether we get the same answer depends on which way you round:
> log3(3^4+3^5) is 5 if you round down, 6 if you round up.
>
> My formula is more aggressive when there are many subplans that are
> not parallel or take only 1 worker, because I'll always use at least 5
> workers for an append that has 9-16 children, whereas you might use
> only 2 if you do log3(3^0+3^0+3^0+3^0+3^0+3^0+3^0+3^0+3^0).  In that
> case I like my formula better. With lots of separate children, the
> chances of being able to use as many as 5 workers seem good.  (Note
> that using 9 workers as Ashutosh seems to be proposing would be a
> waste if the different children have very unequal execution times,
> because the workers that run children with short execution times can
> be reused to run additional subplans while the long ones are still
> running.  Running a separate worker for each child only works out if
> the shortest runtime is more than 50% of the longest runtime, which
> may sometimes be true but doesn't seem like a good bet in general.)
>
> Your formula is more aggressive when you have 3 children that all use
> the same number of workers; it'll always decide on <number of workers
> per child>+1, whereas mine won't add the extra worker in that case.
> Possibly your formula is better than mine in that case, but I'm not
> sure.  If you have as many as 9 children that all want N workers, your
> formula will decide on N+2 workers, but since my formula guarantees a
> minimum of 5 workers in such cases, I'll probably be within 1 of
> whatever answer you were getting.
>

Yeah, that seems to be right in most of the cases. The only cases
where your formula seems to give too few workers is for something like
: (2, 8, 8). For such subplans, we should at least allocate 8 workers.
It turns out that in most of the cases in my formula, the Append
workers allocated is just 1 worker more than the max per-subplan
worker count. So in (2, 1, 1, 8), it will be a fraction more than 8.
So in the patch, in addition to the log2() formula you proposed, I
have made sure that it allocates at least equal to max(per-subplan
parallel_workers values).
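
In other words, roughly the following (an illustrative sketch only; the
function name is made up, and the max_parallel_workers_per_gather cap
is applied elsewhere):

#include "postgres.h"
#include <math.h>

/*
 * Sketch: Append's parallel_workers = log2(nchildren) + 1, rounded up,
 * but never less than the largest per-subplan parallel_workers value.
 */
static int
append_parallel_workers_sketch(const int *child_workers, int nchildren)
{
    int         max_child = 0;
    int         log2_bound;
    int         i;

    for (i = 0; i < nchildren; i++)
        max_child = Max(max_child, child_workers[i]);

    log2_bound = (int) ceil(log2((double) nchildren)) + 1;

    return Max(max_child, log2_bound);
}

So for (2, 8, 8) this gives Max(8, 3) = 8, and for (2, 1, 1, 8) it
gives Max(8, 3) = 8.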

>
>> BTW, there is going to be some logic change in the choose-next-subplan
>> algorithm if we consider giving extra workers to subplans.
>
> I'm not sure that it's going to be useful to make this logic very
> complicated.  I think the most important thing is to give 1 worker to
> each plan before we give a second worker to any plan.  In general I
> think it's sufficient to assign a worker that becomes available to the
> subplan with the fewest number of workers (or one of them, if there's
> a tie) without worrying too much about the target number of workers
> for that subplan.

In the attached v5 patch, the logic of distributing the workers is now
kept simple: it just distributes the workers equally, without
considering the per-subplan parallel_workers value. I have retained the
earlier logic of choosing the plan with the minimum number of current
workers. But now that pa_max_workers is not needed, I removed it, and
instead a partial_plans bitmapset is added in the Append node. Once a
worker picks up a non-partial subplan, it immediately changes its
pa_num_workers to -1. Whereas for partial subplans, the worker sets it
to -1 only after it finishes executing it.

Effectively, in parallel_append_next(), the check for whether a subplan
is already executing with its maximum parallel_workers has been
removed, along with all the code that was using pa_max_workers.
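
To make that concrete, the subplan-choosing step is roughly as below (a
sketch only, not the exact patch code; it assumes the patch's shared
ParallelAppendDescData and is meant to run while holding pa_mutex):

static int
parallel_append_choose_next_sketch(AppendState *node,
                                   ParallelAppendDescData *padesc)
{
    int         min_whichplan = PA_INVALID_PLAN;
    int         min_workers = 0;
    int         i;

    /* Pick the unfinished subplan currently running the fewest workers. */
    for (i = 0; i < node->as_nplans; i++)
    {
        int         nworkers = padesc->pa_info[i].pa_num_workers;

        if (nworkers == -1)
            continue;       /* finished, or non-partial plan already taken */

        if (min_whichplan == PA_INVALID_PLAN || nworkers < min_workers)
        {
            min_whichplan = i;
            min_workers = nworkers;
        }
    }

    if (min_whichplan != PA_INVALID_PLAN)
    {
        if (bms_is_member(min_whichplan,
                          ((Append *) node->ps.plan)->partial_subplans_set))
            padesc->pa_info[min_whichplan].pa_num_workers++;
        else
            padesc->pa_info[min_whichplan].pa_num_workers = -1;
    }

    return min_whichplan;
}

The caller then either switches to the returned subplan or, on
PA_INVALID_PLAN, stops scanning.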


Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:
> 10. We should probably move the parallel_safe calculation out of cost_append().
> +            path->parallel_safe = path->parallel_safe &&
> +                                  subpath->parallel_safe;
>
> 11. This check shouldn't be part of cost_append().
> +            /* All child paths must have same parameterization */
> +            Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer));
>

I have moved these two statements out of cost_append(), and instead do
this separately in create_append_path().


Also, I have removed some elog() statements that were executed while
holding the spinlock in parallel_append_next().


On 17 January 2017 at 11:10, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:
> I was looking at the executor portion of this patch and I noticed that in
> exec_append_initialize_next():
>
>     if (appendstate->as_padesc)
>         return parallel_append_next(appendstate);
>
>     /*
>      * Not parallel-aware. Fine, just go on to the next subplan in the
>      * appropriate direction.
>      */
>     if (ScanDirectionIsForward(appendstate->ps.state->es_direction))
>         appendstate->as_whichplan++;
>     else
>         appendstate->as_whichplan--;
>
> which seems to mean that executing Append in parallel mode disregards the
> scan direction.  I am not immediately sure what implications that has, so
> I checked what heap scan does when executing in parallel mode, and found
> this in heapgettup():
>
>     else if (backward)
>     {
>         /* backward parallel scan not supported */
>         Assert(scan->rs_parallel == NULL);
>
> Perhaps, AppendState.as_padesc would not have been set if scan direction
> is backward, because parallel mode would be disabled for the whole query
> in that case (PlannerGlobal.parallelModeOK = false).  Maybe add an
> Assert() similar to one in heapgettup().
>

Right. Thanks for noticing this. I have added a similar Assert in
exec_append_initialize_next().


Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Mar 8, 2017 at 2:00 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Yeah, that seems to be right in most of the cases. The only cases
> where your formula seems to give too few workers is for something like
> : (2, 8, 8). For such subplans, we should at least allocate 8 workers.
> It turns out that in most of the cases in my formula, the Append
> workers allocated is just 1 worker more than the max per-subplan
> worker count. So in (2, 1, 1, 8), it will be a fraction more than 8.
> So in the patch, in addition to the log2() formula you proposed, I
> have made sure that it allocates at least equal to max(per-subplan
> parallel_workers values).

Yeah, I agree with that.

Some review:

+typedef struct ParallelAppendDescData
+{
+    slock_t        pa_mutex;        /* mutual exclusion to choose next subplan */
+    ParallelAppendInfo pa_info[FLEXIBLE_ARRAY_MEMBER];
+} ParallelAppendDescData;

Instead of having ParallelAppendInfo, how about just int
pa_workers[FLEXIBLE_ARRAY_MEMBER]?  The second structure seems like
overkill, at least for now.
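
Something like this, for instance (sketch):

typedef struct ParallelAppendDescData
{
    slock_t     pa_mutex;       /* mutual exclusion to choose next subplan */
    int         pa_workers[FLEXIBLE_ARRAY_MEMBER]; /* per-subplan worker count */
} ParallelAppendDescData;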

+static inline void
+exec_append_scan_first(AppendState *appendstate)
+{
+    appendstate->as_whichplan = 0;
+}

I don't think this is buying you anything, and suggest backing it out.

+        /* Backward scan is not supported by parallel-aware plans */
+        Assert(!ScanDirectionIsBackward(appendstate->ps.state->es_direction));

I think you could assert ScanDirectionIsForward, couldn't you?
NoMovement, I assume, is right out.

+            elog(DEBUG2, "ParallelAppend : pid %d : all plans already finished",
+                         MyProcPid);

Please remove (and all similar cases also).

+                 sizeof(*node->as_padesc->pa_info) * node->as_nplans);

I'd use the type name instead.

+    for (i = 0; i < node->as_nplans; i++)
+    {
+        /*
+         * Just setting all the number of workers to 0 is enough. The logic
+         * of choosing the next plan in workers will take care of everything
+         * else.
+         */
+        padesc->pa_info[i].pa_num_workers = 0;
+    }

Here I'd use memset.
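
For example (a sketch, assuming ParallelAppendInfo stays):

    memset(padesc->pa_info, 0,
           sizeof(ParallelAppendInfo) * node->as_nplans);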

+    return (min_whichplan == PA_INVALID_PLAN ? false : true);

Maybe just return (min_whichplan != PA_INVALID_PLAN);

-                                              childrel->cheapest_total_path);
+                                               childrel->cheapest_total_path);

Unnecessary.

+        {
+            partial_subpaths = accumulate_append_subpath(partial_subpaths,
+                                     linitial(childrel->partial_pathlist));
+        }

Don't need to add braces.

+            /*
+             * Extract the first unparameterized, parallel-safe one among the
+             * child paths.
+             */

Can we use get_cheapest_parallel_safe_total_inner for this, from
a71f10189dc10a2fe422158a2c9409e0f77c6b9e?

+        if (rel->partial_pathlist != NIL &&
+            (Path *) linitial(rel->partial_pathlist) == subpath)
+            partial_subplans_set = bms_add_member(partial_subplans_set, i);

This seems like a scary way to figure this out.  What if we wanted to
build a parallel append subpath with some path other than the
cheapest, for some reason?  I think you ought to record the decision
that set_append_rel_pathlist makes about whether to use a partial path
or a parallel-safe path, and then just copy it over here.

-                create_append_path(grouped_rel,
-                                   paths,
-                                   NULL,
-                                   0);
+                create_append_path(grouped_rel, paths, NULL, 0);

Unnecessary.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
>
> +        if (rel->partial_pathlist != NIL &&
> +            (Path *) linitial(rel->partial_pathlist) == subpath)
> +            partial_subplans_set = bms_add_member(partial_subplans_set, i);
>
> This seems like a scary way to figure this out.  What if we wanted to
> build a parallel append subpath with some path other than the
> cheapest, for some reason?  I think you ought to record the decision
> that set_append_rel_pathlist makes about whether to use a partial path
> or a parallel-safe path, and then just copy it over here.
>

I agree that assuming that a subpath is non-partial path if it's not
cheapest of the partial paths is risky. In fact, we can not assume
that even when it's not one of the partial_paths since it could have
been kicked out or was never added to the partial path list like
reparameterized path. But if we have to save the information about
which of the subpaths are partial paths and which are not in
AppendPath, it would take some memory, noticeable for thousands of
partitions, which will leak if the path doesn't make into the
rel->pathlist. The purpose of that information is to make sure that we
allocate only one worker to that plan. I suggested that we use
path->parallel_workers for the same, but it seems that's not
guaranteed to be reliable. The reasons were discussed upthread. Is
there any way to infer whether we can allocate more than one workers
to a plan by looking at the corresponding path?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
>>
>> +        if (rel->partial_pathlist != NIL &&
>> +            (Path *) linitial(rel->partial_pathlist) == subpath)
>> +            partial_subplans_set = bms_add_member(partial_subplans_set, i);
>>
>> This seems like a scary way to figure this out.  What if we wanted to
>> build a parallel append subpath with some path other than the
>> cheapest, for some reason?  I think you ought to record the decision
>> that set_append_rel_pathlist makes about whether to use a partial path
>> or a parallel-safe path, and then just copy it over here.
>
> I agree that assuming that a subpath is non-partial path if it's not
> cheapest of the partial paths is risky. In fact, we can not assume
> that even when it's not one of the partial_paths since it could have
> been kicked out or was never added to the partial path list like
> reparameterized path. But if we have to save the information about
> which of the subpaths are partial paths and which are not in
> AppendPath, it would take some memory, noticeable for thousands of
> partitions, which will leak if the path doesn't make into the
> rel->pathlist.

True, but that's no different from the situation for any other Path
node that has substructure.  For example, an IndexPath has no fewer
than 5 list pointers in it.  Generally we assume that the number of
paths won't be large enough for the memory used to really matter, and
I think that will also be true here.  And an AppendPath has a list of
subpaths, and if I'm not mistaken, those list nodes consume more
memory than the tracking information we're thinking about here will.

I think you're thinking about this issue because you've been working
on partitionwise join where memory consumption is a big issue, but
there are a lot of cases where that isn't really a big deal.

> The purpose of that information is to make sure that we
> allocate only one worker to that plan. I suggested that we use
> path->parallel_workers for the same, but it seems that's not
> guaranteed to be reliable. The reasons were discussed upthread. Is
> there any way to infer whether we can allocate more than one workers
> to a plan by looking at the corresponding path?

I think it would be smarter to track it some other way.  Either keep
two lists of paths, one of which is the partial paths and the other of
which is the parallel-safe paths, or keep a bitmapset indicating which
paths fall into which category.  I am not going to say there's no way
we could make it work without either of those things -- looking at the
parallel_workers flag might be made to work, for example -- but the
design idea I had in mind when I put this stuff into place was that
you keep them separate in other ways, not by the data they store
inside them.  I think it will be more robust if we keep to that
principle.
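
Concretely, something like these candidate additions to AppendPath
(only a sketch; the field names are made up):

    /* Alternative 1: keep two separate lists */
    List       *partial_subpaths;      /* subpaths chosen as partial paths */
    List       *nonpartial_subpaths;   /* parallel-safe, non-partial subpaths */

    /* Alternative 2: one subpaths list plus a bitmapset */
    Bitmapset  *partial_subpath_set;   /* which subpath indexes are partial */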

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Thu, Mar 9, 2017 at 6:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>>>
>>> +        if (rel->partial_pathlist != NIL &&
>>> +            (Path *) linitial(rel->partial_pathlist) == subpath)
>>> +            partial_subplans_set = bms_add_member(partial_subplans_set, i);
>>>
>>> This seems like a scary way to figure this out.  What if we wanted to
>>> build a parallel append subpath with some path other than the
>>> cheapest, for some reason?  I think you ought to record the decision
>>> that set_append_rel_pathlist makes about whether to use a partial path
>>> or a parallel-safe path, and then just copy it over here.
>>
>> I agree that assuming that a subpath is non-partial path if it's not
>> cheapest of the partial paths is risky. In fact, we can not assume
>> that even when it's not one of the partial_paths since it could have
>> been kicked out or was never added to the partial path list like
>> reparameterized path. But if we have to save the information about
>> which of the subpaths are partial paths and which are not in
>> AppendPath, it would take some memory, noticeable for thousands of
>> partitions, which will leak if the path doesn't make into the
>> rel->pathlist.
>
> True, but that's no different from the situation for any other Path
> node that has substructure.  For example, an IndexPath has no fewer
> than 5 list pointers in it.  Generally we assume that the number of
> paths won't be large enough for the memory used to really matter, and
> I think that will also be true here.  And an AppendPath has a list of
> subpaths, and if I'm not mistaken, those list nodes consume more
> memory than the tracking information we're thinking about here will.
>

What I have observed is that we try to keep the memory usage to a
minimum, trying to avoid memory consumption as much as possible. Most
of that substructure gets absorbed by the planner or is shared across
paths. Append path lists are an exception to that, but we need
something to hold all subpaths together and list is PostgreSQL's way
of doing it. So, that's kind of unavoidable. And may be we will find
some reason for almost every substructure in paths.

> I think you're thinking about this issue because you've been working
> on partitionwise join where memory consumption is a big issue, but
> there are a lot of cases where that isn't really a big deal.

:).

>
>> The purpose of that information is to make sure that we
>> allocate only one worker to that plan. I suggested that we use
>> path->parallel_workers for the same, but it seems that's not
>> guaranteed to be reliable. The reasons were discussed upthread. Is
>> there any way to infer whether we can allocate more than one workers
>> to a plan by looking at the corresponding path?
>
> I think it would be smarter to track it some other way.  Either keep
> two lists of paths, one of which is the partial paths and the other of
> which is the parallel-safe paths, or keep a bitmapset indicating which
> paths fall into which category.

I like two lists: it consumes almost no memory (two list headers
instead of one) compared to non-parallel-append when there are
non-partial paths and what more, it consumes no extra memory when all
paths are partial.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 10 March 2017 at 10:13, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Thu, Mar 9, 2017 at 6:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>>
>>>> +        if (rel->partial_pathlist != NIL &&
>>>> +            (Path *) linitial(rel->partial_pathlist) == subpath)
>>>> +            partial_subplans_set = bms_add_member(partial_subplans_set, i);
>>>>
>>>> This seems like a scary way to figure this out.  What if we wanted to
>>>> build a parallel append subpath with some path other than the
>>>> cheapest, for some reason?

Yes, there was an assumption that an append subpath will be either the
cheapest non-partial path or the cheapest (i.e. first in the list)
partial path, although the patch has no Asserts to make sure that a
common rule has been followed in both these places.

>>>> I think you ought to record the decision
>>>> that set_append_rel_pathlist makes about whether to use a partial path
>>>> or a parallel-safe path, and then just copy it over here.
>>>
>>> I agree that assuming that a subpath is non-partial path if it's not
>>> cheapest of the partial paths is risky. In fact, we can not assume
>>> that even when it's not one of the partial_paths since it could have
>>> been kicked out or was never added to the partial path list like
>>> reparameterized path. But if we have to save the information about
>>> which of the subpaths are partial paths and which are not in
>>> AppendPath, it would take some memory, noticeable for thousands of
>>> partitions, which will leak if the path doesn't make into the
>>> rel->pathlist.
>>
>> True, but that's no different from the situation for any other Path
>> node that has substructure.  For example, an IndexPath has no fewer
>> than 5 list pointers in it.  Generally we assume that the number of
>> paths won't be large enough for the memory used to really matter, and
>> I think that will also be true here.  And an AppendPath has a list of
>> subpaths, and if I'm not mistaken, those list nodes consume more
>> memory than the tracking information we're thinking about here will.
>>
>
> What I have observed is that we try to keep the memory usage to a
> minimum, trying to avoid memory consumption as much as possible. Most
> of that substructure gets absorbed by the planner or is shared across
> paths. Append path lists are an exception to that, but we need
> something to hold all subpaths together and list is PostgreSQL's way
> of doing it. So, that's kind of unavoidable. And may be we will find
> some reason for almost every substructure in paths.
>
>> I think you're thinking about this issue because you've been working
>> on partitionwise join where memory consumption is a big issue, but
>> there are a lot of cases where that isn't really a big deal.
>
> :).
>
>>
>>> The purpose of that information is to make sure that we
>>> allocate only one worker to that plan. I suggested that we use
>>> path->parallel_workers for the same, but it seems that's not
>>> guaranteed to be reliable. The reasons were discussed upthread. Is
>>> there any way to infer whether we can allocate more than one workers
>>> to a plan by looking at the corresponding path?
>>
>> I think it would be smarter to track it some other way.  Either keep
>> two lists of paths, one of which is the partial paths and the other of
>> which is the parallel-safe paths, or keep a bitmapset indicating which
>> paths fall into which category.
>
> I like two lists: it consumes almost no memory (two list headers
> instead of one) compared to non-parallel-append when there are
> non-partial paths and what more, it consumes no extra memory when all
> paths are partial.

I agree that the two-lists approach will consume less memory than a
bitmapset. Keeping two lists effectively adds one extra pointer field
to the AppendPath size, but that size will not grow with the number of
subpaths, whereas the Bitmapset will grow.

But as far as code is concerned, I think the two-list approach will
turn out to be less simple if we derive corresponding two different
arrays in AppendState node. Handling two different arrays during
execution does not look clean. Whereas, the bitmapset that I have used
in Append has turned out to be very simple. I just had to do the below
check (and that is the only location) to see if it's a partial or
non-partial subplan. There is nowhere else any special handling for
non-partial subpath.

/*
* Increment worker count for the chosen node, if at all we found one.
* For non-partial plans, set it to -1 instead, so that no other workers
* run it.
*/
if (min_whichplan != PA_INVALID_PLAN)
{
   if (bms_is_member(min_whichplan,
((Append*)state->ps.plan)->partial_subplans_set))
           padesc->pa_info[min_whichplan].pa_num_workers++;
   else
           padesc->pa_info[min_whichplan].pa_num_workers = -1;
}

Now, since Bitmapset field is used during execution with such
simplicity, why not have this same data structure in AppendPath, and
re-use bitmapset field in Append plan node without making a copy of
it. Otherwise, if we have two lists in AppendPath, and a bitmap in
Append, again there is going to be code for data structure conversion.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
>
> But as far as code is concerned, I think the two-list approach will
> turn out to be less simple if we derive corresponding two different
> arrays in AppendState node. Handling two different arrays during
> execution does not look clean. Whereas, the bitmapset that I have used
> in Append has turned out to be very simple. I just had to do the below
> check (and that is the only location) to see if it's a partial or
> non-partial subplan. There is nowhere else any special handling for
> non-partial subpath.
>
> /*
> * Increment worker count for the chosen node, if at all we found one.
> * For non-partial plans, set it to -1 instead, so that no other workers
> * run it.
> */
> if (min_whichplan != PA_INVALID_PLAN)
> {
>    if (bms_is_member(min_whichplan,
> ((Append*)state->ps.plan)->partial_subplans_set))
>            padesc->pa_info[min_whichplan].pa_num_workers++;
>    else
>            padesc->pa_info[min_whichplan].pa_num_workers = -1;
> }
>
> Now, since Bitmapset field is used during execution with such
> simplicity, why not have this same data structure in AppendPath, and
> re-use bitmapset field in Append plan node without making a copy of
> it. Otherwise, if we have two lists in AppendPath, and a bitmap in
> Append, again there is going to be code for data structure conversion.
>

I think there is some merit in separating out non-parallel and
parallel plans within the same array or outside it. The current logic
to assign plan to a worker looks at all the plans, unnecessarily
hopping over the un-parallel ones after they are given to a worker. If
we separate those two, we can keep assigning new workers to the
non-parallel plans first and then iterate over the parallel ones when
a worker needs a plan to execute. We might eliminate the need for
special value -1 for num workers. You may separate those two kinds in
two different arrays or within the same array and remember the
smallest index of a parallel plan.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
>>
>> But as far as code is concerned, I think the two-list approach will
>> turn out to be less simple if we derive corresponding two different
>> arrays in AppendState node. Handling two different arrays during
>> execution does not look clean. Whereas, the bitmapset that I have used
>> in Append has turned out to be very simple. I just had to do the below
>> check (and that is the only location) to see if it's a partial or
>> non-partial subplan. There is nowhere else any special handling for
>> non-partial subpath.
>>
>> /*
>> * Increment worker count for the chosen node, if at all we found one.
>> * For non-partial plans, set it to -1 instead, so that no other workers
>> * run it.
>> */
>> if (min_whichplan != PA_INVALID_PLAN)
>> {
>>    if (bms_is_member(min_whichplan,
>> ((Append*)state->ps.plan)->partial_subplans_set))
>>            padesc->pa_info[min_whichplan].pa_num_workers++;
>>    else
>>            padesc->pa_info[min_whichplan].pa_num_workers = -1;
>> }
>>
>> Now, since Bitmapset field is used during execution with such
>> simplicity, why not have this same data structure in AppendPath, and
>> re-use bitmapset field in Append plan node without making a copy of
>> it. Otherwise, if we have two lists in AppendPath, and a bitmap in
>> Append, again there is going to be code for data structure conversion.
>>
>
> I think there is some merit in separating out non-parallel and
> parallel plans within the same array or outside it. The current logic
> to assign plan to a worker looks at all the plans, unnecessarily
> hopping over the un-parallel ones after they are given to a worker. If
> we separate those two, we can keep assigning new workers to the
> non-parallel plans first and then iterate over the parallel ones when
> a worker needs a plan to execute. We might eliminate the need for
> special value -1 for num workers. You may separate those two kinds in
> two different arrays or within the same array and remember the
> smallest index of a parallel plan.

Further to that, with this scheme and the scheme to distribute workers
equally irrespective of the maximum workers per plan, you don't need
to "scan" the subplans to find the one with minimum workers. If you
treat the array of parallel plans as a circular queue, the plan to be
assigned next to a worker will always be the plan next to the one
which got assigned to the given worker. Once you have assigned workers
to non-parallel plans, intialize a shared variable next_plan to point
to the first parallel plan. When a worker comes asking for a plan,
assign the plan pointed by next_plan and update it to the next plan in
the circular queue.
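
Roughly like the following (just a sketch; pa_next_plan,
pa_first_parallel and pa_nplans are hypothetical fields, not something
in the current patch):

static int
next_parallel_subplan_sketch(ParallelAppendDescData *padesc)
{
    int         whichplan;

    SpinLockAcquire(&padesc->pa_mutex);
    whichplan = padesc->pa_next_plan;
    if (++padesc->pa_next_plan >= padesc->pa_nplans)
        padesc->pa_next_plan = padesc->pa_first_parallel;   /* wrap around */
    SpinLockRelease(&padesc->pa_mutex);

    return whichplan;
}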

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 10 March 2017 at 12:33, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>>>
>>> But as far as code is concerned, I think the two-list approach will
>>> turn out to be less simple if we derive corresponding two different
>>> arrays in AppendState node. Handling two different arrays during
>>> execution does not look clean. Whereas, the bitmapset that I have used
>>> in Append has turned out to be very simple. I just had to do the below
>>> check (and that is the only location) to see if it's a partial or
>>> non-partial subplan. There is nowhere else any special handling for
>>> non-partial subpath.
>>>
>>> /*
>>> * Increment worker count for the chosen node, if at all we found one.
>>> * For non-partial plans, set it to -1 instead, so that no other workers
>>> * run it.
>>> */
>>> if (min_whichplan != PA_INVALID_PLAN)
>>> {
>>>    if (bms_is_member(min_whichplan,
>>> ((Append*)state->ps.plan)->partial_subplans_set))
>>>            padesc->pa_info[min_whichplan].pa_num_workers++;
>>>    else
>>>            padesc->pa_info[min_whichplan].pa_num_workers = -1;
>>> }
>>>
>>> Now, since Bitmapset field is used during execution with such
>>> simplicity, why not have this same data structure in AppendPath, and
>>> re-use bitmapset field in Append plan node without making a copy of
>>> it. Otherwise, if we have two lists in AppendPath, and a bitmap in
>>> Append, again there is going to be code for data structure conversion.
>>>
>>
>> I think there is some merit in separating out non-parallel and
>> parallel plans within the same array or outside it. The current logic
>> to assign plan to a worker looks at all the plans, unnecessarily
>> hopping over the un-parallel ones after they are given to a worker. If
>> we separate those two, we can keep assigning new workers to the
>> non-parallel plans first and then iterate over the parallel ones when
>> a worker needs a plan to execute. We might eliminate the need for
>> special value -1 for num workers. You may separate those two kinds in
>> two different arrays or within the same array and remember the
>> smallest index of a parallel plan.

Do you think we might get performance benefit with this ? I am looking
more towards logic simplicity. non-parallel plans would be mostly
likely be there only in case of UNION ALL queries, and not partitioned
tables. And UNION ALL queries probably would have far lesser number of
subplans, there won't be too many unnecessary hops. The need for
num_workers=-1 will still be there for partial plans, because we need
to set it to -1 once a worker finishes a plan.

>
> Further to that, with this scheme and the scheme to distribute workers
> equally irrespective of the maximum workers per plan, you don't need
> to "scan" the subplans to find the one with minimum workers. If you
> treat the array of parallel plans as a circular queue, the plan to be
> assigned next to a worker will always be the plan next to the one
> which got assigned to the given worker.

> Once you have assigned workers
> to non-parallel plans, intialize a shared variable next_plan to point
> to the first parallel plan. When a worker comes asking for a plan,
> assign the plan pointed by next_plan and update it to the next plan in
> the circular queue.

At some point of time, this logic may stop working. Imagine plans are
running with (1, 1, 1). Next worker goes to plan 1, so they run with
(2, 1, 1). So now the next_plan points to plan 2. Now suppose worker
on plan 2 finishes. It should not again take plan 2, even though
next_plan points to 2. It should take plan 3, or whichever is not
finished. May be a worker that finishes a plan should do this check
before directly going to the next_plan. But if this is turning out as
simple as the finding-min-worker-plan, we can use this logic. But will
have to check. We can anyway consider this even when we have a single
list.

>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
>>>>
>>>
>>> I think there is some merit in separating out non-parallel and
>>> parallel plans within the same array or outside it. The current logic
>>> to assign plan to a worker looks at all the plans, unnecessarily
>>> hopping over the un-parallel ones after they are given to a worker. If
>>> we separate those two, we can keep assigning new workers to the
>>> non-parallel plans first and then iterate over the parallel ones when
>>> a worker needs a plan to execute. We might eliminate the need for
>>> special value -1 for num workers. You may separate those two kinds in
>>> two different arrays or within the same array and remember the
>>> smallest index of a parallel plan.
>
> Do you think we might get performance benefit with this ? I am looking
> more towards logic simplicity. non-parallel plans would be mostly
> likely be there only in case of UNION ALL queries, and not partitioned
> tables. And UNION ALL queries probably would have far lesser number of
> subplans, there won't be too many unnecessary hops.

A partitioned table which has both foreign and local partitions would
have both non-parallel and parallel plans if the foreign plans cannot
be parallelized, as is the case with postgres_fdw.

> The need for
> num_workers=-1 will still be there for partial plans, because we need
> to set it to -1 once a worker finishes a plan.
>

IIRC, we do that so that no other workers are assigned to it when
scanning the array of plans. But with the new scheme we don't need to
scan the non-parallel plans when assigning a plan to a worker, so -1
may not be needed. I may be wrong though.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 10 March 2017 at 14:05, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
>> The need for
>> num_workers=-1 will still be there for partial plans, because we need
>> to set it to -1 once a worker finishes a plan.
>>
>
> IIRC, we do that so that no other workers are assigned to it when
> scanning the array of plans. But with the new scheme we don't need to
> scan the non-parallel plans for when assigning plan to workers so -1
> may not be needed. I may be wrong though.
>

Still, when a worker finishes a partial subplan, it marks it as -1 so
that no new workers pick it up, even if other workers are already
executing it.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
"Tels"
Date:
Moin,

On Fri, March 10, 2017 3:24 am, Amit Khandekar wrote:
> On 10 March 2017 at 12:33, Ashutosh Bapat
> <ashutosh.bapat@enterprisedb.com> wrote:
>> On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
>> <ashutosh.bapat@enterprisedb.com> wrote:
>>>>
>>>> But as far as code is concerned, I think the two-list approach will
>>>> turn out to be less simple if we derive corresponding two different
>>>> arrays in AppendState node. Handling two different arrays during
>>>> execution does not look clean. Whereas, the bitmapset that I have used
>>>> in Append has turned out to be very simple. I just had to do the below
>>>> check (and that is the only location) to see if it's a partial or
>>>> non-partial subplan. There is nowhere else any special handling for
>>>> non-partial subpath.
>>>>
>>>> /*
>>>> * Increment worker count for the chosen node, if at all we found one.
>>>> * For non-partial plans, set it to -1 instead, so that no other
>>>> workers
>>>> * run it.
>>>> */
>>>> if (min_whichplan != PA_INVALID_PLAN)
>>>> {
>>>>    if (bms_is_member(min_whichplan,
>>>> ((Append*)state->ps.plan)->partial_subplans_set))
>>>>            padesc->pa_info[min_whichplan].pa_num_workers++;
>>>>    else
>>>>            padesc->pa_info[min_whichplan].pa_num_workers = -1;
>>>> }
>>>>
>>>> Now, since Bitmapset field is used during execution with such
>>>> simplicity, why not have this same data structure in AppendPath, and
>>>> re-use bitmapset field in Append plan node without making a copy of
>>>> it. Otherwise, if we have two lists in AppendPath, and a bitmap in
>>>> Append, again there is going to be code for data structure conversion.
>>>>
>>>
>>> I think there is some merit in separating out non-parallel and
>>> parallel plans within the same array or outside it. The current logic
>>> to assign plan to a worker looks at all the plans, unnecessarily
>>> hopping over the un-parallel ones after they are given to a worker. If
>>> we separate those two, we can keep assigning new workers to the
>>> non-parallel plans first and then iterate over the parallel ones when
>>> a worker needs a plan to execute. We might eliminate the need for
>>> special value -1 for num workers. You may separate those two kinds in
>>> two different arrays or within the same array and remember the
>>> smallest index of a parallel plan.
>
> Do you think we might get performance benefit with this ? I am looking
> more towards logic simplicity. non-parallel plans would be mostly
> likely be there only in case of UNION ALL queries, and not partitioned
> tables. And UNION ALL queries probably would have far lesser number of
> subplans, there won't be too many unnecessary hops. The need for
> num_workers=-1 will still be there for partial plans, because we need
> to set it to -1 once a worker finishes a plan.
>
>>
>> Further to that, with this scheme and the scheme to distribute workers
>> equally irrespective of the maximum workers per plan, you don't need
>> to "scan" the subplans to find the one with minimum workers. If you
>> treat the array of parallel plans as a circular queue, the plan to be
>> assigned next to a worker will always be the plan next to the one
>> which got assigned to the given worker.
>
>> Once you have assigned workers
>> to non-parallel plans, intialize a shared variable next_plan to point
>> to the first parallel plan. When a worker comes asking for a plan,
>> assign the plan pointed by next_plan and update it to the next plan in
>> the circular queue.
>
> At some point of time, this logic may stop working. Imagine plans are
> running with (1, 1, 1). Next worker goes to plan 1, so they run with
> (2, 1, 1). So now the next_plan points to plan 2. Now suppose worker
> on plan 2 finishes. It should not again take plan 2, even though
> next_plan points to 2. It should take plan 3, or whichever is not
> finished. May be a worker that finishes a plan should do this check
> before directly going to the next_plan. But if this is turning out as
> simple as the finding-min-worker-plan, we can use this logic. But will
> have to check. We can anyway consider this even when we have a single
> list.

Just a question for me to understand the implementation details vs. the
strategy:

Have you considered how the scheduling decision might impact performance
due to "inter-plan parallelism vs. in-plan parallelism"?

So what would be the scheduling strategy? And should there be a fixed one
or user-influencable? And what could be good ones?

A simple example:

E.g. if we have 5 subplans, and each can have at most 5 workers and we
have 5 workers overall.

So, do we:
  Assign 5 workers to plan 1. Let it finish.
  Then assign 5 workers to plan 2. Let it finish.
  and so on

or:
 Assign 1 worker to each plan until no workers are left?

In the second case you would have 5 plans running in a quasi-sequential
manner, which might be slower than the other way. Or not; that probably
needs some benchmarks?

Likewise, if you have a mix of plans with max workers like:
  Plan A: 1 worker
  Plan B: 2 workers
  Plan C: 3 workers
  Plan D: 1 worker
  Plan E: 4 workers

Would the strategy be:
* Serve them in first-come-first-served order? (A,B,C,D?) (Would the order
  here be random due to how the plans emerge, i.e. could the user re-order
  the query to get a different order?)
* Serve them in max-workers order? (A,D,B,C)
* Serve first all with 1 worker, then fill the rest? (A,D,B,C | A,D,C,B)
* Serve them by some other metric, e.g. index-only scans first, seq-scans
  last? Or a mix of all these?

Excuse me if I just didn't see this from the thread so far. :)

Best regards,

Tels



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
After giving more thought to our discussions, I have used the Bitmapset
structure in AppendPath, as against having two lists, one for partial
and the other for non-partial paths. Attached is patch v6, which has the
required changes. So accumulate_append_subpath() now also prepares the
bitmapset containing the information about which paths are partial
paths. This is what I had done in the first version.

At this point I have not given sufficient thought to Ashutosh's proposal
of just keeping track of the next_subplan, where we keep assigning
workers to a circle of subplans in round-robin style. But as of now, the
approach of choosing the subplan with the minimum number of workers
looks pretty simple, so patch v6 is in working condition using the
minimum-worker approach.

On 9 March 2017 at 07:22, Robert Haas <robertmhaas@gmail.com> wrote:

> Some review:
>
> +typedef struct ParallelAppendDescData
> +{
> +    slock_t        pa_mutex;        /* mutual exclusion to choose
> next subplan */
> +    ParallelAppendInfo pa_info[FLEXIBLE_ARRAY_MEMBER];
> +} ParallelAppendDescData;
>
> Instead of having ParallelAppendInfo, how about just int
> pa_workers[FLEXIBLE_ARRAY_MEMBER]?  The second structure seems like
> overkill, at least for now.

For now, I have kept the structure there, just in case we add something
to it after further discussion.

>
> +static inline void
> +exec_append_scan_first(AppendState *appendstate)
> +{
> +    appendstate->as_whichplan = 0;
> +}
>
> I don't think this is buying you anything, and suggest backing it out.

This is required for sequential Append, so that we can start executing
from the first subplan.

>
> +        /* Backward scan is not supported by parallel-aware plans */
> +        Assert(!ScanDirectionIsBackward(appendstate->ps.state->es_direction));
>
> I think you could assert ScanDirectionIsForward, couldn't you?
> NoMovement, I assume, is right out.

Right. Changed.

>
> +            elog(DEBUG2, "ParallelAppend : pid %d : all plans already
> finished",
> +                         MyProcPid);
>
> Please remove (and all similar cases also).

Removed at multiple places.

>
> +                 sizeof(*node->as_padesc->pa_info) * node->as_nplans);
>
> I'd use the type name instead.

Done.

>
> +    for (i = 0; i < node->as_nplans; i++)
> +    {
> +        /*
> +         * Just setting all the number of workers to 0 is enough. The logic
> +         * of choosing the next plan in workers will take care of everything
> +         * else.
> +         */
> +        padesc->pa_info[i].pa_num_workers = 0;
> +    }
>
> Here I'd use memset.

Done.

>
> +    return (min_whichplan == PA_INVALID_PLAN ? false : true);
>
> Maybe just return (min_whichplan != PA_INVALID_PLAN);

Done.

>
> -                                              childrel->cheapest_total_path);
> +
> childrel->cheapest_total_path);
>
> Unnecessary.

This call now takes more parameters, so I kept the change.
>
> +        {
>              partial_subpaths = accumulate_append_subpath(partial_subpaths,
>                                         linitial(childrel->partial_pathlist));
> +        }
>
> Don't need to add braces.

Removed them.

>
> +            /*
> +             * Extract the first unparameterized, parallel-safe one among the
> +             * child paths.
> +             */
>
> Can we use get_cheapest_parallel_safe_total_inner for this, from
> a71f10189dc10a2fe422158a2c9409e0f77c6b9e?

Yes, Fixed.

>
> +        if (rel->partial_pathlist != NIL &&
> +            (Path *) linitial(rel->partial_pathlist) == subpath)
> +            partial_subplans_set = bms_add_member(partial_subplans_set, i);
>
> This seems like a scary way to figure this out.  What if we wanted to
> build a parallel append subpath with some path other than the
> cheapest, for some reason?  I think you ought to record the decision
> that set_append_rel_pathlist makes about whether to use a partial path
> or a parallel-safe path, and then just copy it over here.

As mentioned above, used Bitmapset in AppendPath.

>
> -                create_append_path(grouped_rel,
> -                                   paths,
> -                                   NULL,
> -                                   0);
> +                create_append_path(grouped_rel, paths, NULL, 0);
>
> Unnecessary.

Since there was anyway a change in the number of parameters, I kept the
single-line call.

Please refer to attached patch version v6 for all of the above changes.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Mar 10, 2017 at 12:17 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> I agree that the two-lists approach will consume less memory than
> bitmapset. Keeping two lists will effectively have an extra pointer
> field which will add up to the AppendPath size, but this size will not
> grow with the number of subpaths, whereas the Bitmapset will grow.

Sure.  You'll use about one BIT of memory per subpath.  I'm kind of
baffled as to why we're treating this as an issue worth serious
discussion; the amount of memory involved is clearly very small.  Even
for an appendrel with 1000 children, that's 125 bytes of memory.
Considering the amount of memory we're going to spend planning that
appendrel overall, that's not significant.

However, Ashutosh's response made me think of something: one thing is
that we probably do want to group all of the non-partial plans at the
beginning of the Append so that they get workers first, and put the
partial plans afterward.  That's because the partial plans can always
be accelerated by adding more workers as they become available, but
the non-partial plans are just going to take as long as they take - so
we want to start them as soon as possible.  In fact, what we might
want to do is actually sort the non-partial paths in order of
decreasing cost, putting the most expensive one first and the others
in decreasing order after that - and then similarly afterward with the
partial paths.  If we did that, we wouldn't need to store a bitmapset
OR two separate lists.  We could just store the index of the first
partial plan in the list.  Then you can test whether a path is partial
by checking whether this_index >= first_partial_index.
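
A minimal sketch of that test, assuming the first-partial index ends up
in the executor state under a hypothetical as_first_partial_plan field:

/* Hypothetical field name; illustrates the this_index >= first_partial_index test */
static inline bool
is_partial_subplan(AppendState *node, int whichplan)
{
    return whichplan >= node->as_first_partial_plan;
}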

One problem with that is that, since the leader has about a 4ms head
start on the other workers, it would tend to pick the most expensive
path to run locally before any other worker had a chance to make a
selection, and that's probably not what we want.  To fix that, let's
have the leader start at the end of the list of plans and work
backwards towards the beginning, so that it prefers cheaper and
partial plans over decisions that would force it to undertake a large
amount of work itself.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Mar 10, 2017 at 8:12 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> +static inline void
>> +exec_append_scan_first(AppendState *appendstate)
>> +{
>> +    appendstate->as_whichplan = 0;
>> +}
>>
>> I don't think this is buying you anything, and suggest backing it out.
>
> This is required for sequential Append, so that we can start executing
> from the first subplan.

My point is that there's really no point in defining a static inline
function containing one line of code.  You could just put that line of
code in whatever places need it, which would probably be more clear.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Mar 10, 2017 at 6:01 AM, Tels <nospam-pg-abuse@bloodgate.com> wrote:
> Just a question for me to understand the implementation details vs. the
> strategy:
>
> Have you considered how the scheduling decision might impact performance
> due to "inter-plan parallelism vs. in-plan parallelism"?
>
> So what would be the scheduling strategy? And should there be a fixed one
> or user-influencable? And what could be good ones?
>
> A simple example:
>
> E.g. if we have 5 subplans, and each can have at most 5 workers and we
> have 5 workers overall.
>
> So, do we:
>
>   Assign 5 workers to plan 1. Let it finish.
>   Then assign 5 workers to plan 2. Let it finish.
>   and so on
>
> or:
>
>   Assign 1 workers to each plan until no workers are left?

Currently, we do the first of those, but I'm pretty sure the second is
way better.  For example, suppose each subplan has a startup cost.  If
you have all the workers pile on each plan in turn, every worker pays
the startup cost for every subplan.  If you spread them out, then
subplans can get finished without being visited by all workers, and
then the other workers never pay those costs.  Moreover, you reduce
contention for spinlocks, condition variables, etc.  It's not
impossible to imagine a scenario where having all workers pile on one
subplan at a time works out better: for example, suppose you have a
table with lots of partitions all of which are on the same disk, and
it's actually one physical spinning disk, not an SSD or a disk array
or anything, and the query is completely I/O-bound.  Well, it could
be, in that scenario, that spreading out the workers is going to turn
sequential I/O into random I/O and that might be terrible.  In most
cases, though, I think you're going to be better off.  If the
partitions are on different spindles or if there's some slack I/O
capacity for prefetching, you're going to come out ahead, maybe way
ahead.  If you come out behind, then you're evidently totally I/O
bound and have no capacity for I/O parallelism; in that scenario, you
should probably just turn parallel query off altogether, because
you're not going to benefit from it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
"Tels"
Date:
Moin,

On Sat, March 11, 2017 11:29 pm, Robert Haas wrote:
> On Fri, Mar 10, 2017 at 6:01 AM, Tels <nospam-pg-abuse@bloodgate.com>
> wrote:
>> Just a question for me to understand the implementation details vs. the
>> strategy:
>>
>> Have you considered how the scheduling decision might impact performance
>> due to "inter-plan parallelism vs. in-plan parallelism"?
>>
>> So what would be the scheduling strategy? And should there be a fixed
>> one
>> or user-influencable? And what could be good ones?
>>
>> A simple example:
>>
>> E.g. if we have 5 subplans, and each can have at most 5 workers and we
>> have 5 workers overall.
>>
>> So, do we:
>>
>>   Assign 5 workers to plan 1. Let it finish.
>>   Then assign 5 workers to plan 2. Let it finish.
>>   and so on
>>
>> or:
>>
>>   Assign 1 workers to each plan until no workers are left?
>
> Currently, we do the first of those, but I'm pretty sure the second is
> way better.  For example, suppose each subplan has a startup cost.  If
> you have all the workers pile on each plan in turn, every worker pays
> the startup cost for every subplan.  If you spread them out, then
> subplans can get finished without being visited by all workers, and
> then the other workers never pay those costs.  Moreover, you reduce
> contention for spinlocks, condition variables, etc.  It's not
> impossible to imagine a scenario where having all workers pile on one
> subplan at a time works out better: for example, suppose you have a
> table with lots of partitions all of which are on the same disk, and
> it's actually one physical spinning disk, not an SSD or a disk array
> or anything, and the query is completely I/O-bound.  Well, it could
> be, in that scenario, that spreading out the workers is going to turn
> sequential I/O into random I/O and that might be terrible.  In most
> cases, though, I think you're going to be better off.  If the
> partitions are on different spindles or if there's some slack I/O
> capacity for prefetching, you're going to come out ahead, maybe way
> ahead.  If you come out behind, then you're evidently totally I/O
> bound and have no capacity for I/O parallelism; in that scenario, you
> should probably just turn parallel query off altogether, because
> you're not going to benefit from it.

I agree with the proposition that both strategies can work well, or not,
depending on the system setup, the tables and the data layout. I'd be a
bit more worried about turning it into the "random I/O" case, but that's
still just a feeling and guesswork.

So which one will be better seems speculative, hence the question about
benchmarking different strategies.

So, I'd like to see the scheduler factored out into a single place,
maybe a function that gets called with the number of currently running
workers, the max. number of workers to be expected, the new worker, and
the list of plans still to do, and that then schedules that single
worker to one of these plans by strategy X.

That would make it easier to swap out X for Y and see how it fares,
wouldn't it?


However, I don't think the patch needs to select the optimal strategy
right from the beginning (if that even exists; maybe it's a mixed
strategy). Even "not so optimal" parallelism will be better than doing
everything sequentially.

Best regards,

Tels



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 10 March 2017 at 22:08, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Mar 10, 2017 at 12:17 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> I agree that the two-lists approach will consume less memory than
>> bitmapset. Keeping two lists will effectively have an extra pointer
>> field which will add up to the AppendPath size, but this size will not
>> grow with the number of subpaths, whereas the Bitmapset will grow.
>
> Sure.  You'll use about one BIT of memory per subpath.  I'm kind of
> baffled as to why we're treating this as an issue worth serious
> discussion; the amount of memory involved is clearly very small.  Even
> for an appendrel with 1000 children, that's 125 bytes of memory.
> Considering the amount of memory we're going to spend planning that
> appendrel overall, that's not significant.

Yes, I agree that we should instead consider other factors, like code
simplicity, to determine which data structure we should use in
AppendPath.

>
> However, Ashutosh's response made me think of something: one thing is
> that we probably do want to group all of the non-partial plans at the
> beginning of the Append so that they get workers first, and put the
> partial plans afterward.  That's because the partial plans can always
> be accelerated by adding more workers as they become available, but
> the non-partial plans are just going to take as long as they take - so
> we want to start them as soon as possible.  In fact, what we might
> want to do is actually sort the non-partial paths in order of
> decreasing cost, putting the most expensive one first and the others
> in decreasing order after that - and then similarly afterward with the
> partial paths.  If we did that, we wouldn't need to store a bitmapset
> OR two separate lists.  We could just store the index of the first
> partial plan in the list.  Then you can test whether a path is partial
> by checking whether this_index >= first_partial_index.

I agree that we should preferably have the non-partial plans started
first. But I am not sure if it is really worth ordering the partial
plans by cost. The reason we ended up not keeping track of the
per-subplan parallel_workers is that it would not matter much: we would
just distribute the workers equally among all the subplans regardless of
how big they are. Even if smaller plans get more workers, they will
finish faster, and those workers would then be available to the larger
subplans sooner.

Anyway, I have given some thought to the logic of choosing the next
plan, and that is irrespective of whether the list is sorted. I have
included Ashutosh's proposal of scanning the array round-robin, as
against finding the minimum, since that method will automatically
distribute the workers evenly. Also, the logic uses a single array and
keeps track of the first partial plan: the first section of the array is
non-partial, followed by the partial plans. Below is the algorithm.
There might be corner cases which I have not yet taken into account, but
first I wanted to get an agreement on whether this looks OK to go ahead
with. Since it does not find the minimum worker count, it no longer uses
pa_num_workers. Instead it has a boolean field painfo->pa_finished.

parallel_append_next(AppendState *state)
{
    /* Make a note of which subplan we have started with */
    initial_plan = padesc->next_plan;

    /*
     * Keep going to the next plan until we find an unfinished one. In the
     * process, also keep track of the first unfinished subplan. As the
     * non-partial subplans are taken one by one, the first unfinished
     * subplan will shift ahead, so that we don't have to scan these anymore.
     */
    whichplan = initial_plan;
    for (;;)
    {
        ParallelAppendInfo *painfo = &padesc->pa_info[whichplan];

        /*
         * Ignore plans that are already done processing. These also include
         * non-partial subplans which have already been taken by a worker.
         */
        if (!painfo->pa_finished)
        {
            /*
             * If this is a non-partial plan, immediately mark it finished,
             * and shift ahead first_plan.
             */
            if (whichplan < padesc->first_partial_plan)
            {
                padesc->pa_info[whichplan].pa_finished = true;
                padesc->first_plan++;
            }

            break;
        }

        /* Either go to the next index, or wrap around to the first unfinished one */
        whichplan = goto_next_plan(whichplan, padesc->first_plan,
                                   padesc->as_nplans - 1);

        /* Have we scanned all subplans? If yes, we are done. */
        if (whichplan == initial_plan)
            break;
    }

    /* If we didn't find any plan to execute, stop executing. */
    if (whichplan == initial_plan || whichplan == PA_INVALID_PLAN)
        return false;
    else
    {
        /* Set the chosen plan, and also the next plan to be picked by other workers */
        state->as_whichplan = whichplan;
        padesc->next_plan = goto_next_plan(whichplan, padesc->first_plan,
                                           padesc->as_nplans - 1);
        return true;
    }
}

/* Either go to the next index, or wrap around to the first unfinished one */
int
goto_next_plan(curplan, first_plan, last_plan)
{
    if (curplan + 1 <= last_plan)
        return curplan + 1;
    else
        return first_plan;
}

>
> One problem with that is that, since the leader has about a 4ms head
> start on the other workers, it would tend to pick the most expensive
> path to run locally before any other worker had a chance to make a
> selection, and that's probably not what we want.  To fix that, let's
> have the leader start at the end of the list of plans and work
> backwards towards the beginning, so that it prefers cheaper and
> partial plans over decisions that would force it to undertake a large
> amount of work itself.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 12 March 2017 at 19:31, Tels <nospam-pg-abuse@bloodgate.com> wrote:
> Moin,
>
> On Sat, March 11, 2017 11:29 pm, Robert Haas wrote:
>> On Fri, Mar 10, 2017 at 6:01 AM, Tels <nospam-pg-abuse@bloodgate.com>
>> wrote:
>>> Just a question for me to understand the implementation details vs. the
>>> strategy:
>>>
>>> Have you considered how the scheduling decision might impact performance
>>> due to "inter-plan parallelism vs. in-plan parallelism"?
>>>
>>> So what would be the scheduling strategy? And should there be a fixed
>>> one
>>> or user-influencable? And what could be good ones?
>>>
>>> A simple example:
>>>
>>> E.g. if we have 5 subplans, and each can have at most 5 workers and we
>>> have 5 workers overall.
>>>
>>> So, do we:
>>>
>>>   Assign 5 workers to plan 1. Let it finish.
>>>   Then assign 5 workers to plan 2. Let it finish.
>>>   and so on
>>>
>>> or:
>>>
>>>   Assign 1 workers to each plan until no workers are left?
>>
>> Currently, we do the first of those, but I'm pretty sure the second is
>> way better.  For example, suppose each subplan has a startup cost.  If
>> you have all the workers pile on each plan in turn, every worker pays
>> the startup cost for every subplan.  If you spread them out, then
>> subplans can get finished without being visited by all workers, and
>> then the other workers never pay those costs.  Moreover, you reduce
>> contention for spinlocks, condition variables, etc.  It's not
>> impossible to imagine a scenario where having all workers pile on one
>> subplan at a time works out better: for example, suppose you have a
>> table with lots of partitions all of which are on the same disk, and
>> it's actually one physical spinning disk, not an SSD or a disk array
>> or anything, and the query is completely I/O-bound.  Well, it could
>> be, in that scenario, that spreading out the workers is going to turn
>> sequential I/O into random I/O and that might be terrible.  In most
>> cases, though, I think you're going to be better off.  If the
>> partitions are on different spindles or if there's some slack I/O
>> capacity for prefetching, you're going to come out ahead, maybe way
>> ahead.  If you come out behind, then you're evidently totally I/O
>> bound and have no capacity for I/O parallelism; in that scenario, you
>> should probably just turn parallel query off altogether, because
>> you're not going to benefit from it.
>
> I agree with the proposition that both strategies can work well, or not,
> depending on system-setup, the tables and data layout. I'd be a bit more
> worried about turning it into the "random-io-case", but that's still just
> a feeling and guesswork.
>
> So which one will be better seems speculative, hence the question for
> benchmarking different strategies.
>
> So, I'd like to see the scheduler be out in a single place, maybe a
> function that get's called with the number of currently running workers,
> the max. number of workers to be expected, the new worker, the list of
> plans still todo, and then schedules that single worker to one of these
> plans by strategy X.
>
> That would make it easier to swap out X for Y and see how it fares,
> wouldn't it?

Yes, actually pretty much all of the scheduler logic is in a single
function, parallel_append_next().

>
>
> However, I don't think the patch needs to select the optimal strategy
> right from the beginning (if that even exists, maybe it's a mixed
> strategy), even "not so optimal" parallelism will be better than doing all
> things sequentially.
>
> Best regards,
>
> Tels



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Mar 13, 2017 at 4:59 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> I agree that we should preferably have the non-partial plans started
> first. But I am not sure if it is really worth ordering the partial
> plans by cost. The reason we ended up not keeping track of the
> per-subplan parallel_worker, is because it would not matter  much ,
> and we would just equally distribute the workers among all regardless
> of how big the subplans are. Even if smaller plans get more worker,
> they will finish faster, and workers would be available to larger
> subplans sooner.

Imagine that the plan costs are 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, and 10
and you have 2 workers.

If you move that 10 to the front, this will finish in 10 time units.
If you leave it at the end, it will take 15 time units.
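
(Worked out, under the assumption that each participant simply takes the
next unfinished plan: with the 10 at the front, one worker runs it while
the other clears the ten 1-cost plans, so both finish at time 10; with
the 10 at the end, the two workers first spend 5 units each on the
1-cost plans and then one of them runs the 10, finishing at 5 + 10 = 15.)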

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Mar 13, 2017 at 7:46 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Mar 13, 2017 at 4:59 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> I agree that we should preferably have the non-partial plans started
>> first. But I am not sure if it is really worth ordering the partial
>> plans by cost. The reason we ended up not keeping track of the
>> per-subplan parallel_worker, is because it would not matter  much ,
>> and we would just equally distribute the workers among all regardless
>> of how big the subplans are. Even if smaller plans get more worker,
>> they will finish faster, and workers would be available to larger
>> subplans sooner.
>
> Imagine that the plan costs are 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, and 10
> and you have 2 workers.
>
> If you move that 10 to the front, this will finish in 10 time units.
> If you leave it at the end, it will take 15 time units.

Oh, never mind.  You were only asking whether we should sort partial
plans.  That's a lot less important, and maybe not important at all.
The only consideration there is whether we might try to avoid having
the leader start in on a plan with a large startup cost.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 12 March 2017 at 08:50, Robert Haas <robertmhaas@gmail.com> wrote:
>> However, Ashutosh's response made me think of something: one thing is
>> that we probably do want to group all of the non-partial plans at the
>> beginning of the Append so that they get workers first, and put the
>> partial plans afterward.  That's because the partial plans can always
>> be accelerated by adding more workers as they become available, but
>> the non-partial plans are just going to take as long as they take - so
>> we want to start them as soon as possible.  In fact, what we might
>> want to do is actually sort the non-partial paths in order of
>> decreasing cost, putting the most expensive one first and the others
>> in decreasing order after that - and then similarly afterward with the
>> partial paths.  If we did that, we wouldn't need to store a bitmapset
>> OR two separate lists.  We could just store the index of the first
>> partial plan in the list.  Then you can test whether a path is partial
>> by checking whether this_index >= first_partial_index.

Attached is an updated patch v7, which does the above. Now,
AppendState->subplans has all non-partial subplans followed by all
partial subplans, with the non-partial subplans in the order of
descending total cost. Also, for convenience, the AppendPath now has
similar ordering in its AppendPath->subpaths. So there is a new field in
both Append and AppendPath, first_partial_path/plan, which has the value
0 if there are no non-partial subpaths.

Also, the leader backend now scans in reverse, so that it does not take
up the most expensive path.

There are also some changes in the costing. Now that we know that the
very first path is the costliest non-partial path, we can use its total
cost as the total cost of the Append in case all the partial path costs
are lower.

Modified/enhanced an existing test scenario in
src/test/regress/select_parallel.sql so that Parallel Append is
covered.

As suggested by Robert, since pa_info->pa_finished was the only field
in pa_info, removed the ParallelAppendDescData.pa_info structure, and
instead brought pa_info->pa_finished into ParallelAppendDescData.
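
For reference, the shared descriptor then looks roughly like the sketch
below; pa_next_plan is an assumed name based on the algorithm discussed
upthread, so treat this as illustrative rather than the exact v7 layout:

typedef struct ParallelAppendDescData
{
    slock_t   pa_mutex;        /* protects the fields below */
    int       pa_next_plan;    /* next subplan to hand out (assumed name) */
    bool      pa_finished[FLEXIBLE_ARRAY_MEMBER];  /* per-subplan "done" flags */
} ParallelAppendDescData;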

>>> +static inline void
>>> +exec_append_scan_first(AppendState *appendstate)
>>> +{
>>> +    appendstate->as_whichplan = 0;
>>> +}
>>>
>>> I don't think this is buying you anything, and suggest backing it out.
>>
>> This is required for sequential Append, so that we can start executing
>> from the first subplan.
>
> My point is that there's really no point in defining a static inline
> function containing one line of code.  You could just put that line of
> code in whatever places need it, which would probably be more clear.

Did the same.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Attachment

Re: [HACKERS] Parallel Append implementation

From
Ashutosh Bapat
Date:
On Thu, Mar 16, 2017 at 3:57 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 12 March 2017 at 08:50, Robert Haas <robertmhaas@gmail.com> wrote:
>>> However, Ashutosh's response made me think of something: one thing is
>>> that we probably do want to group all of the non-partial plans at the
>>> beginning of the Append so that they get workers first, and put the
>>> partial plans afterward.  That's because the partial plans can always
>>> be accelerated by adding more workers as they become available, but
>>> the non-partial plans are just going to take as long as they take - so
>>> we want to start them as soon as possible.  In fact, what we might
>>> want to do is actually sort the non-partial paths in order of
>>> decreasing cost, putting the most expensive one first and the others
>>> in decreasing order after that - and then similarly afterward with the
>>> partial paths.  If we did that, we wouldn't need to store a bitmapset
>>> OR two separate lists.  We could just store the index of the first
>>> partial plan in the list.  Then you can test whether a path is partial
>>> by checking whether this_index >= first_partial_index.
>
> Attached is an updated patch v7, which does the above. Now,
> AppendState->subplans has all non-partial subplans followed by all
> partial subplans, with the non-partial subplans in the order of
> descending total cost. Also, for convenience, the AppendPath also now
> has similar ordering in its AppendPath->subpaths. So there is a new
> field both in Append and AppendPath : first_partial_path/plan, which
> has value 0 if there are no non-partial subpaths.
>
> Also the backend now scans reverse, so that it does not take up the
> most expensive path.
>
> There are also some changes in the costing done. Now that we know that
> the very first path is the costliest non-partial path, we can use its
> total cost as the total cost of Append in case all the partial path
> costs are lesser.
>
> Modified/enhanced an existing test scenario in
> src/test/regress/select_parallel.sql so that Parallel Append is
> covered.
>
> As suggested by Robert, since pa_info->pa_finished was the only field
> in pa_info, removed the ParallelAppendDescData.pa_info structure, and
> instead brought pa_info->pa_finished into ParallelAppendDescData.
>
>>>> +static inline void
>>>> +exec_append_scan_first(AppendState *appendstate)
>>>> +{
>>>> +    appendstate->as_whichplan = 0;
>>>> +}
>>>>
>>>> I don't think this is buying you anything, and suggest backing it out.
>>>
>>> This is required for sequential Append, so that we can start executing
>>> from the first subplan.
>>
>> My point is that there's really no point in defining a static inline
>> function containing one line of code.  You could just put that line of
>> code in whatever places need it, which would probably be more clear.
>
> Did the same.

Some comments
+         * Check if we are already finished plans from parallel append. This
+         * can happen if all the subplans are finished when this worker
+         * has not even started returning tuples.
+         */
+        if (node->as_padesc && node->as_whichplan == PA_INVALID_PLAN)
+            return ExecClearTuple(node->ps.ps_ResultTupleSlot);
From the comment, it looks like this condition will be encountered before the
backend returns any tuple. But this code is part of the loop which returns the
tuples. Shouldn't this be outside the loop? Why do we want to check a condition
for every row returned when the condition can happen only once and that too
before returning any tuple?

Why do we need following code in both ExecAppendInitializeWorker() and
ExecAppendInitializeDSM()? Both of those things happen before starting the
actual execution, so one of those should suffice?
+    /* Choose the optimal subplan to be executed. */
+    (void) parallel_append_next(node);

There is no pa_num_workers now, so probably this comment should get
updated. Per the comment, we should also get rid of SpinLockAcquire()
and SpinLockRelease()?
+ *        purpose. The spinlock is used so that it does not change the
+ *        pa_num_workers field while workers are choosing the next node.

BTW, sa_finished seems to be a misnomer. The plan is not finished yet,
but it wants no more workers. So, should it be renamed as
sa_no_new_workers or something like that?

In parallel_append_next() we shouldn't need to call goto_next_plan() twice. If
the plan indicated by pa_next_plan is finished, all the plans must have
finished. This should be true if we set pa_next_plan to 0 at the time of
initialization. Any worker picking up pa_next_plan will set it to the next
valid plan. So the next worker asking for plan should pick pa_next_plan and
set it to the next one and so on.

I am wondering whether goto_next_plan() can be simplified with some
modulo arithmetic, e.g. ((whichplan - first_plan) + 1) %
(last_plan - first_plan) + first_plan.
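
Something like this sketch, for instance, keeping the same wrap-around
behaviour as the if/else version in the earlier mail:

/* Illustrative modulo form of goto_next_plan(); wraps within [first_plan, last_plan] */
static int
goto_next_plan(int curplan, int first_plan, int last_plan)
{
    return first_plan + (curplan - first_plan + 1) % (last_plan - first_plan + 1);
}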

I am still reviewing the patch.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Mar 16, 2017 at 8:48 AM, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> Why do we need following code in both ExecAppendInitializeWorker() and
> ExecAppendInitializeDSM()? Both of those things happen before starting the
> actual execution, so one of those should suffice?
> +    /* Choose the optimal subplan to be executed. */
> +    (void) parallel_append_next(node);

ExecAppendInitializeWorker runs only in workers, but
ExecAppendInitializeDSM runs only in the leader.

> BTW, sa_finished seems to be a misnomor. The plan is not finished yet, but it
> wants no more workers. So, should it be renamed as sa_no_new_workers or
> something like that?

I think that's not going to improve clarity.  The comments can clarify
the exact semantics.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Mar 16, 2017 at 6:27 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Attached is an updated patch v7, which does the above.

Some comments:

- You've added a GUC (which is good) but not documented it (which is
bad) or added it to postgresql.conf.sample (also bad).

- You've used a loop inside a spinlock-protected critical section,
which is against project policy.  Use an LWLock; define and document a
new builtin tranche ID.

- The comment for pa_finished claims that it is the number of workers
executing the subplan, but it's a bool, not a count; I think this
comment is just out of date.

- paths_insert_sorted_by_cost() is a hand-coded insertion sort.  Can't
we find a way to use qsort() for this instead of hand-coding a slower
algorithm?  I think we could just create an array of the right length,
stick each path into it from add_paths_to_append_rel, and then qsort()
the array based on <is-partial, total-cost>.  Then the result can be
turned into a list.

- Maybe the new helper functions in nodeAppend.c could get names
starting with exec_append_, to match the style of
exec_append_initialize_next().

- There's a superfluous whitespace change in add_paths_to_append_rel.

- The substantive changes in add_paths_to_append_rel don't look right
either.  It's not clear why accumulate_partialappend_subpath is
getting called even in the non-enable_parallelappend case.  I don't
think the logic for the case where we're not generating a parallel
append path needs to change at all.

- When parallel append is enabled, I think add_paths_to_append_rel
should still consider all the same paths that it does today, plus one
extra.  The new path is a parallel append path where each subpath is
the cheapest subpath for that childrel, whether partial or
non-partial.  If !enable_parallelappend, or if all of the cheapest
subpaths are partial, then skip this.  (If all the cheapest subpaths
are non-partial, it's still potentially useful.)  In other words,
don't skip consideration of parallel append just because you have a
partial path available for every child rel; it could be

- I think the way cost_append() works is not right.  What you've got
assumes that you can just multiply the cost of a partial plan by the
parallel divisor to recover the total cost, which is not true because
we don't divide all elements of the plan cost by the parallel divisor
-- only the ones that seem like they should be divided.  Also, it
could be smarter about what happens with the costs of non-partial
paths. I suggest the following algorithm instead.

1. Add up all the costs of the partial paths.  Those contribute
directly to the final cost of the Append.  This ignores the fact that
the Append may escalate the parallel degree, but I think we should
just ignore that problem for now, because we have no real way of
knowing what the impact of that is going to be.

2. Next, estimate the cost of the non-partial paths.  To do this, make
an array of Cost of that length and initialize all the elements to
zero, then add the total cost of each non-partial plan in turn to the
element of the array with the smallest cost, and then take the maximum
of the array elements as the total cost of the non-partial plans.  Add
this to the result from step 1 to get the total cost.

- In get_append_num_workers, instead of the complicated formula with
log() and 0.693, just add the list lengths and call fls() on the
result.  Integer arithmetic FTW!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 17 March 2017 at 01:37, Robert Haas <robertmhaas@gmail.com> wrote:
> - You've added a GUC (which is good) but not documented it (which is
> bad) or added it to postgresql.conf.sample (also bad).
>
> - You've used a loop inside a spinlock-protected critical section,
> which is against project policy.  Use an LWLock; define and document a
> new builtin tranche ID.
>
> - The comment for pa_finished claims that it is the number of workers
> executing the subplan, but it's a bool, not a count; I think this
> comment is just out of date.

Yes, agreed. Will fix the above.

>
> - paths_insert_sorted_by_cost() is a hand-coded insertion sort.  Can't
> we find a way to use qsort() for this instead of hand-coding a slower
> algorithm?  I think we could just create an array of the right length,
> stick each path into it from add_paths_to_append_rel, and then qsort()
> the array based on <is-partial, total-cost>.  Then the result can be
> turned into a list.

Yeah, I was in two minds as to whether to do the copy-to-array-and-qsort
thing, or to just write the same number of lines of code to manually do
an insertion sort. Actually I was searching for whether we already have
a linked-list sort, but it seems we don't. Will do the qsort now, since
it would be faster.
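
For reference, a rough sketch of what that comparator could look like;
the helper struct and names are hypothetical, just to capture the
<is-partial, total-cost> ordering:

typedef struct
{
    Path   *path;
    bool    is_partial;
} AppendSortItem;               /* hypothetical helper for the qsort step */

static int
append_subpath_cmp(const void *a, const void *b)
{
    const AppendSortItem *ia = (const AppendSortItem *) a;
    const AppendSortItem *ib = (const AppendSortItem *) b;

    /* non-partial paths sort before partial ones */
    if (ia->is_partial != ib->is_partial)
        return ia->is_partial ? 1 : -1;
    /* within each group, order by descending total cost */
    if (ia->path->total_cost != ib->path->total_cost)
        return (ia->path->total_cost < ib->path->total_cost) ? 1 : -1;
    return 0;
}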

>
> - Maybe the new helper functions in nodeAppend.c could get names
> starting with exec_append_, to match the style of
> exec_append_initialize_next().
>
> - There's a superfluous whitespace change in add_paths_to_append_rel.

Will fix this.

>
> - The substantive changes in add_paths_to_append_rel don't look right
> either.  It's not clear why accumulate_partialappend_subpath is
> getting called even in the non-enable_parallelappend case.  I don't
> think the logic for the case where we're not generating a parallel
> append path needs to change at all.

When accumulate_partialappend_subpath() is called for a childrel with a
partial path, it works just like accumulate_append_subpath() does when
enable_parallelappend is false. That's why, for a partial child path,
the same function is called irrespective of the parallel-append or
non-parallel-append case. Maybe mentioning this in the comments should
suffice here?

>
> - When parallel append is enabled, I think add_paths_to_append_rel
> should still consider all the same paths that it does today, plus one
> extra.  The new path is a parallel append path where each subpath is
> the cheapest subpath for that childrel, whether partial or
> non-partial.  If !enable_parallelappend, or if all of the cheapest
> subpaths are partial, then skip this.  (If all the cheapest subpaths
> are non-partial, it's still potentially useful.)

In the case of all-partial childrels, the paths are *exactly* the same
as those that would have been created for enable_parallelappend=off. The
extra path is there for enable_parallelappend=on only when one or more
of the child rels do not have partial paths. Does this make sense?

> In other words,
> don't skip consideration of parallel append just because you have a
> partial path available for every child rel; it could be

I didn't get this. Are you saying that in the patch it is getting
skipped if enable_parallelappend = off? I don't think so. For
all-partial child rels, a partial Append is always created. The only
thing is, in case of enable_parallelappend=off, the Append path is not
parallel_aware, so it executes just like it executes today under Gather,
without being parallel-aware.

>
> - I think the way cost_append() works is not right.  What you've got
> assumes that you can just multiply the cost of a partial plan by the
> parallel divisor to recover the total cost, which is not true because
> we don't divide all elements of the plan cost by the parallel divisor
> -- only the ones that seem like they should be divided.

Yes, that was an approximation. For those subpaths for which there is no
parallel_divisor, we cannot calculate the total cost considering the
number of workers for the subpath. I feel we should consider the
per-subpath parallel_workers somehow. The Path->total_cost for a partial
path is *always* the per-worker cost, right? Just want to confirm this
assumption of mine.

> Also, it
> could be smarter about what happens with the costs of non-partial
> paths. I suggest the following algorithm instead.
>
> 1. Add up all the costs of the partial paths.  Those contribute
> directly to the final cost of the Append.  This ignores the fact that
> the Append may escalate the parallel degree, but I think we should
> just ignore that problem for now, because we have no real way of
> knowing what the impact of that is going to be.

I wanted to take into account per-subpath parallel_workers for the total
cost of Append. Suppose the partial subpaths have per-worker total costs
(3, 3, 3) and their parallel_workers are (2, 8, 4), with 2 Append
workers available. So according to what you say, the total cost is 9.
With per-subplan parallel_workers taken into account, total cost
= (3*2 + 3*8 + 3*4)/2 = 21.

Maybe I didn't follow exactly what you suggested. Is your logic not
taking into account the number of workers? I am assuming you are
calculating the per-worker total cost here.

>
> 2. Next, estimate the cost of the non-partial paths.  To do this, make
> an array of Cost of that length and initialize all the elements to
> zero, then add the total cost of each non-partial plan in turn to the
> element of the array with the smallest cost, and then take the maximum
> of the array elements as the total cost of the non-partial plans.  Add
> this to the result from step 1 to get the total cost.

So with costs (8, 5, 2), add 8 and 5 to 2 so that it becomes (8, 5, 15),
and so the max is 15? I surely am misinterpreting this.

Actually, I couldn't come up with a general formula to find the
non-partial paths' total cost, given the per-subplan costs and the
number of workers. I mean, we can manually find out the total cost, but
turning it into a formula seems quite involved. We could even do a dry
run of workers consuming each of the subplan slots and find the total
time units taken, but finding some approximation seemed OK.

For example, we can manually find the total time units taken for the following :
costs (8, 2, 2, 2) with 2 workers : 8
costs (6, 6, 4, 1) with 2 workers : 10.
costs (6, 6, 4, 1) with 3 workers : 6.

But coming up with an algorithm or a formula didn't look worth it. So I
just took the total cost and divided it by the number of workers.
Besides that, I took the maximum of the 1st plan's cost (since it is the
highest) and the average of the total. I understand it would be too much
of an approximation for some cases, but another thing is, we don't know
how to take into account some of the workers shifting to partial plans.
That shift may be quite fuzzy, since all workers may not shift to the
partial plans together.
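
Spelled out, that approximation is roughly:

    append_cost = Max(cost of costliest non-partial subpath,
                      sum of all subpath costs / num_workers)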

>
> - In get_append_num_workers, instead of the complicated formula with
> log() and 0.693, just add the list lengths and call fls() on the
> result.  Integer arithmetic FTW!

Yeah, fls() could be used. BTW I just found that costsize.c already has
this defined in the same way I did:
#define LOG2(x)  (log(x) / 0.693147180559945)
Maybe we need to move this to some common header file.
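
A sketch of the integer-arithmetic version; treat the signature and the
clamp as illustrative rather than the final form:

static int
get_append_num_workers(List *partial_subpaths, List *nonpartial_subpaths)
{
    int     nsubpaths = list_length(partial_subpaths) +
                        list_length(nonpartial_subpaths);

    /* fls(1) = 1, fls(3) = 2, fls(7) = 3 ... roughly log2(nsubpaths) + 1 */
    return Min(fls(nsubpaths), max_parallel_workers_per_gather);
}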



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 March 2017 at 18:18, Ashutosh Bapat
<ashutosh.bapat@enterprisedb.com> wrote:
> +         * Check if we are already finished plans from parallel append. This
> +         * can happen if all the subplans are finished when this worker
> +         * has not even started returning tuples.
> +         */
> +        if (node->as_padesc && node->as_whichplan == PA_INVALID_PLAN)
> +            return ExecClearTuple(node->ps.ps_ResultTupleSlot);
> From the comment, it looks like this condition will be encountered before the
> backend returns any tuple. But this code is part of the loop which returns the
> tuples. Shouldn't this be outside the loop? Why do we want to check a condition
> for every row returned when the condition can happen only once and that too
> before returning any tuple?

The way ExecProcNode() gets called, there is no separate special code
that gets called instead of ExecProcNode() when a tuple is fetched for
the first time. I mean, we cannot prevent ExecProcNode() from getting
called when as_whichplan is invalid right from the beginning.

One thing we can do is: have a special slot in AppendState->as_plan[]
which holds some dummy execution node that just returns a NULL tuple,
and initially make as_whichplan point to this slot. But I think it is
not worth doing this.

We can instead reduce the if condition to:

if (node->as_whichplan == PA_INVALID_PLAN)
{
    Assert(node->as_padesc != NULL);
    return ExecClearTuple(node->ps.ps_ResultTupleSlot);
}

BTW, the loop which you mentioned that returns tuples... that loop is
not for returning tuples; it is for iterating to the next subplan. Even
if we take the condition out and keep it at the beginning of ExecAppend,
the issue will remain.

>
> Why do we need following code in both ExecAppendInitializeWorker() and
> ExecAppendInitializeDSM()? Both of those things happen before starting the
> actual execution, so one of those should suffice?
> +    /* Choose the optimal subplan to be executed. */
> +    (void) parallel_append_next(node);

ExecAppendInitializeWorker() is for a worker to attach (and then
initialize its own local data) to the dsm area created and shared by
ExecAppendInitializeDSM() in the backend. But both the workers and the
backend need to initialize their own as_whichplan to the next subplan.

>
> There is no pa_num_worker now, so probably this should get updated. Per comment
> we should also get rid of SpinLockAcquire() and SpinLockRelease()?
> + *        purpose. The spinlock is used so that it does not change the
> + *        pa_num_workers field while workers are choosing the next node.
Will do this.

>
> BTW, sa_finished seems to be a misnomor. The plan is not finished yet, but it
> wants no more workers. So, should it be renamed as sa_no_new_workers or
> something like that?

Actually in this context, "finished" means "we are done with this subplan".

>
> In parallel_append_next() we shouldn't need to call goto_next_plan() twice. If
> the plan indicated by pa_next_plan is finished, all the plans must have
> finished. This should be true if we set pa_next_plan to 0 at the time of
> initialization. Any worker picking up pa_next_plan will set it to the next
> valid plan. So the next worker asking for plan should pick pa_next_plan and
> set it to the next one and so on.

The current patch does not call it twice, but I might have overlooked
something. Let me know if I have.

>
> I am wonding whether goto_next_plan() can be simplified as some module
> arithmatic e.g. (whichplan - first_plan)++ % (last_plan - first_plan)
> + first_plan.

Hmm. IMHO that seems like too much calculation for just shifting to the next array element.



Re: [HACKERS] Parallel Append implementation

From
Peter Geoghegan
Date:
On Fri, Mar 17, 2017 at 10:12 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Yeah, I was in double minds as to whether to do the
> copy-to-array-and-qsort thing, or should just write the same number of
> lines of code to manually do an insertion sort. Actually I was
> searching if we already have a linked list sort, but it seems we don't
> have. Will do the qsort now since it would be faster.

relcache.c does an insertion sort with a list of OIDs. See insert_ordered_oid().


-- 
Peter Geoghegan



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
>> 2. Next, estimate the cost of the non-partial paths.  To do this, make
>> an array of Cost of that length and initialize all the elements to
>> zero, then add the total cost of each non-partial plan in turn to the
>> element of the array with the smallest cost, and then take the maximum
>> of the array elements as the total cost of the non-partial plans.  Add
>> this to the result from step 1 to get the total cost.
>
> So with costs (8, 5, 2), add 8 and 5 to 2 so that it becomes (8, 5,
> 15) , and so the max is 15 ? I surely am misinterpreting this.
>
> Actually, I couldn't come up with a general formula to find the
> non-partial paths total cost, given the per-subplan cost and number of
> workers. I mean, we can manually find out the total cost, but turning
> it into a formula seems quite involved. We can even do a dry-run of
> workers consuming each of the subplan slots and find the total time
> time units taken, but finding some approximation seemed ok.
>
> For e.g. we can manually find total time units taken for following :
> costs (8, 2, 2, 2) with 2 workers : 8
> costs (6, 6, 4, 1) with 2 workers : 10.
> costs (6, 6, 4, 1) with 3 workers : 6.
>
> But coming up with an alogrithm or a formula didn't look worth. So I
> just did the total cost and divided it by workers. And besides that,
> took the maximum of the 1st plan cost (since it is the highest) and
> the average of total. I understand it would be too much approximation
> for some cases, but another thing is, we don't know how to take into
> account some of the workers shifting to partial workers. So the shift
> may be quite fuzzy since all workers may not shift to partial plans
> together.


For non-partial paths, I did some comparison between the actual cost and
the cost obtained by adding the per-subpath figures and dividing by the
number of workers. In the cases below, they do not differ significantly.
Here are the figures :

Case 1 :
Cost units of subpaths : 20 16 10 8 3 1.
Workers : 3
Actual total time to finish all workers : 20.
total/workers: 16.

Case 2 :
Cost units of subpaths : 20 16 10 8 3 1.
Workers : 2
Actual total time to finish all workers : 34.
total/workers: 32.

Case 3 :
Cost units of subpaths : 5 3 3 3 3
Workers : 3
Actual total time to finish all workers : 6
total/workers: 5.6

One more thing observed is that, in all of the above cases, all the
workers more or less finish at about the same time.

So this method seems to compare well with the actual cost. The average
comes out a little less than the actual. But I think what I need to
correct in the patch is to calculate separate per-worker costs for the
non-partial and the partial subpaths, and add them. That will give us
the per-worker total cost, which is what a partial Append's cost should
be. I had just added all the costs together.

There can be some extreme cases such as (5, 1, 1, 1, 1, 1) with 6
workers, where it will take at least 5 units, but the average is 2. For
that we can clamp the cost up to the first path's cost, so that, for
example, it does not go lower than 5 in this case.

Actually I have devised an algorithm to calculate the exact time at
which all workers finish the non-partial subplans. But I think it does
not make sense to apply it, because it may be too much calculation cost
for hundreds of paths.

But anyways, for archival purpose, here is the algorithm :

Per-subpath cost : 20 16 10 8 3 1, with 3 workers.
After 10 units (this is minimum of 20, 16, 10), the times remaining are :
10  6  0 8 3 1
After 6 units (minimum of 10, 06, 08), the times remaining are :
4  0  0 2 3 1
After 2 units (minimum of 4, 2, 3), the times remaining are :
 2  0  0 0 1 1
After 1 units (minimum of 2, 1, 1), the times remaining are :
 1  0  0 0 0 0
After 1 units (minimum of 1, 0 , 0), the times remaining are :
 0  0  0 0 0 0
Now add up above time chunks : 10 + 6 + 2 + 1 + 1 = 20
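
(For clarity, here is a small C sketch of that dry-run, purely for
illustration; it is not code from the patch, and the function name and
the use of malloc are mine. costs[] is assumed sorted in descending
order.)

#include <stdlib.h>

double
simulate_nonpartial_cost(const double *costs, int npaths, int nworkers)
{
    double     *rem = malloc(npaths * sizeof(double));
    double      total = 0.0;
    int         started = (npaths < nworkers) ? npaths : nworkers;
    int         i;

    for (i = 0; i < npaths; i++)
        rem[i] = costs[i];

    for (;;)
    {
        double      min_rem = -1.0;
        int         in_progress = started;
        int         nfinished = 0;

        /* smallest remaining time among the subpaths being worked on */
        for (i = 0; i < in_progress; i++)
            if (rem[i] > 0 && (min_rem < 0 || rem[i] < min_rem))
                min_rem = rem[i];

        if (min_rem < 0)
            break;              /* everything has finished */

        total += min_rem;       /* let that much time elapse */

        for (i = 0; i < in_progress; i++)
        {
            if (rem[i] > 0)
            {
                rem[i] -= min_rem;
                if (rem[i] <= 0)
                    nfinished++;
            }
        }

        /* each freed worker picks up one not-yet-started subpath, if any */
        while (nfinished-- > 0 && started < npaths)
            started++;
    }

    free(rem);
    return total;       /* {20, 16, 10, 8, 3, 1} with 3 workers => 20 */
}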

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Mar 17, 2017 at 1:12 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> - The substantive changes in add_paths_to_append_rel don't look right
>> either.  It's not clear why accumulate_partialappend_subpath is
>> getting called even in the non-enable_parallelappend case.  I don't
>> think the logic for the case where we're not generating a parallel
>> append path needs to change at all.
>
> When accumulate_partialappend_subpath() is called for a childrel with
> a partial path, it works just like accumulate_append_subpath() when
> enable_parallelappend is false. That's why, for partial child path,
> the same function is called irrespective of parallel-append or
> non-parallel-append case. May be mentioning this in comments should
> suffice here ?

I don't get it.  If you can get the same effect by changing something
or not changing it, presumably it'd be better to not change it.   We
try not to change things just because we can; the change should be an
improvement in some way.

>> - When parallel append is enabled, I think add_paths_to_append_rel
>> should still consider all the same paths that it does today, plus one
>> extra.  The new path is a parallel append path where each subpath is
>> the cheapest subpath for that childrel, whether partial or
>> non-partial.  If !enable_parallelappend, or if all of the cheapest
>> subpaths are partial, then skip this.  (If all the cheapest subpaths
>> are non-partial, it's still potentially useful.)
>
> In case of all-partial childrels, the paths are *exactly* same as
> those that would have been created for enable_parallelappend=off. The
> extra path is there for enable_parallelappend=on only when one or more
> of the child rels do not have partial paths. Does this make sense ?

No, I don't think so.  Imagine that we have three children, A, B, and
C.  The cheapest partial paths have costs of 10,000 each.  A, however,
has a non-partial path with a cost of 1,000.  Even though A has a
partial path, we still want to consider a parallel append using the
non-partial path because it figures to be hugely faster.

> The
> Path->total_cost for a partial path is *always* per-worker cost, right
> ? Just want to confirm this assumption of mine.

Yes.

>> Also, it
>> could be smarter about what happens with the costs of non-partial
>> paths. I suggest the following algorithm instead.
>>
>> 1. Add up all the costs of the partial paths.  Those contribute
>> directly to the final cost of the Append.  This ignores the fact that
>> the Append may escalate the parallel degree, but I think we should
>> just ignore that problem for now, because we have no real way of
>> knowing what the impact of that is going to be.
>
> I wanted to take into account per-subpath parallel_workers for total
> cost of Append. Suppose the partial subpaths have per worker total
> costs (3, 3, 3) and their parallel_workers are (2, 8, 4), with 2
> Append workers available. So according to what you say, the total cost
> is 9. With per-subplan parallel_workers taken into account, total cost
> = (3*2 + 3*8 + 3*4)/2 = 21.

But that case never happens, because the parallel workers for the
append is always at least as large as the number of workers for any
single child.

> May be I didn't follow exactly what you suggested. Your logic is not
> taking into account number of workers ? I am assuming you are
> calculating per-worker total cost here.
>>
>> 2. Next, estimate the cost of the non-partial paths.  To do this, make
>> an array of Cost of that length and initialize all the elements to
>> zero, then add the total cost of each non-partial plan in turn to the
>> element of the array with the smallest cost, and then take the maximum
>> of the array elements as the total cost of the non-partial plans.  Add
>> this to the result from step 1 to get the total cost.
>
> So with costs (8, 5, 2), add 8 and 5 to 2 so that it becomes (8, 5,
> 15) , and so the max is 15 ? I surely am misinterpreting this.

No.  If you have costs 8, 5, and 2 and only one process, cost is 15.
If you have two processes then for costing purposes you assume worker
1 will execute the first path (cost 8) and worker 2 will execute the
other two (cost 5 + 2 = 7), so the total cost is 8.  If you have three
workers, the cost will still be 8, because there's no way to finish
the cost-8 path in less than 8 units of work.

>> - In get_append_num_workers, instead of the complicated formula with
>> log() and 0.693, just add the list lengths and call fls() on the
>> result.  Integer arithmetic FTW!
>
> Yeah fls() could be used. BTW I just found that costsize.c already has
> this defined in the same way I did:
> #define LOG2(x)  (log(x) / 0.693147180559945)
> May be we need to shift this to some common header file.

LOG2() would make sense if you're working with a value represented as
a double, but if you have an integer input, I think fls() is better.
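
For example, the integer-arithmetic version could look roughly like the
sketch below (illustration only; fls() is a BSD-ism, so an open-coded
equivalent is used here, and the function names are made up) :

/* Position of the highest set bit, i.e. floor(log2(x)) + 1 for x > 0. */
int
my_fls(int x)
{
    int         bit = 0;

    while (x > 0)
    {
        bit++;
        x >>= 1;
    }
    return bit;
}

/* fls(number of subpaths), capped at the maximum workers allowed. */
int
append_worker_estimate(int num_partial, int num_nonpartial, int max_workers)
{
    int         nworkers = my_fls(num_partial + num_nonpartial);

    return (nworkers < max_workers) ? nworkers : max_workers;
}

So e.g. 6 subpaths give 3 workers and 100 subpaths give 7, roughly in
line with the log2-based formula.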

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Attached is the updated patch that handles the changes for all the
comments except the cost changes part. Details about the specific
changes are after the cost-related points discussed below.

>> I wanted to take into account per-subpath parallel_workers for total
>> cost of Append. Suppose the partial subpaths have per worker total
>> costs (3, 3, 3) and their parallel_workers are (2, 8, 4), with 2
>> Append workers available. So according to what you say, the total cost
>> is 9. With per-subplan parallel_workers taken into account, total cost
>> = (3*2 + 3*8 + 3*4)/2 = 21.
> But that case never happens, because the parallel workers for the
> append is always at least as large as the number of workers for any
> single child.

Yeah, that's right. I will use this approach for partial paths.


For non-partial paths, I was checking following 3 options :

Option 1. Just take the sum of total non-partial child costs and
divide it by number of workers. It seems to be getting close to the
actual cost.

Option 2. Calculate exact cost by an algorithm which I mentioned
before, which is pasted below for reference :
Per-subpath cost : 20 16 10 8 3 1, with 3 workers.
After 10 time units (this is minimum of first 3 i.e. 20, 16, 10), the
times remaining are :
10  6  0 8 3 1
After 6 units (minimum of 10, 06, 08), the times remaining are :
4  0  0 2 3 1
After 2 units (minimum of 4, 2, 3), the times remaining are :
 2  0  0 0 1 1
After 1 units (minimum of 2, 1, 1), the times remaining are :
 1  0  0 0 0 0
After 1 units (minimum of 1, 0 , 0), the times remaining are :
 0  0  0 0 0 0
Now add up above time chunks : 10 + 6 + 2 + 1 + 1 = 20

Option 3. Get some approximation formula like you suggested. I am also
looking for such a formula; it's just that some things are not clear to
me. The discussion of that is below ...
>>> 2. Next, estimate the cost of the non-partial paths.  To do this, make
>>> an array of Cost of that length and initialize all the elements to
>>> zero, then add the total cost of each non-partial plan in turn to the
>>> element of the array with the smallest cost, and then take the maximum
>>> of the array elements as the total cost of the non-partial plans.  Add
>>> this to the result from step 1 to get the total cost.
>>
>> So with costs (8, 5, 2), add 8 and 5 to 2 so that it becomes (8, 5,
>> 15) , and so the max is 15 ? I surely am misinterpreting this.
> No.  If you have costs 8, 5, and 2 and only one process, cost is 15.
> If you have two processes then for costing purposes you assume worker
> 1 will execute the first path (cost 8) and worker 2 will execute the
> other two (cost 5 + 2 = 7), so the total cost is 8.  If you have three
> workers, the cost will still be 8, because there's no way to finish
> the cost-8 path in less than 8 units of work.

So the part that you suggested about adding each total cost in turn to
the smallest array element; this suggestion applies to only 1 worker,
right ? For more than one worker, are you suggesting using some
algorithm similar to the one I suggested in option 2 above ? If yes, it
would be great if you could describe again how that works for multiple
workers. Or were you suggesting some simple approximate arithmetic that
applies to multiple workers ?
Like I mentioned, I will be happy to get such simple approximation
arithmetic that can be applied to the multiple-worker case. The logic
I suggested in option 2 is something we can keep as the last option.
And option 1 is also an approximation, but we would like to have a
better one. So I wanted to clear up my queries regarding option 3.

----------

Details about all the remaining changes in updated patch are below ...

On 20 March 2017 at 17:29, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Mar 17, 2017 at 1:12 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> - The substantive changes in add_paths_to_append_rel don't look right
>>> either.  It's not clear why accumulate_partialappend_subpath is
>>> getting called even in the non-enable_parallelappend case.  I don't
>>> think the logic for the case where we're not generating a parallel
>>> append path needs to change at all.
>>
>> When accumulate_partialappend_subpath() is called for a childrel with
>> a partial path, it works just like accumulate_append_subpath() when
>> enable_parallelappend is false. That's why, for partial child path,
>> the same function is called irrespective of parallel-append or
>> non-parallel-append case. May be mentioning this in comments should
>> suffice here ?
>
> I don't get it.  If you can get the same effect by changing something
> or not changing it, presumably it'd be better to not change it.   We
> try not to change things just because we can; the change should be an
> improvement in some way.
>
>>> - When parallel append is enabled, I think add_paths_to_append_rel
>>> should still consider all the same paths that it does today, plus one
>>> extra.  The new path is a parallel append path where each subpath is
>>> the cheapest subpath for that childrel, whether partial or
>>> non-partial.  If !enable_parallelappend, or if all of the cheapest
>>> subpaths are partial, then skip this.  (If all the cheapest subpaths
>>> are non-partial, it's still potentially useful.)
>>
>> In case of all-partial childrels, the paths are *exactly* same as
>> those that would have been created for enable_parallelappend=off. The
>> extra path is there for enable_parallelappend=on only when one or more
>> of the child rels do not have partial paths. Does this make sense ?
>
> No, I don't think so.  Imagine that we have three children, A, B, and
> C.  The cheapest partial paths have costs of 10,000 each.  A, however,
> has a non-partial path with a cost of 1,000.  Even though A has a
> partial path, we still want to consider a parallel append using the
> non-partial path because it figures to be hugely faster.

Right. Now that we want to consider both the cheapest partial and the
cheapest non-partial path, I get what you were saying about having an
extra path for parallel_append. I have done all of the above changes.
We now have an extra path for enable_parallelappend=true, besides the
non-parallel partial Append path.

> - You've added a GUC (which is good) but not documented it (which is
> bad) or added it to postgresql.conf.sample (also bad).

Done.

>
> - You've used a loop inside a spinlock-protected critical section,
> which is against project policy.  Use an LWLock; define and document a
> new builtin tranche ID.

Done. Used an LWLock for the parallel append synchronization. But I am
not sure what "document the new builtin tranche ID" means. I didn't
find a README which documents tranche IDs.

For setting pa_finished=true when a partial plan finishes, it was
earlier using a spinlock. Now it does not use any synchronization. It
was actually using the spinlock earlier because there was another field
num_workers, but that is not needed now that num_workers is gone. I
considered whether to use the atomic read and write API in atomics.c
for pa_finished, but from what I understand, a plain read/write is
already atomic. We require the atomics API only for compound operations
like increment, exchange, etc.

>
> - The comment for pa_finished claims that it is the number of workers
> executing the subplan, but it's a bool, not a count; I think this
> comment is just out of date.

Done.

>
> - paths_insert_sorted_by_cost() is a hand-coded insertion sort.  Can't
> we find a way to use qsort() for this instead of hand-coding a slower
> algorithm?  I think we could just create an array of the right length,
> stick each path into it from add_paths_to_append_rel, and then qsort()
> the array based on <is-partial, total-cost>.  Then the result can be
> turned into a list.

Now added a new function list_qsort() in list.c so that it can be
reused in the future as well.
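
The comparator would be along these lines (just a sketch; the struct,
names, and the exact sort direction are assumptions for illustration,
not the patch's actual definitions) :

#include <stdbool.h>
#include <stdlib.h>

typedef struct PathSortItem
{
    bool        is_partial;
    double      total_cost;
    void       *path;           /* the Path itself, opaque here */
} PathSortItem;

/*
 * Order by <is-partial, total-cost>: non-partial paths first, and within
 * each group the costlier path first (an assumption here; the patch may
 * choose the opposite direction).
 */
int
append_path_cmp(const void *a, const void *b)
{
    const PathSortItem *pa = (const PathSortItem *) a;
    const PathSortItem *pb = (const PathSortItem *) b;

    if (pa->is_partial != pb->is_partial)
        return pa->is_partial ? 1 : -1;
    if (pa->total_cost > pb->total_cost)
        return -1;
    if (pa->total_cost < pb->total_cost)
        return 1;
    return 0;
}

and then qsort(items, nitems, sizeof(PathSortItem), append_path_cmp),
after which the array can be turned back into a list.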

>
> - Maybe the new helper functions in nodeAppend.c could get names
> starting with exec_append_, to match the style of
> exec_append_initialize_next().

Done.

>
> - There's a superfluous whitespace change in add_paths_to_append_rel.

Didn't find exactly which, but I guess the attached latest patch does
not have it.


>>> - In get_append_num_workers, instead of the complicated formula with
>>> log() and 0.693, just add the list lengths and call fls() on the
>>> result.  Integer arithmetic FTW!
>>
>> Yeah fls() could be used. BTW I just found that costsize.c already has
>> this defined in the same way I did:
>> #define LOG2(x)  (log(x) / 0.693147180559945)
>> May be we need to shift this to some common header file.
>
> LOG2() would make sense if you're working with a value represented as
> a double, but if you have an integer input, I think fls() is better.

Used fls() now.


Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Attached is the updated patch that handles the changes for all the
> comments except the cost changes part. Details about the specific
> changes are after the cost-related points discussed below.
>
> For non-partial paths, I was checking following 3 options :
>
> Option 1. Just take the sum of total non-partial child costs and
> divide it by number of workers. It seems to be getting close to the
> actual cost.

If the costs for all children are about equal, then that works fine.
But when they are very unequal, then it's highly misleading.

> Option 2. Calculate exact cost by an algorithm which I mentioned
> before, which is pasted below for reference :
> Per-subpath cost : 20 16 10 8 3 1, with 3 workers.
> After 10 time units (this is minimum of first 3 i.e. 20, 16, 10), the
> times remaining are :
> 10  6  0 8 3 1
> After 6 units (minimum of 10, 06, 08), the times remaining are :
> 4  0  0 2 3 1
> After 2 units (minimum of 4, 2, 3), the times remaining are :
>  2  0  0 0 1 1
> After 1 units (minimum of 2, 1, 1), the times remaining are :
>  1  0  0 0 0 0
> After 1 units (minimum of 1, 0 , 0), the times remaining are :
>  0  0  0 0 0 0
> Now add up above time chunks : 10 + 6 + 2 + 1 + 1 = 20

This gives the same answer as what I was proposing, but I believe it's
more complicated to compute.  The way my proposal would work in this
case is that we would start with an array C[3] (since there are three
workers), with all entries 0.  Logically C[i] represents the amount of
work to be performed by worker i.  We add each path in turn to the
worker whose array entry is currently smallest; in the case of a tie,
just pick the first such entry.

So in your example we do this:

C[0] += 20;
C[1] += 16;
C[2] += 10;
/* C[2] is smaller than C[0] or C[1] at this point, so we add the next
path to C[2] */
C[2] += 8;
/* after the previous line, C[1] is now the smallest, so add to that
entry next */
C[1] += 3;
/* now we've got C[0] = 20, C[1] = 19, C[2] = 18, so add to C[2] */
C[2] += 1;
/* final result: C[0] = 20, C[1] = 19, C[2] = 19 */

Now we just take the highest entry that appears in any array, which in
this case is C[0], as the total cost.
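
In C, this is just a few lines, something like the sketch below (for
illustration only; the function name is made up, and the actual costing
function in the patch may differ in details) :

#include <stdlib.h>

/*
 * Greedy assignment: give each non-partial subpath, costliest first, to
 * the worker with the least work so far; the most loaded worker's total
 * is the estimated cost.  costs[] is sorted in descending order.
 */
double
nonpartial_append_cost(const double *costs, int npaths, int nworkers)
{
    double     *work = calloc(nworkers, sizeof(double));
    double      max_cost = 0.0;
    int         i, w;

    for (i = 0; i < npaths; i++)
    {
        int         min_w = 0;

        for (w = 1; w < nworkers; w++)
            if (work[w] < work[min_w])
                min_w = w;
        work[min_w] += costs[i];
    }

    for (w = 0; w < nworkers; w++)
        if (work[w] > max_cost)
            max_cost = work[w];

    free(work);
    return max_cost;            /* {20, 16, 10, 8, 3, 1}, 3 workers => 20 */
}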

Comments on this latest version:

In my previous review, I said that you should "define and document a
new builtin tranche ID"; you did the first but not the second.  See
the table in monitoring.sgml.

Definition of exec_append_goto_next_plan should have a line break
after the return type, per usual PostgreSQL style rules.

-     * initialize to scan first subplan
+     * In case it's a sequential Append, initialize to scan first subplan.

This comment is confusing because the code is executed whether it's
parallel or not.  I think it might be better to write something like
"initialize to scan first subplan (but note that we'll override this
later in the case of a parallel append)"
        /*
+         * Check if we are already finished plans from parallel append. This
+         * can happen if all the subplans are finished when this worker
+         * has not even started returning tuples.
+         */
+        if (node->as_padesc && node->as_whichplan == PA_INVALID_PLAN)
+            return ExecClearTuple(node->ps.ps_ResultTupleSlot);

There seems to be no reason why this couldn't be hoisted out of the
loop.  Actually, I think Ashutosh pointed this out before, but I
didn't understand at that time what his point was.  Looking back, I
see that he also pointed out that the as_padesc test isn't necessary,
which is also true.

+        if (node->as_padesc)
+            node->as_padesc->pa_finished[node->as_whichplan] = true;

I think you should move this logic inside exec_append_parallel_next.
That would avoid testing node->pa_desc an extra time for non-parallel
append.  I note that the comment doesn't explain why it's safe to do
this without taking the lock.  I think we could consider doing it with
the lock held, but it probably is safe, because we're only setting it
from false to true.  If someone else does the same thing, that won't
hurt anything, and if someone else fails to see our update, then the
worst-case scenario is that they'll try to execute a plan that has no
chance of returning any more rows.  That's not so bad.  Actually,
looking further, you do have a comment explaining that, but it's in
exec_append_parallel_next() where the value is used, rather than here.

+    memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
+
+    shm_toc_insert(pcxt->toc, node->ps.plan->plan_node_id, padesc);
+    node->as_padesc = padesc;

Putting the shm_toc_insert call after we fully initialize the
structure seems better than putting it after we've done some of the
initialization and before we've done the rest.

+    /* Choose the optimal subplan to be executed. */

I think the word "first" would be more accurate than "optimal".  We
can only hope to pick the optimal one, but whichever one we pick is
definitely the one we're executing first!

I think the loop in exec_append_parallel_next() is a bit confusing.
It has three exit conditions, one checked at the top of the loop and
two other ways to exit via break statements.  Sometimes it exits
because whichplan == PA_INVALID_PLAN was set by
exec_append_goto_next_plan(), and other times it exits because
whichplan == initial_plan and then it sets whichplan ==
PA_INVALID_PLAN itself.  I feel like this whole function could be
written more simply somehow.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 23 March 2017 at 05:55, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> Attached is the updated patch that handles the changes for all the
>> comments except the cost changes part. Details about the specific
>> changes are after the cost-related points discussed below.
>>
>> For non-partial paths, I was checking following 3 options :
>>
>> Option 1. Just take the sum of total non-partial child costs and
>> divide it by number of workers. It seems to be getting close to the
>> actual cost.
>
> If the costs for all children are about equal, then that works fine.
> But when they are very unequal, then it's highly misleading.
>
>> Option 2. Calculate exact cost by an algorithm which I mentioned
>> before, which is pasted below for reference :
>> Per-subpath cost : 20 16 10 8 3 1, with 3 workers.
>> After 10 time units (this is minimum of first 3 i.e. 20, 16, 10), the
>> times remaining are :
>> 10  6  0 8 3 1
>> After 6 units (minimum of 10, 06, 08), the times remaining are :
>> 4  0  0 2 3 1
>> After 2 units (minimum of 4, 2, 3), the times remaining are :
>>  2  0  0 0 1 1
>> After 1 units (minimum of 2, 1, 1), the times remaining are :
>>  1  0  0 0 0 0
>> After 1 units (minimum of 1, 0 , 0), the times remaining are :
>>  0  0  0 0 0 0
>> Now add up above time chunks : 10 + 6 + 2 + 1 + 1 = 20
>

> This gives the same answer as what I was proposing

Ah I see.

> but I believe it's more complicated to compute.
Yes, a bit, particularly because in my algorithm I would have to do
'n' subtractions each time in the case of 'n' workers. But it looked
more natural because it follows exactly the way we would calculate it
manually.

> The way my proposal would work in this
> case is that we would start with an array C[3] (since there are three
> workers), with all entries 0.  Logically C[i] represents the amount of
> work to be performed by worker i.  We add each path in turn to the
> worker whose array entry is currently smallest; in the case of a tie,
> just pick the first such entry.
>
> So in your example we do this:
>
> C[0] += 20;
> C[1] += 16;
> C[2] += 10;
> /* C[2] is smaller than C[0] or C[1] at this point, so we add the next
> path to C[2] */
> C[2] += 8;
> /* after the previous line, C[1] is now the smallest, so add to that
> entry next */
> C[1] += 3;
> /* now we've got C[0] = 20, C[1] = 19, C[2] = 18, so add to C[2] */
> C[2] += 1;
> /* final result: C[0] = 20, C[1] = 19, C[2] = 19 */
>
> Now we just take the highest entry that appears in any array, which in
> this case is C[0], as the total cost.

Wow. The way your final result exactly tallies with my algorithm's
result is very interesting. This looks like some maths or computer
science theory that I am not aware of.

I am currently coding the algorithm using your method. Meanwhile
attached is a patch that takes care of your other comments, details of
which are below...

>
> In my previous review, I said that you should "define and document a
> new builtin tranche ID"; you did the first but not the second.  See
> the table in monitoring.sgml.

Yeah, I tried to search for how TBM did it in the source, but I guess I
didn't run the "git grep" commands correctly, so the results did not
include monitoring.sgml, and I thought maybe you meant something else
by "document".

Added changes in monitoring.sgml now.

>
> Definition of exec_append_goto_next_plan should have a line break
> after the return type, per usual PostgreSQL style rules.

Oops. Done.

>
> -     * initialize to scan first subplan
> +     * In case it's a sequential Append, initialize to scan first subplan.
>
> This comment is confusing because the code is executed whether it's
> parallel or not.  I think it might be better to write something like
> "initialize to scan first subplan (but note that we'll override this
> later in the case of a parallel append)"
Done.

>
>          /*
> +         * Check if we are already finished plans from parallel append. This
> +         * can happen if all the subplans are finished when this worker
> +         * has not even started returning tuples.
> +         */
> +        if (node->as_padesc && node->as_whichplan == PA_INVALID_PLAN)
> +            return ExecClearTuple(node->ps.ps_ResultTupleSlot);
>
> There seems to be no reason why this couldn't be hoisted out of the
> loop.  Actually, I think Ashutosh pointed this out before, but I
> didn't understand at that time what his point was.  Looking back, I
> see that he also pointed out that the as_padesc test isn't necessary,
> which is also true.

I am assuming both your and Ashutosh's concern is that this check
will be executed for *each* tuple returned, which needs to be
avoided. Actually, just moving it out of the loop is not going to
solve the runs-for-each-tuple issue; it will still execute for each
tuple. But on second thought, I now agree it can be taken out of the
loop anyway, not to solve the per-tuple issue, but because it need
not run on each iteration of the loop, since that loop is there to go
to the next subplan.

When a worker tries to choose a plan to execute at the very beginning
(i.e. in ExecAppendInitializeWorker()), it sometimes finds there is no
plan to execute, because the other workers have already taken all of
them and they are either already finished or they are all non-partial
plans. In short, pa_finished = true for all subplans, so as_whichplan
has to be PA_INVALID_PLAN. To get rid of the extra check in
ExecAppend(), if all plans are finished, ExecAppendInitializeWorker()
could very well assign as_whichplan to a partial plan which has already
finished, so that ExecAppend() would execute this finished subplan and
just return NULL. But if all plans are non-partial, we cannot do that.

Now, when ExecAppend() is called, there is no way to know whether this
is the first time ExecProcNode() is executed or not. So we have to
keep on checking the node->as_whichplan == PA_INVALID_PLAN condition.


My earlier response to Ashutosh's feedback on this same point is
pasted below, where some possible improvements are discussed :

The way ExecProcNode() gets called, there is no different special code
that gets called instead of ExecProcNode() when a tuple is fetched for
the first time. I mean, we cannot prevent ExecProcNode() from getting
called when as_whichplan is invalid right from the beginning.

One thing we can do is : have a special slot in AppendState->as_plan[]
which has some dummy execution node that just returns a NULL tuple, and
initially make as_whichplan point to this slot. But I think it is not
worth doing this.

We can instead reduce the if condition to:
if (node->as_whichplan == PA_INVALID_PLAN)
{
    Assert(node->as_padesc != NULL);
    return ExecClearTuple(node->ps.ps_ResultTupleSlot);
}
BTW, the loop which you mentioned that returns tuples ... the loop is
not for returning tuples; it is for iterating to the next subplan. Even
if we take the condition out and keep it at the beginning of
ExecAppend(), the issue will remain.

>
> +        if (node->as_padesc)
> +            node->as_padesc->pa_finished[node->as_whichplan] = true;
>
> I think you should move this logic inside exec_append_parallel_next.
> That would avoid testing node->pa_desc an extra time for non-parallel
> append.

Actually exec_append_parallel_next() is called at other places also,
which is why we cannot set pa_finished to true inside
exec_append_parallel_next().

But I have done the changes in another way. I have taken
exec_append_parallel_next() out of exec_append_initialize_next(), and
put two different conditional code blocks in ExecAppend(): one calls
set_finished() followed by exec_append_parallel_next(), and the other
calls exec_append_initialize_next() (now renamed to
exec_append_seq_next()).

But one thing to note is that this condition is not executed for each
tuple. It runs only while moving to the next subplan.

> I note that the comment doesn't explain why it's safe to do
> this without taking the lock.  I think we could consider doing it with
> the lock held, but it probably is safe, because we're only setting it
> from false to true.  If someone else does the same thing, that won't
> hurt anything, and if someone else fails to see our update, then the
> worst-case scenario is that they'll try to execute a plan that has no
> chance of returning any more rows.  That's not so bad.  Actually,
> looking further, you do have a comment explaining that, but it's in
> exec_append_parallel_next() where the value is used, rather than here.
Yes, right.

>
> +    memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
> +
> +    shm_toc_insert(pcxt->toc, node->ps.plan->plan_node_id, padesc);
> +    node->as_padesc = padesc;
>
> Putting the shm_toc_insert call after we fully initialize the
> structure seems better than putting it after we've done some of the
> initialization and before we've done the rest.

Done. Also found out that I was memset()ing only pa_finished[]. Now
there is a memset for the whole ParallelAppendDesc structure.

>
> +    /* Choose the optimal subplan to be executed. */
>
> I think the word "first" would be more accurate than "optimal".  We
> can only hope to pick the optimal one, but whichever one we pick is
> definitely the one we're executing first!
Done.

>
> I think the loop in exec_append_parallel_next() is a bit confusing.
> It has three exit conditions, one checked at the top of the loop and
> two other ways to exit via break statements.  Sometimes it exits
> because whichplan == PA_INVALID_PLAN was set by
> exec_append_goto_next_plan(), and other times it exits because
> whichplan == initial_plan

Yeah, we cannot move the (whichplan == initial_plan) check to the top
of the for(;;) loop, because initially whichplan is initial_plan, and
we have to execute the loop at least once (unless whichplan is invalid).
And we cannot move the loop condition (whichplan != PA_INVALID_PLAN) to
the bottom, because whichplan can be invalid right at the beginning if
pa_next_plan itself is PA_INVALID_PLAN.

> and then it sets whichplan == PA_INVALID_PLAN itself.
It sets whichplan to PA_INVALID_PLAN only when it does not find any
next plan to execute. This is essential, particularly because initially,
when ExecAppendInitialize[Worker/DSM]() is called, it cannot set
as_whichplan to any valid value.

> I feel like this whole function could be written more simply somehow.
Yeah, the main reason it is a bit complicated is that we are simulating
a circular array structure, and that too with an optimization that lets
us skip the finished non-partial plans while wrapping around to the
next plan in the circular array. I have tried to add a couple more
comments.

Also renamed exec_append_goto_next_plan() to
exec_append_get_next_plan(), since it is not actually shifting any
counter; it just returns what the next counter would be.
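
(Roughly, the wrap-around idea is something like the sketch below; this
is an illustration only, not the patch's actual function, and the names
and details here are assumptions. The subplans are assumed to be laid
out with the non-partial ones before first_partial_plan, and each
non-partial subplan is taken by exactly one worker and then marked
finished, so once we wrap past the end of the array there is no point
scanning the non-partial range again.)

int
append_next_plan(int whichplan, int first_partial_plan, int nplans)
{
    int         next = whichplan + 1;

    if (next >= nplans)
        next = first_partial_plan;  /* wrap, skipping non-partial plans */

    return next;    /* caller still checks pa_finished[next] and stops
                     * when it arrives back where it started */
}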


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 23 March 2017 at 16:26, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 23 March 2017 at 05:55, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> Attached is the updated patch that handles the changes for all the
>>> comments except the cost changes part. Details about the specific
>>> changes are after the cost-related points discussed below.
>>>
>>> For non-partial paths, I was checking following 3 options :
>>>
>>> Option 1. Just take the sum of total non-partial child costs and
>>> divide it by number of workers. It seems to be getting close to the
>>> actual cost.
>>
>> If the costs for all children are about equal, then that works fine.
>> But when they are very unequal, then it's highly misleading.
>>
>>> Option 2. Calculate exact cost by an algorithm which I mentioned
>>> before, which is pasted below for reference :
>>> Per-subpath cost : 20 16 10 8 3 1, with 3 workers.
>>> After 10 time units (this is minimum of first 3 i.e. 20, 16, 10), the
>>> times remaining are :
>>> 10  6  0 8 3 1
>>> After 6 units (minimum of 10, 06, 08), the times remaining are :
>>> 4  0  0 2 3 1
>>> After 2 units (minimum of 4, 2, 3), the times remaining are :
>>>  2  0  0 0 1 1
>>> After 1 units (minimum of 2, 1, 1), the times remaining are :
>>>  1  0  0 0 0 0
>>> After 1 units (minimum of 1, 0 , 0), the times remaining are :
>>>  0  0  0 0 0 0
>>> Now add up above time chunks : 10 + 6 + 2 + 1 + 1 = 20
>>
>
>> This gives the same answer as what I was proposing
>
> Ah I see.
>
>> but I believe it's more complicated to compute.
> Yes a bit, particularly because in my algorithm, I would have to do
> 'n' subtractions each time, in case of 'n' workers. But it looked more
> natural because it follows exactly the way we manually calculate.
>
>> The way my proposal would work in this
>> case is that we would start with an array C[3] (since there are three
>> workers), with all entries 0.  Logically C[i] represents the amount of
>> work to be performed by worker i.  We add each path in turn to the
>> worker whose array entry is currently smallest; in the case of a tie,
>> just pick the first such entry.
>>
>> So in your example we do this:
>>
>> C[0] += 20;
>> C[1] += 16;
>> C[2] += 10;
>> /* C[2] is smaller than C[0] or C[1] at this point, so we add the next
>> path to C[2] */
>> C[2] += 8;
>> /* after the previous line, C[1] is now the smallest, so add to that
>> entry next */
>> C[1] += 3;
>> /* now we've got C[0] = 20, C[1] = 19, C[2] = 18, so add to C[2] */
>> C[2] += 1;
>> /* final result: C[0] = 20, C[1] = 19, C[2] = 19 */
>>
>> Now we just take the highest entry that appears in any array, which in
>> this case is C[0], as the total cost.
>
> Wow. The way your final result exactly tallies with my algorithm's
> result is very interesting. This looks like some maths or computer
> science theory that I am not aware of.
>
> I am currently coding the algorithm using your method.

While I was coding this, I was considering whether Path->rows should
also be calculated the way total cost is, i.e. separately for the
non-partial subpaths and the partial subpaths. I think for rows we can
just take total_rows divided by the number of workers for non-partial
paths, and this approximation should suffice. It looks odd to treat
rows with the same algorithm we chose for the total cost of non-partial
paths.

Meanwhile, attached is a WIP patch v10. The only change in this patch
w.r.t. the last patch (v9) is that this one has a new function defined,
append_nonpartial_cost(). Just sending this to show how the algorithm
looks; it isn't called yet.
Attachment

Re: [HACKERS] Parallel Append implementation

From
Rajkumar Raghuwanshi
Date:
On Fri, Mar 24, 2017 at 12:38 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Meanwhile, attached is a WIP patch v10. The only change in this patch
> w.r.t. the last patch (v9) is that this one has a new function defined,
> append_nonpartial_cost(). Just sending this to show how the algorithm
> looks; it isn't called yet.
>

Hi,

I have applied the patch on the latest pg sources (on commit
457a4448732881b5008f7a3bcca76fc299075ac3). configure and make all
install ran successfully, but initdb failed with the below error.

[edb@localhost bin]$ ./initdb -D data
The files belonging to this database system will be owned by user "edb".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory data ... ok
creating subdirectories ... ok
selecting default max_connections ... sh: line 1:  3106 Aborted        (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=100 -c shared_buffers=1000 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
sh: line 1:  3112 Aborted                 (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=50 -c shared_buffers=500 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
sh: line 1:  3115 Aborted                 (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=40 -c shared_buffers=400 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
sh: line 1:  3118 Aborted                 (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=30 -c shared_buffers=300 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
sh: line 1:  3121 Aborted                 (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=20 -c shared_buffers=200 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
sh: line 1:  3124 Aborted                 (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=10 -c shared_buffers=100 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
10
selecting default shared_buffers ... sh: line 1:  3127 Aborted       (core dumped)
"/home/edb/WORKDB/PG3/postgresql/inst/bin/postgres" --boot -x0 -F -c
max_connections=10 -c shared_buffers=16384 -c
dynamic_shared_memory_type=none < "/dev/null" > "/dev/null" 2>&1
400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... TRAP:
FailedAssertion("!(LWLockTranchesAllocated >=
LWTRANCHE_FIRST_USER_DEFINED)", File: "lwlock.c", Line: 501)
child process was terminated by signal 6: Aborted
initdb: removing data directory "data"

[edb@localhost bin]$

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 24 March 2017 at 13:11, Rajkumar Raghuwanshi
<rajkumar.raghuwanshi@enterprisedb.com> wrote:
> I have applied the patch on the latest pg sources (on commit
> 457a4448732881b5008f7a3bcca76fc299075ac3). configure and make all
> install ran successfully, but initdb failed with the below error.

> FailedAssertion("!(LWLockTranchesAllocated >=
> LWTRANCHE_FIRST_USER_DEFINED)", File: "lwlock.c", Line: 501)

Thanks for reporting, Rajkumar.

With the new PARALLEL_APPEND tranche ID, LWTRANCHE_FIRST_USER_DEFINED
value has crossed the value 64. So we need to increase the initial
size of LWLockTrancheArray from 64 to 128. Attached is the updated
patch.
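
(For reference, the change is essentially just bumping the initial
allocation in lwlock.c, something along these lines; the surrounding
context is quoted from memory, so treat it as an approximation :)

-        LWLockTranchesAllocated = 64;
+        LWLockTranchesAllocated = 128;

so that the Assert(LWLockTranchesAllocated >= LWTRANCHE_FIRST_USER_DEFINED)
seen in the report above holds again.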


Attachment

Re: Parallel Append implementation

From
Amit Khandekar
Date:
On 24 March 2017 at 00:38, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 23 March 2017 at 16:26, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 23 March 2017 at 05:55, Robert Haas <robertmhaas@gmail.com> wrote:
>>>
>>> So in your example we do this:
>>>
>>> C[0] += 20;
>>> C[1] += 16;
>>> C[2] += 10;
>>> /* C[2] is smaller than C[0] or C[1] at this point, so we add the next
>>> path to C[2] */
>>> C[2] += 8;
>>> /* after the previous line, C[1] is now the smallest, so add to that
>>> entry next */
>>> C[1] += 3;
>>> /* now we've got C[0] = 20, C[1] = 19, C[2] = 18, so add to C[2] */
>>> C[2] += 1;
>>> /* final result: C[0] = 20, C[1] = 19, C[2] = 19 */
>>>
>>> Now we just take the highest entry that appears in any array, which in
>>> this case is C[0], as the total cost.
>>
>> Wow. The way your final result exactly tallies with my algorithm
>> result is very interesting. This looks like some maths or computer
>> science theory that I am not aware.
>>
>> I am currently coding the algorithm using your method.
>

> While I was coding this, I was considering whether Path->rows should
> also be calculated the way total cost is, i.e. separately for the
> non-partial subpaths and the partial subpaths. I think for rows we can
> just take total_rows divided by the number of workers for non-partial
> paths, and this approximation should suffice. It looks odd to treat
> rows with the same algorithm we chose for the total cost of non-partial
> paths.

Attached is patch v12, where the Path->rows calculation for non-partial
paths is kept separate from the way total cost is calculated for
non-partial paths. rows for non-partial paths is calculated as
total_rows divided by the number of workers, as an approximation. The
rows of the partial paths are then just added one by one.
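
(In other words, roughly the following; a sketch only, with made-up
names, not the patch's code :)

/*
 * rows estimate for the parallel Append: spread the non-partial paths'
 * rows evenly over the workers, and add the partial paths' per-worker
 * rows as they are.
 */
double
append_rows_estimate(double nonpartial_rows_total,
                     double partial_rows_per_worker_sum, int nworkers)
{
    return nonpartial_rows_total / nworkers + partial_rows_per_worker_sum;
}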

>
> Meanwhile, attached is a WIP patch v10. The only change in this patch
> w.r.t. the last patch (v9) is that this one has a new function defined,
> append_nonpartial_cost(). Just sending this to show how the algorithm
> looks; it isn't called yet.

Now append_nonpartial_cost() is used, and it is tested.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company

Attachment

Re: Parallel Append implementation

From
Andres Freund
Date:
Hi,


On 2017-03-24 21:32:57 +0530, Amit Khandekar wrote:
> diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
> index a107545..e9e8676 100644
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -59,9 +59,47 @@
>  
>  #include "executor/execdebug.h"
>  #include "executor/nodeAppend.h"
> +#include "miscadmin.h"
> +#include "optimizer/cost.h"
> +#include "storage/spin.h"
> +
> +/*
> + * Shared state for Parallel Append.
> + *
> + * Each backend participating in a Parallel Append has its own
> + * descriptor in backend-private memory, and those objects all contain
> + * a pointer to this structure.
> + */
> +typedef struct ParallelAppendDescData
> +{
> +    LWLock        pa_lock;        /* mutual exclusion to choose next subplan */
> +    int            pa_first_plan;    /* plan to choose while wrapping around plans */
> +    int            pa_next_plan;    /* next plan to choose by any worker */
> +
> +    /*
> +     * pa_finished : workers currently executing the subplan. A worker which
> +     * finishes a subplan should set pa_finished to true, so that no new
> +     * worker picks this subplan. For non-partial subplan, a worker which picks
> +     * up that subplan should immediately set to true, so as to make sure
> +     * there are no more than 1 worker assigned to this subplan.
> +     */
> +    bool        pa_finished[FLEXIBLE_ARRAY_MEMBER];
> +} ParallelAppendDescData;


> +typedef ParallelAppendDescData *ParallelAppendDesc;

Pointer hiding typedefs make this Andres sad.



> @@ -291,6 +362,276 @@ ExecReScanAppend(AppendState *node)
>          if (subnode->chgParam == NULL)
>              ExecReScan(subnode);
>      }
> +
> +    if (padesc)
> +    {
> +        padesc->pa_first_plan = padesc->pa_next_plan = 0;
> +        memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
> +    }
> +

Is it actually guaranteed that none of the parallel workers are doing
something at that point?


> +/* ----------------------------------------------------------------
> + *        exec_append_parallel_next
> + *
> + *        Determine the next subplan that should be executed. Each worker uses a
> + *        shared next_subplan counter index to start looking for unfinished plan,
> + *        executes the subplan, then shifts ahead this counter to the next
> + *        subplan, so that other workers know which next plan to choose. This
> + *        way, workers choose the subplans in round robin order, and thus they
> + *        get evenly distributed among the subplans.
> + *
> + *        Returns false if and only if all subplans are already finished
> + *        processing.
> + * ----------------------------------------------------------------
> + */
> +static bool
> +exec_append_parallel_next(AppendState *state)
> +{
> +    ParallelAppendDesc padesc = state->as_padesc;
> +    int        whichplan;
> +    int        initial_plan;
> +    int        first_partial_plan = ((Append *)state->ps.plan)->first_partial_plan;
> +    bool    found;
> +
> +    Assert(padesc != NULL);
> +
> +    /* Backward scan is not supported by parallel-aware plans */
> +    Assert(ScanDirectionIsForward(state->ps.state->es_direction));
> +
> +    /* The parallel leader chooses its next subplan differently */
> +    if (!IsParallelWorker())
> +        return exec_append_leader_next(state);

It's a bit weird that the leader's case is so separate, and does
its own lock acquisition.


> +    found = false;
> +    for (whichplan = initial_plan; whichplan != PA_INVALID_PLAN;)
> +    {
> +        /*
> +         * Ignore plans that are already done processing. These also include
> +         * non-partial subplans which have already been taken by a worker.
> +         */
> +        if (!padesc->pa_finished[whichplan])
> +        {
> +            found = true;
> +            break;
> +        }
> +
> +        /*
> +         * Note: There is a chance that just after the child plan node is
> +         * chosen above, some other worker finishes this node and sets
> +         * pa_finished to true. In that case, this worker will go ahead and
> +         * call ExecProcNode(child_node), which will return NULL tuple since it
> +         * is already finished, and then once again this worker will try to
> +         * choose next subplan; but this is ok : it's just an extra
> +         * "choose_next_subplan" operation.
> +         */

IIRC not all node types are safe against being executed again when
they've previously returned NULL.  That's why e.g. nodeMaterial.c
contains the following blurb:

        /*
         * If necessary, try to fetch another row from the subplan.
         *
         * Note: the eof_underlying state variable exists to short-circuit further
         * subplan calls.  It's not optional, unfortunately, because some plan
         * node types are not robust about being called again when they've already
         * returned NULL.
         */



> +    else if (IsA(subpath, MergeAppendPath))
> +    {
> +        MergeAppendPath *mpath = (MergeAppendPath *) subpath;
> +
> +        /*
> +         * If at all MergeAppend is partial, all its child plans have to be
> +         * partial : we don't currently support a mix of partial and
> +         * non-partial MergeAppend subpaths.
> +         */

Why is that?



> +int
> +get_append_num_workers(List *partial_subpaths, List *nonpartial_subpaths)
> +{
> +    ListCell   *lc;
> +    double        log2w;
> +    int            num_workers;
> +    int            max_per_plan_workers;
> +
> +    /*
> +     * log2(number_of_subpaths)+1 formula seems to give an appropriate number of
> +     * workers for Append path either having high number of children (> 100) or
> +     * having all non-partial subpaths or subpaths with 1-2 parallel_workers.
> +     * Whereas, if the subpaths->parallel_workers is high, this formula is not
> +     * suitable, because it does not take into account per-subpath workers.
> +     * For e.g., with workers (2, 8, 8),

That's the per-subplan workers for three subplans?  That's not
necessarily clear.


> the Append workers should be at least
> +     * 8, whereas the formula gives 2. In this case, it seems better to follow
> +     * the method used for calculating parallel_workers of an unpartitioned
> +     * table : log3(table_size). So we treat the UNION query as if the data

Which "UNION query"?


> +     * belongs to a single unpartitioned table, and then derive its workers. So
> +     * it will be : logb(b^w1 + b^w2 + b^w3) where w1, w2.. are per-subplan
> +     * workers and b is some logarithmic base such as 2 or 3. It turns out that
> +     * this evaluates to a value just a bit greater than max(w1,w2, w3). So, we
> +     * just use the maximum of workers formula. But this formula gives too few
> +     * workers when all paths have single worker (meaning they are non-partial)
> +     * For e.g. with workers : (1, 1, 1, 1, 1, 1), it is better to allocate 3
> +     * workers, whereas this method allocates only 1.
> +     * So we use whichever method that gives higher number of workers.
> +     */
> +
> +    /* Get log2(num_subpaths) */
> +    log2w = fls(list_length(partial_subpaths) +
> +                list_length(nonpartial_subpaths));
> +
> +    /* Avoid further calculations if we already crossed max workers limit */
> +    if (max_parallel_workers_per_gather <= log2w + 1)
> +        return max_parallel_workers_per_gather;
> +
> +
> +    /*
> +     * Get the parallel_workers value of the partial subpath having the highest
> +     * parallel_workers.
> +     */
> +    max_per_plan_workers = 1;
> +    foreach(lc, partial_subpaths)
> +    {
> +        Path       *subpath = lfirst(lc);
> +        max_per_plan_workers = Max(max_per_plan_workers,
> +                                   subpath->parallel_workers);
> +    }
> +
> +    /* Choose the higher of the results of the two formulae */
> +    num_workers = rint(Max(log2w, max_per_plan_workers) + 1);
> +
> +    /* In no case use more than max_parallel_workers_per_gather workers. */
> +    num_workers = Min(num_workers, max_parallel_workers_per_gather);
> +
> +    return num_workers;
> +}

Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
to try to compute the minimum total cost across all workers for
1..#max_workers for the plans in an iterative manner?  I.e. try to map
each of the subplans to 1 (if non-partial) or N workers (partial) using
some fitting algorithm (e.g. always choosing the worker(s) that currently
have the least work assigned).  I think the current algorithm doesn't
lead to useful #workers for e.g. cases with a lot of non-partial,
high-startup plans - imo a quite reasonable scenario.


I'm afraid this is too late for v10 - do you agree?

- Andres



Re: Parallel Append implementation

From
Robert Haas
Date:
On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund <andres@anarazel.de> wrote:
> Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
> to try to compute the minimum total cost across all workers for
> 1..#max_workers for the plans in an iterative manner?  I.e. try to map
> each of the subplans to 1 (if non-partial) or N workers (partial) using
> some fitting algorithm (e.g. always choosing the worker(s) that currently
> have the least work assigned).  I think the current algorithm doesn't
> lead to useful #workers for e.g. cases with a lot of non-partial,
> high-startup plans - imo a quite reasonable scenario.

Well, that'd be totally unlike what we do in any other case.  We only
generate a Parallel Seq Scan plan for a given table with one # of
workers, and we cost it based on that.  We have no way to re-cost it
if we changed our mind later about how many workers to use.
Eventually, we should probably have something like what you're
describing here, but in general, not just for this specific case.  One
problem, of course, is to avoid having a larger number of workers
always look better than a smaller number, which with the current
costing model would probably happen a lot.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Parallel Append implementation

From
Andres Freund
Date:
On 2017-04-03 22:13:18 -0400, Robert Haas wrote:
> On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund <andres@anarazel.de> wrote:
> > Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
> > to try to compute the minimum total cost across all workers for
> > 1..#max_workers for the plans in an iterative manner?  I.e. try to map
> > each of the subplans to 1 (if non-partial) or N workers (partial) using
> > some fitting algorithm (e.g. always choosing the worker(s) that currently
> > have the least work assigned).  I think the current algorithm doesn't
> > lead to useful #workers for e.g. cases with a lot of non-partial,
> > high-startup plans - imo a quite reasonable scenario.
> 
> Well, that'd be totally unlike what we do in any other case.  We only
> generate a Parallel Seq Scan plan for a given table with one # of
> workers, and we cost it based on that.  We have no way to re-cost it
> if we changed our mind later about how many workers to use.
> Eventually, we should probably have something like what you're
> describing here, but in general, not just for this specific case.  One
> problem, of course, is to avoid having a larger number of workers
> always look better than a smaller number, which with the current
> costing model would probably happen a lot.

I don't think the parallel seqscan is comparable in complexity with the
parallel append case.  Each worker there does the same kind of work, and
if one of them is behind, it'll just do less.  But correct sizing will
be more important with parallel-append, because with non-partial
subplans the work is absolutely *not* uniform.

Greetings,

Andres Freund



Re: Parallel Append implementation

From
Amit Khandekar
Date:
Thanks, Andres, for your review comments. I will get back to you on the
other comments, but meanwhile, some queries about the particular
comment below ...

On 4 April 2017 at 10:17, Andres Freund <andres@anarazel.de> wrote:
> On 2017-04-03 22:13:18 -0400, Robert Haas wrote:
>> On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund <andres@anarazel.de> wrote:
>> > Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
>> > to try to compute the minimum total cost across all workers for
>> > 1..#max_workers for the plans in an iterative manner?  I.e. try to map
>> > each of the subplans to 1 (if non-partial) or N workers (partial) using
>> > some fitting algorithm (e.g. always choosing the worker(s) that currently
>> > have the least work assigned).  I think the current algorithm doesn't
>> > lead to useful #workers for e.g. cases with a lot of non-partial,
>> > high-startup plans - imo a quite reasonable scenario.

I think I might not have understood this part exactly. Are you saying
we need to consider per-subplan parallel_workers to calculate the total
number of workers for Append ? I also didn't get the point about
non-partial subplans. Can you please explain how many workers you think
should be expected with, say, 7 subplans out of which 3 are non-partial
subplans ?

>>
>> Well, that'd be totally unlike what we do in any other case.  We only
>> generate a Parallel Seq Scan plan for a given table with one # of
>> workers, and we cost it based on that.  We have no way to re-cost it
>> if we changed our mind later about how many workers to use.
>> Eventually, we should probably have something like what you're
>> describing here, but in general, not just for this specific case.  One
>> problem, of course, is to avoid having a larger number of workers
>> always look better than a smaller number, which with the current
>> costing model would probably happen a lot.
>
> I don't think the parallel seqscan is comparable in complexity with the
> parallel append case.  Each worker there does the same kind of work, and
> if one of them is behind, it'll just do less.  But correct sizing will
> be more important with parallel-append, because with non-partial
> subplans the work is absolutely *not* uniform.
>
> Greetings,
>
> Andres Freund



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: Parallel Append implementation

From
Amit Khandekar
Date:
On 4 April 2017 at 01:47, Andres Freund <andres@anarazel.de> wrote:
>> +typedef struct ParallelAppendDescData
>> +{
>> +     LWLock          pa_lock;                /* mutual exclusion to choose next subplan */
>> +     int                     pa_first_plan;  /* plan to choose while wrapping around plans */
>> +     int                     pa_next_plan;   /* next plan to choose by any worker */
>> +
>> +     /*
>> +      * pa_finished : workers currently executing the subplan. A worker which
>> +      * finishes a subplan should set pa_finished to true, so that no new
>> +      * worker picks this subplan. For non-partial subplan, a worker which picks
>> +      * up that subplan should immediately set to true, so as to make sure
>> +      * there are no more than 1 worker assigned to this subplan.
>> +      */
>> +     bool            pa_finished[FLEXIBLE_ARRAY_MEMBER];
>> +} ParallelAppendDescData;
>
>
>> +typedef ParallelAppendDescData *ParallelAppendDesc;
>
> Pointer hiding typedefs make this Andres sad.

Yeah .. I was trying to be consistent with other parts of the code where
we have typedefs for both the structure and a pointer to that structure.

>
>
>
>> @@ -291,6 +362,276 @@ ExecReScanAppend(AppendState *node)
>>               if (subnode->chgParam == NULL)
>>                       ExecReScan(subnode);
>>       }
>> +
>> +     if (padesc)
>> +     {
>> +             padesc->pa_first_plan = padesc->pa_next_plan = 0;
>> +             memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
>> +     }
>> +
>
> Is it actually guaranteed that none of the parallel workers are doing
> something at that point?

ExecReScanAppend() would be called by ExecReScanGather().
ExecReScanGather() shuts down all the parallel workers before calling
its child node (i.e. ExecReScanAppend).


>> +static bool
>> +exec_append_parallel_next(AppendState *state)
>> +{
>> +     ParallelAppendDesc padesc = state->as_padesc;
>> +     int             whichplan;
>> +     int             initial_plan;
>> +     int             first_partial_plan = ((Append *)state->ps.plan)->first_partial_plan;
>> +     bool    found;
>> +
>> +     Assert(padesc != NULL);
>> +
>> +     /* Backward scan is not supported by parallel-aware plans */
>> +     Assert(ScanDirectionIsForward(state->ps.state->es_direction));
>> +
>> +     /* The parallel leader chooses its next subplan differently */
>> +     if (!IsParallelWorker())
>> +             return exec_append_leader_next(state);
>
> It's a bit weird that the leader's case does is so separate, and does
> it's own lock acquisition.

Since we wanted to prevent the leader from taking the most expensive
non-partial plans first, I thought it would be better to keep its logic
simple and separate, so I could not merge it into the main logic.
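
To illustrate the intent, here is a rough sketch (mine, not the patch's
actual exec_append_leader_next()) of the idea: the subplans are kept in
descending order of cost with the non-partial ones first, and the leader
walks the array backwards, so it picks up the cheapest remaining subplan
and leaves the expensive non-partial ones to the workers:

#include <stdbool.h>

/* Sketch only: the leader scans backwards over the subplan array. */
static bool
append_leader_pick_sketch(const bool *pa_finished, int nplans, int *whichplan)
{
    int     i;

    for (i = nplans - 1; i >= 0; i--)
    {
        if (!pa_finished[i])
        {
            *whichplan = i;     /* cheapest unfinished subplan */
            return true;
        }
    }
    return false;               /* all subplans are finished */
}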

>
>
>> +     found = false;
>> +     for (whichplan = initial_plan; whichplan != PA_INVALID_PLAN;)
>> +     {
>> +             /*
>> +              * Ignore plans that are already done processing. These also include
>> +              * non-partial subplans which have already been taken by a worker.
>> +              */
>> +             if (!padesc->pa_finished[whichplan])
>> +             {
>> +                     found = true;
>> +                     break;
>> +             }
>> +
>> +             /*
>> +              * Note: There is a chance that just after the child plan node is
>> +              * chosen above, some other worker finishes this node and sets
>> +              * pa_finished to true. In that case, this worker will go ahead and
>> +              * call ExecProcNode(child_node), which will return NULL tuple since it
>> +              * is already finished, and then once again this worker will try to
>> +              * choose next subplan; but this is ok : it's just an extra
>> +              * "choose_next_subplan" operation.
>> +              */
>
> IIRC not all node types are safe against being executed again when
> they've previously returned NULL.  That's why e.g. nodeMaterial.c
> contains the following blurb:
>         /*
>          * If necessary, try to fetch another row from the subplan.
>          *
>          * Note: the eof_underlying state variable exists to short-circuit further
>          * subplan calls.  It's not optional, unfortunately, because some plan
>          * node types are not robust about being called again when they've already
>          * returned NULL.
>          */

This scenario is different from the parallel append scenario described
by my comment. A worker sets pa_finished to true only when it itself
gets a NULL tuple for a given subplan. So in
exec_append_parallel_next(), suppose a worker W1 finds a subplan with
pa_finished=false. So it chooses it. Now a different worker W2 sets
this subplan's pa_finished=true because W2 has got a NULL tuple. But
W1 hasn't yet got a NULL tuple. If it had got a NULL tuple earlier, it
would have itself set pa_finished to true, and then it would have
never again chosen this subplan. So effectively, a worker would never
execute the same subplan once that subplan returns NULL.
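
In other words, the per-worker flow is roughly the following (a sketch
only, not the patch's code; fetch_tuple_from() and pick_next_subplan()
are hypothetical stand-ins for ExecProcNode() on the chosen subplan and
for exec_append_parallel_next()):

#include <stdbool.h>

static bool fetch_tuple_from(int whichplan);    /* false once the subplan returns NULL */
static int  pick_next_subplan(bool *pa_finished, int nplans);   /* -1 when none left */

static void
worker_loop_sketch(bool *pa_finished, int nplans)
{
    int     whichplan = pick_next_subplan(pa_finished, nplans);

    while (whichplan >= 0)
    {
        if (fetch_tuple_from(whichplan))
            continue;           /* got a tuple; keep draining this subplan */

        /*
         * This worker itself saw the NULL tuple, so it is the one that
         * marks the subplan finished here, and it will never pick the same
         * subplan again.  If another worker set the flag first, the only
         * extra cost was the one empty fetch above.
         */
        pa_finished[whichplan] = true;
        whichplan = pick_next_subplan(pa_finished, nplans);
    }
}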

>
>
>> +     else if (IsA(subpath, MergeAppendPath))
>> +     {
>> +             MergeAppendPath *mpath = (MergeAppendPath *) subpath;
>> +
>> +             /*
>> +              * If at all MergeAppend is partial, all its child plans have to be
>> +              * partial : we don't currently support a mix of partial and
>> +              * non-partial MergeAppend subpaths.
>> +              */
>
> Why is that?

The mix of partial and non-partial subplans is being implemented only
for the Append plan. In the future, if and when we extend this support to
MergeAppend, we would need to change this. Till then, we can assume
that if a MergeAppend is partial, all its child plans have to be
partial; otherwise there wouldn't have been a partial MergeAppendPath.

BTW, MergeAppendPath itself is currently never partial. That is why the
comment says "If at all MergeAppend is partial".

>
>
>
>> +int
>> +get_append_num_workers(List *partial_subpaths, List *nonpartial_subpaths)
>> +{
>> +     ListCell   *lc;
>> +     double          log2w;
>> +     int                     num_workers;
>> +     int                     max_per_plan_workers;
>> +
>> +     /*
>> +      * log2(number_of_subpaths)+1 formula seems to give an appropriate number of
>> +      * workers for Append path either having high number of children (> 100) or
>> +      * having all non-partial subpaths or subpaths with 1-2 parallel_workers.
>> +      * Whereas, if the subpaths->parallel_workers is high, this formula is not
>> +      * suitable, because it does not take into account per-subpath workers.
>> +      * For e.g., with workers (2, 8, 8),
>
> That's the per-subplan workers for three subplans?  That's not
> necessarily clear.

Right. Corrected it to: "3 subplans having per-subplan workers such
as (2, 8, 8)".

>
>
>> the Append workers should be at least
>> +      * 8, whereas the formula gives 2. In this case, it seems better to follow
>> +      * the method used for calculating parallel_workers of an unpartitioned
>> +      * table : log3(table_size). So we treat the UNION query as if the data
>
> Which "UNION query"?

Changed it to "partitioned table". The idea is: treat all the data of
a partitioned table as if it belonged to a single non-partitioned
table, and then calculate the workers for such a table. It may not
apply exactly to a UNION query, because that can involve different
tables, possibly with joins too. So I replaced "UNION query" with
"partitioned table".
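
As a quick numeric check (my arithmetic, not from the patch): for
per-subplan workers (2, 8, 8) and base b = 3, logb(b^w1 + b^w2 + b^w3)
= log3(3^2 + 3^8 + 3^8) = log3(13131) ≈ 8.63, which is only slightly
above max(2, 8, 8) = 8; that is why simply taking the maximum of the
per-subplan workers is a reasonable substitute for the logarithmic
formula.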

>
>
>> +      * belongs to a single unpartitioned table, and then derive its workers. So
>> +      * it will be : logb(b^w1 + b^w2 + b^w3) where w1, w2.. are per-subplan
>> +      * workers and b is some logarithmic base such as 2 or 3. It turns out that
>> +      * this evaluates to a value just a bit greater than max(w1,w2, w3). So, we
>> +      * just use the maximum of workers formula. But this formula gives too few
>> +      * workers when all paths have single worker (meaning they are non-partial)
>> +      * For e.g. with workers : (1, 1, 1, 1, 1, 1), it is better to allocate 3
>> +      * workers, whereas this method allocates only 1.
>> +      * So we use whichever method that gives higher number of workers.
>> +      */
>> +
>> +     /* Get log2(num_subpaths) */
>> +     log2w = fls(list_length(partial_subpaths) +
>> +                             list_length(nonpartial_subpaths));
>> +
>> +     /* Avoid further calculations if we already crossed max workers limit */
>> +     if (max_parallel_workers_per_gather <= log2w + 1)
>> +             return max_parallel_workers_per_gather;
>> +
>> +
>> +     /*
>> +      * Get the parallel_workers value of the partial subpath having the highest
>> +      * parallel_workers.
>> +      */
>> +     max_per_plan_workers = 1;
>> +     foreach(lc, partial_subpaths)
>> +     {
>> +             Path       *subpath = lfirst(lc);
>> +             max_per_plan_workers = Max(max_per_plan_workers,
>> +                                                                subpath->parallel_workers);
>> +     }
>> +
>> +     /* Choose the higher of the results of the two formulae */
>> +     num_workers = rint(Max(log2w, max_per_plan_workers) + 1);
>> +
>> +     /* In no case use more than max_parallel_workers_per_gather workers. */
>> +     num_workers = Min(num_workers, max_parallel_workers_per_gather);
>> +
>> +     return num_workers;
>> +}
>
> Hm.  I'm not really convinced by the logic here.  Wouldn't it be better
> to try to compute the minimum total cost across all workers for
> 1..#max_workers for the plans in an iterative manner?  I.e. try to map
> each of the subplans to 1 (if non-partial) or N workers (partial) using
> some fitting algorith (e.g. always choosing the worker(s) that currently
> have the least work assigned).  I think the current algorithm doesn't
> lead to useful #workers for e.g. cases with a lot of non-partial,
> high-startup plans - imo a quite reasonable scenario.

Have responded in a separate reply.

>
>
> I'm afraid this is too late for v10 - do you agree?

I am not exactly sure; maybe it depends upon how many more review
comments follow this week. I anticipate there will not be any
high-level/design-level changes now.

Attached is an updated patch v13 that has some comments changed as per
your review, and also rebased on latest master.

Attachment

Re: Parallel Append implementation

From
Robert Haas
Date:
On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund <andres@anarazel.de> wrote:
> I don't think the parallel seqscan is comparable in complexity with the
> parallel append case.  Each worker there does the same kind of work, and
> if one of them is behind, it'll just do less.  But correct sizing will
> be more important with parallel-append, because with non-partial
> subplans the work is absolutely *not* uniform.

Sure, that's a problem, but I think it's still absolutely necessary to
ramp up the maximum "effort" (in terms of number of workers)
logarithmically.  If you just do it by costing, the winning number of
workers will always be the largest number that we think we'll be able
to put to use - e.g. with 100 branches of relatively equal cost we'll
pick 100 workers.  That's not remotely sane.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Parallel Append implementation

From
Robert Haas
Date:
On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund <andres@anarazel.de> wrote:
> I'm afraid this is too late for v10 - do you agree?

Yeah, I think so.  The benefit of this will be a lot more once we get
partitionwise join and partitionwise aggregate working, but that
probably won't happen for this release, or at best in limited cases.
And while we might not agree on exactly what work this patch still
needs, I think it does still need some work.  I've moved this to the
next CommitFest.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Parallel Append implementation

From
Andres Freund
Date:
On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
> On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund <andres@anarazel.de> wrote:
> > I don't think the parallel seqscan is comparable in complexity with the
> > parallel append case.  Each worker there does the same kind of work, and
> > if one of them is behind, it'll just do less.  But correct sizing will
> > be more important with parallel-append, because with non-partial
> > subplans the work is absolutely *not* uniform.
>
> Sure, that's a problem, but I think it's still absolutely necessary to
> ramp up the maximum "effort" (in terms of number of workers)
> logarithmically.  If you just do it by costing, the winning number of
> workers will always be the largest number that we think we'll be able
> to put to use - e.g. with 100 branches of relatively equal cost we'll
> pick 100 workers.  That's not remotely sane.

I'm quite unconvinced that just throwing a log() in there is the best
way to combat that.  Modeling the issue of starting more workers through
tuple transfer, locking, and startup overhead costing seems better to me.

If the goal is to compute the results of the query as fast as possible,
and to not use more than max_parallel_per_XXX, and it's actually
beneficial to use more workers, then we should.  Because otherwise you
really can't use the resources available.

- Andres



Re: Parallel Append implementation

From
Ashutosh Bapat
Date:


On Wed, Apr 5, 2017 at 1:43 AM, Andres Freund <andres@anarazel.de> wrote:
On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
> On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund <andres@anarazel.de> wrote:
> > I don't think the parallel seqscan is comparable in complexity with the
> > parallel append case.  Each worker there does the same kind of work, and
> > if one of them is behind, it'll just do less.  But correct sizing will
> > be more important with parallel-append, because with non-partial
> > subplans the work is absolutely *not* uniform.
>
> Sure, that's a problem, but I think it's still absolutely necessary to
> ramp up the maximum "effort" (in terms of number of workers)
> logarithmically.  If you just do it by costing, the winning number of
> workers will always be the largest number that we think we'll be able
> to put to use - e.g. with 100 branches of relatively equal cost we'll
> pick 100 workers.  That's not remotely sane.

I'm quite unconvinced that just throwing a log() in there is the best
way to combat that.  Modeling the issue of starting more workers through
tuple transfer, locking, startup overhead costing seems a better to me.

If the goal is to compute the results of the query as fast as possible,
and to not use more than max_parallel_per_XXX, and it's actually
beneficial to use more workers, then we should.  Because otherwise you
really can't use the resources available.
 
+1. I had expressed a similar opinion earlier, but yours is better articulated. Thanks.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company

Re: Parallel Append implementation

From
Amit Khandekar
Date:
On 5 April 2017 at 01:43, Andres Freund <andres@anarazel.de> wrote:
> On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
>> On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund <andres@anarazel.de> wrote:
>> > I don't think the parallel seqscan is comparable in complexity with the
>> > parallel append case.  Each worker there does the same kind of work, and
>> > if one of them is behind, it'll just do less.  But correct sizing will
>> > be more important with parallel-append, because with non-partial
>> > subplans the work is absolutely *not* uniform.
>>
>> Sure, that's a problem, but I think it's still absolutely necessary to
>> ramp up the maximum "effort" (in terms of number of workers)
>> logarithmically.  If you just do it by costing, the winning number of
>> workers will always be the largest number that we think we'll be able
>> to put to use - e.g. with 100 branches of relatively equal cost we'll
>> pick 100 workers.  That's not remotely sane.
>
> I'm quite unconvinced that just throwing a log() in there is the best
> way to combat that.  Modeling the issue of starting more workers through
> tuple transfer, locking, startup overhead costing seems a better to me.
>
> If the goal is to compute the results of the query as fast as possible,
> and to not use more than max_parallel_per_XXX, and it's actually
> beneficial to use more workers, then we should.  Because otherwise you
> really can't use the resources available.
>
> - Andres

This is what the earlier versions of my patch had done: just add up the
per-subplan parallel_workers (1 for a non-partial subplan and
subpath->parallel_workers for partial subplans) and set this total as
the Append parallel_workers.

Robert had a valid point that this would be inconsistent with the
worker count that we would come up with if it were a single table with
a cost as big as the total cost of all the Append subplans. We were
discussing this for a partitioned table versus the same table
unpartitioned, but I think the same argument applies to a union query
with non-partial plans: if we want to clamp down the number of
workers for a single table for a good reason, we should also follow
that policy and prevent assigning too many workers even for an
Append.

Now I am not sure of the reason why, for a single-table parallel scan,
we increase the number of workers logarithmically; but I think there might
have been an observation that after a certain number of workers, adding
more workers does not make a significant difference. This is just
my guess.

If we try to calculate workers based on each of the subplan costs
rather than just the number of workers, I still think the total worker
count should be a *log* of the total cost, so as to be consistent with
what we did for other scans. Now, log(total_cost) does not increase
significantly with cost. For a cost of 1000 units, log3(cost) will be
about 6, and for a cost of 10,000 units it is about 8, i.e. just 2 more
workers. So since it is a logarithmic value, I think we might as well
drop the cost factor and consider only the number of workers.

But again, if in the future we drop the log() method, then the above
is not valid. But until then, I think we should follow the common
strategy we have been following.

BTW, all of the above points apply only to non-partial plans. For
partial plans, what we have done in the patch is: take the highest of
the per-subplan parallel_workers, and make sure that the Append worker
count is at least as high as this value.
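
To make the two clamps concrete, here is a small standalone C
re-statement of the above (mine, not the patch code; my_fls() and the
hard-coded cap of 8 stand in for the fls() port function and
max_parallel_workers_per_gather):

#include <stdio.h>

#define WORKER_CAP 8            /* stand-in for max_parallel_workers_per_gather */

static int
my_fls(int x)                   /* 1-based position of the highest set bit */
{
    int     pos = 0;

    while (x > 0)
    {
        pos++;
        x >>= 1;
    }
    return pos;
}

/* rough re-statement of the get_append_num_workers() logic quoted above */
static int
append_num_workers_sketch(int nsubpaths, const int *partial_workers, int npartial)
{
    int     log2w = my_fls(nsubpaths);
    int     max_per_plan = 1;
    int     nworkers;
    int     i;

    for (i = 0; i < npartial; i++)
        if (partial_workers[i] > max_per_plan)
            max_per_plan = partial_workers[i];

    nworkers = (log2w > max_per_plan ? log2w : max_per_plan) + 1;
    return nworkers > WORKER_CAP ? WORKER_CAP : nworkers;
}

int
main(void)
{
    int     partial[] = {2, 8, 8};  /* three partial subplans */

    /* the max() side wins: max(fls(3), 8) + 1 = 9, capped to 8 */
    printf("%d\n", append_num_workers_sketch(3, partial, 3));

    /* six non-partial subplans: max(fls(6), 1) + 1 = 4 */
    printf("%d\n", append_num_workers_sketch(6, NULL, 0));

    return 0;
}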

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: Parallel Append implementation

From
Robert Haas
Date:
On Tue, Apr 4, 2017 at 4:13 PM, Andres Freund <andres@anarazel.de> wrote:
> I'm quite unconvinced that just throwing a log() in there is the best
> way to combat that.  Modeling the issue of starting more workers through
> tuple transfer, locking, startup overhead costing seems a better to me.

Knock yourself out.  There's no doubt that the way the number of
parallel workers is computed is pretty stupid right now, and it
obviously needs to get a lot smarter before we can consider doing
things like throwing 40 workers at a query.  If you throw 2 or 4
workers at a query and it turns out that it doesn't help much, that's
sad, but if you throw 40 workers at a query and it turns out that it
doesn't help much, or even regresses, that's a lot sadder.  The
existing system does try to model startup and tuple transfer overhead
during costing, but only as a way of comparing parallel plans to each
other or to non-parallel plans, not to work out the right number of
workers.  It also does not model contention, which it absolutely needs
to do.  I was kind of hoping that once the first version of parallel
query was committed, other developers who care about the query planner
would be motivated to improve some of this stuff, but so far that
hasn't really happened.  This release adds a decent number of new
execution capabilities, and there is a lot more work to be done there,
but without some serious work on the planner end of things I fear
we're never going to be able to get more than ~4x speedup out of
parallel query, because we're just too dumb to know how many workers
we really ought to be using.

That having been said, I completely and emphatically disagree that
this patch ought to be required to be an order-of-magnitude smarter
than the existing logic in order to get committed.  There are four
main things that this patch can hope to accomplish:

1. If we've got an Append node with children that have a non-zero
startup cost, it is currently pretty much guaranteed that every worker
will pay the startup cost for every child.  With Parallel Append, we
can spread out the workers across the plans, and once a plan has been
finished by however many workers it got, other workers can ignore it,
which means that its startup cost need not be paid by those workers.
This case will arise a lot more frequently once we have partition-wise
join.

2. When the Append node's children are partial plans, spreading out
the workers reduces contention for whatever locks those workers use to
coordinate access to shared data.

3. If the Append node represents a scan of a partitioned table, and
the partitions are on different tablespaces (or there's just enough
I/O bandwidth available in a single tablespace to read more than one
of them at once without slowing things down), then spreading out the
work gives us I/O parallelism.  This is an area where some
experimentation and benchmarking is needed, because there is a
possibility of regressions if we run several sequential scans on the
same spindle in parallel instead of consecutively.  We might need to
add some logic to try to avoid this, but it's not clear how that logic
should work.

4. If the Append node is derived from a UNION ALL query, we can run
different branches in different processes even if the plans are not
themselves able to be parallelized.  This was proposed by Stephen
among others as an "easy" case for parallelism, which was maybe a tad
optimistic, but it's sad that we're going to release v10 without
having done anything about it.

All of those things (except possibly #3) are wins over the status quo
even if the way we choose the number of workers is still pretty dumb.
It shouldn't get away with being dumber than what we've already got,
but it shouldn't be radically smarter - or even just radically
different because, if it is, then the results you get when you query a
partitioned table will be very different from what you get when you
query an unpartitioned table, which is not sensible.  I very much agree
that doing something smarter than log-scaling on the number of workers
is a good project for somebody to do, but it's not *this* project.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Parallel Append implementation

From
Andres Freund
Date:
On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
> This is what the earlier versions of my patch had done : just add up
> per-subplan parallel_workers (1 for non-partial subplan and
> subpath->parallel_workers for partial subplans) and set this total as
> the Append parallel_workers.

I don't think that's great; consider e.g. the case where you have one
very expensive query and a bunch of cheaper ones. Most of those workers
wouldn't do much while waiting for the expensive query.  What I'm
basically thinking we should do is something like the following
pythonesque pseudocode:

best_nonpartial_cost = -1
best_nonpartial_nworkers = -1

for numworkers in 1...#max workers:
   worker_work = [0 for x in range(0, numworkers)]

   nonpartial_cost += startup_cost * numworkers

   # distribute all nonpartial tasks over workers.  Assign tasks to the
   # worker with the least amount of work already performed.
   for task in all_nonpartial_subqueries:
       least_busy_worker = worker_work.smallest()
       least_busy_worker += task.total_nonpartial_cost

   # the nonpartial cost here is the largest amount any single worker
   # has to perform.
   nonpartial_cost += worker_work.largest()

   total_partial_cost = 0
   for task in all_partial_subqueries:
       total_partial_cost += task.total_nonpartial_cost

   # Compute resources needed by partial tasks. First compute how much
   # cost we can distribute to workers that take shorter than the
   # "busiest" worker doing non-partial tasks.
   remaining_avail_work = 0
   for i in range(0, numworkers):
       remaining_avail_work += worker_work.largest() - worker_work[i]

   # Equally divide up remaining work over all workers
   if remaining_avail_work < total_partial_cost:
      nonpartial_cost += (worker_work.largest - remaining_avail_work) / numworkers

   # check if this is the best number of workers
   if best_nonpartial_cost == -1 or best_nonpartial_cost > nonpartial_cost:
      best_nonpartial_cost = worker_work.largest
      best_nonpartial_nworkers = nworkers

Does that make sense?


> BTW all of the above points apply only for non-partial plans.

Indeed. But I think that's going to be a pretty common type of plan,
especially if we get partitionwise joins.


Greetings,

Andres Freund



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 6 April 2017 at 07:33, Andres Freund <andres@anarazel.de> wrote:
> On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
>> This is what the earlier versions of my patch had done : just add up
>> per-subplan parallel_workers (1 for non-partial subplan and
>> subpath->parallel_workers for partial subplans) and set this total as
>> the Append parallel_workers.
>
> I don't think that's great, consider e.g. the case that you have one
> very expensive query, and a bunch of cheaper ones. Most of those workers
> wouldn't do much while waiting for the the expensive query.  What I'm
> basically thinking we should do is something like the following
> pythonesque pseudocode:
>
> best_nonpartial_cost = -1
> best_nonpartial_nworkers = -1
>
> for numworkers in 1...#max workers:
>    worker_work = [0 for x in range(0, numworkers)]
>
>    nonpartial_cost += startup_cost * numworkers
>
>    # distribute all nonpartial tasks over workers.  Assign tasks to the
>    # worker with the least amount of work already performed.
>    for task in all_nonpartial_subqueries:
>        least_busy_worker = worker_work.smallest()
>        least_busy_worker += task.total_nonpartial_cost
>
>    # the nonpartial cost here is the largest amount any single worker
>    # has to perform.
>    nonpartial_cost += worker_work.largest()
>
>    total_partial_cost = 0
>    for task in all_partial_subqueries:
>        total_partial_cost += task.total_nonpartial_cost
>
>    # Compute resources needed by partial tasks. First compute how much
>    # cost we can distribute to workers that take shorter than the
>    # "busiest" worker doing non-partial tasks.
>    remaining_avail_work = 0
>    for i in range(0, numworkers):
>        remaining_avail_work += worker_work.largest() - worker_work[i]
>
>    # Equally divide up remaining work over all workers
>    if remaining_avail_work < total_partial_cost:
>       nonpartial_cost += (worker_work.largest - remaining_avail_work) / numworkers
>
>    # check if this is the best number of workers
>    if best_nonpartial_cost == -1 or best_nonpartial_cost > nonpartial_cost:
>       best_nonpartial_cost = worker_work.largest
>       best_nonpartial_nworkers = nworkers
>
> Does that make sense?

Yeah, I gather what you are trying to achieve is: allocate a number of
workers such that the total cost does not exceed the cost of the first
non-partial plan (i.e. the costliest one, because the plans are sorted
by descending cost).

So for non-partial costs such as (20, 10, 5, 2), allocate only 2
workers, because the 2nd worker will execute (10, 5, 2) while the 1st
worker executes (20).

But for costs such as (4, 4, 4, .... 20 times), the logic would give
us 20 workers because we want to finish the Append in 4 time units;
and this is what we want to avoid when we go with the
don't-allocate-too-many-workers approach.
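
For what it's worth, here is a small standalone C sketch of the greedy
least-loaded assignment (mine, and a simplification of the pseudocode:
non-partial subplans only, no startup or partial-cost terms); it
reproduces both examples above:

#include <stdio.h>

#define MAX_WORKERS 64

/* assign each subplan cost to the least-loaded worker, return the makespan */
static double
makespan(const double *costs, int ncosts, int nworkers)
{
    double  load[MAX_WORKERS] = {0};
    double  max = 0;
    int     i, w;

    for (i = 0; i < ncosts; i++)
    {
        int     least = 0;

        for (w = 1; w < nworkers; w++)
            if (load[w] < load[least])
                least = w;
        load[least] += costs[i];
    }

    for (w = 0; w < nworkers; w++)
        if (load[w] > max)
            max = load[w];
    return max;
}

int
main(void)
{
    double  a[] = {20, 10, 5, 2};
    double  b[20];
    int     n, i;

    for (i = 0; i < 20; i++)
        b[i] = 4;

    /* (20, 10, 5, 2): 37, 20, 20, 20; a 3rd or 4th worker buys nothing */
    for (n = 1; n <= 4; n++)
        printf("a: %d workers -> %.0f\n", n, makespan(a, 4, n));

    /* twenty 4s: only 20 workers bring the makespan down to 4 */
    for (n = 1; n <= 20; n++)
        printf("b: %d workers -> %.0f\n", n, makespan(b, 20, n));

    return 0;
}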

>
>
>> BTW all of the above points apply only for non-partial plans.
>
> Indeed. But I think that's going to be a pretty common type of plan,

Yes it is.

> especially if we get partitionwise joins.

About that I am not sure, because we already have support for parallel
joins, so wouldn't the join subpaths corresponding to all of the
partitions be partial paths? I may be wrong about that.

But if the subplans are foreign scans, then yes, all would be
non-partial plans. This may provoke an off-topic discussion, but here,
instead of assigning so many workers to all these foreign plans and
having all those workers wait for the results, a single asynchronous
execution node (which is still in the making) would be desirable,
because it would do the job of all these workers.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Andres Freund
Date:
Hi,

On 2017-04-07 11:44:39 +0530, Amit Khandekar wrote:
> On 6 April 2017 at 07:33, Andres Freund <andres@anarazel.de> wrote:
> > On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
> >> This is what the earlier versions of my patch had done : just add up
> >> per-subplan parallel_workers (1 for non-partial subplan and
> >> subpath->parallel_workers for partial subplans) and set this total as
> >> the Append parallel_workers.
> >
> > I don't think that's great, consider e.g. the case that you have one
> > very expensive query, and a bunch of cheaper ones. Most of those workers
> > wouldn't do much while waiting for the the expensive query.  What I'm
> > basically thinking we should do is something like the following
> > pythonesque pseudocode:
> >
> > best_nonpartial_cost = -1
> > best_nonpartial_nworkers = -1
> >
> > for numworkers in 1...#max workers:
> >    worker_work = [0 for x in range(0, numworkers)]
> >
> >    nonpartial_cost += startup_cost * numworkers
> >
> >    # distribute all nonpartial tasks over workers.  Assign tasks to the
> >    # worker with the least amount of work already performed.
> >    for task in all_nonpartial_subqueries:
> >        least_busy_worker = worker_work.smallest()
> >        least_busy_worker += task.total_nonpartial_cost
> >
> >    # the nonpartial cost here is the largest amount any single worker
> >    # has to perform.
> >    nonpartial_cost += worker_work.largest()
> >
> >    total_partial_cost = 0
> >    for task in all_partial_subqueries:
> >        total_partial_cost += task.total_nonpartial_cost
> >
> >    # Compute resources needed by partial tasks. First compute how much
> >    # cost we can distribute to workers that take shorter than the
> >    # "busiest" worker doing non-partial tasks.
> >    remaining_avail_work = 0
> >    for i in range(0, numworkers):
> >        remaining_avail_work += worker_work.largest() - worker_work[i]
> >
> >    # Equally divide up remaining work over all workers
> >    if remaining_avail_work < total_partial_cost:
> >       nonpartial_cost += (worker_work.largest - remaining_avail_work) / numworkers
> >
> >    # check if this is the best number of workers
> >    if best_nonpartial_cost == -1 or best_nonpartial_cost > nonpartial_cost:
> >       best_nonpartial_cost = worker_work.largest
> >       best_nonpartial_nworkers = nworkers
> >
> > Does that make sense?
> 
> Yeah, I gather what you are trying to achieve is : allocate number of
> workers such that the total cost does not exceed the cost of the first
> non-partial plan (i.e. the costliest one, because the plans are sorted
> by descending cost).
> 
> So for non-partial costs such as (20, 10, 5, 2) allocate only 2
> workers because the 2nd worker will execute (10, 5, 2) while 1st
> worker executes (20).
> 
> But for costs such as (4, 4, 4,  .... 20 times), the logic would give
> us 20 workers because we want to finish the Append in 4 time units;
> and this what we want to avoid when we go with
> don't-allocate-too-many-workers approach.

I guess my problem is that I don't agree with that as a goal in and of
itself.  If you actually want to run your query quickly, you *want* 20
workers here.  The issues of backend startup overhead (already modelled
via parallel_setup_cost), concurrency and other such costs should be
modelled, not buried in a formula the user can't change.  If we want to
make it less and less likely to start more workers, we should make that
configurable, not the default.
I think there's some precedent taken from the parallel seqscan case,
that's not actually applicable here.  Parallel seqscans have a good
amount of shared state, both on the kernel and pg side, and that shared
state reduces gains of increasing the number of workers.  But with
non-partial scans such shared state largely doesn't exist.


> > especially if we get partitionwise joins.
> 
> About that I am not sure, because we already have support for parallel
> joins, so wouldn't the join subpaths corresponding to all of the
> partitions be partial paths ? I may be wrong about that.

We'll probably generate both, and then choose the cheaper one.  The
startup cost for partitionwise joins should usually be a lot cheaper
(because e.g. for hashtables we'll generate smaller hashtables), and we
should have less cost of concurrency.


> But if the subplans are foreign scans, then yes all would be
> non-partial plans. This may provoke  off-topic discussion, but here
> instead of assigning so many workers to all these foreign plans and
> all those workers waiting for the results, a single asynchronous
> execution node (which is still in the making) would be desirable
> because it would do the job of all these workers.

That's something that probably shouldn't be modelled throug a parallel
append, I agree - it shouldn't be too hard to develop a costing model
for that however.

Greetings,

Andres Freund



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 7 April 2017 at 20:35, Andres Freund <andres@anarazel.de> wrote:
>> But for costs such as (4, 4, 4,  .... 20 times), the logic would give
>> us 20 workers because we want to finish the Append in 4 time units;
>> and this what we want to avoid when we go with
>> don't-allocate-too-many-workers approach.
>
> I guess, my problem is that I don't agree with that as a goal in an of
> itself.  If you actually want to run your query quickly, you *want* 20
> workers here.  The issues of backend startup overhead (already via
> parallel_setup_cost), concurrency and such cost should be modelled, not
> burried in a formula the user can't change.  If we want to make it less
> and less likely to start more workers we should make that configurable,
> not the default.
> I think there's some precedent taken from the parallel seqscan case,
> that's not actually applicable here.  Parallel seqscans have a good
> amount of shared state, both on the kernel and pg side, and that shared
> state reduces gains of increasing the number of workers.  But with
> non-partial scans such shared state largely doesn't exist.

After searching through earlier mails about parallel scan, I am not
sure whether the shared state was considered a potential factor that
might reduce parallel query gains when deciding the calculation of the
number of workers for a parallel seq scan. I mean, even today, if we
allocate 10 workers as against a calculated count of 4 workers for a
parallel seq scan, they might help. I think it's just that we don't
know whether they would *always* help or whether it would sometimes regress.



Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:

On Tue, Apr 4, 2017 at 12:37 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
Attached is an updated patch v13 that has some comments changed as per
your review, and also rebased on latest master.

This is not applicable on the latest head, i.e. commit 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7; it looks like it needs a rebase.

--
Regards,
Rafia Sabih

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 30 June 2017 at 15:10, Rafia Sabih <rafia.sabih@enterprisedb.com> wrote:
>
> On Tue, Apr 4, 2017 at 12:37 PM, Amit Khandekar <amitdkhan.pg@gmail.com>
> wrote:
>>
>> Attached is an updated patch v13 that has some comments changed as per
>> your review, and also rebased on latest master.
>
>
> This is not applicable on the latest head i.e. commit --
> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a rebasing.

Thanks for notifying. Attached is the rebased version of the patch.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> This is not applicable on the latest head i.e. commit --
>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a rebasing.
>
> Thanks for notifying. Attached is the rebased version of the patch.

This again needs a rebase.

(And, hey everybody, it also needs some review!)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 9 August 2017 at 19:05, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> This is not applicable on the latest head i.e. commit --
>>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a rebasing.
>>
>> Thanks for notifying. Attached is the rebased version of the patch.
>
> This again needs a rebase.

Attached rebased version of the patch. Thanks.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:
On Thu, Aug 10, 2017 at 11:04 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 9 August 2017 at 19:05, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>>> This is not applicable on the latest head i.e. commit --
>>>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a rebasing.
>>>
>>> Thanks for notifying. Attached is the rebased version of the patch.
>>
>> This again needs a rebase.
>
> Attached rebased version of the patch. Thanks.
>

I tested this patch for partitioned TPC-H queries along with the
partition-wise join patches [1]. The experimental setup used is as
follows.
Partitioning was done on the tables lineitem and orders, and the
partitioning keys were l_orderkey and o_orderkey respectively. A range
partitioning scheme was used, and the total number of partitions for
each of the tables was 17. These experiments are at scale factor 20.
Server parameters were kept as follows:
work_mem = 1GB
shared_buffers = 10GB
effective_cache_size = 10GB

All the values of time are in seconds

Query | Head | ParallelAppend + PWJ | Patches used by query
Q1 | 395 | 398 | only PA
Q3 | 130  | 90 | only PA
Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
Q5 | 123 | 77 | PA only
Q6 | 29 | 12 | PA only
Q7 | 134 | 88 | PA only
Q9 | 1051 | 1135 | PA only
Q10 | 111 | 70 | PA and PWJ, time by only PWJ - 89
Q12 | 114 | 70 | PA and PWJ, time by only PWJ - 100
Q14 | 13 | 12 | PA only
Q18 | 508 | 489 | PA only
Q21 | 649 | 163 | PA only

To conclude, the patch is working well for the benchmark, with no
serious cases of regression, at least at this scale factor, and the
improvement in performance is significant. Please find attached the
file with the explain analyse output of the queries.

[1] CAFjFpRfy-YBL6AX3yeO30pAupTMQXgkxDc2P3XBK52QDzGtX5Q@mail.gmail.com

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
Thanks for the benchmarking results!

On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
<rafia.sabih@enterprisedb.com> wrote:
> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41

12 seconds instead of 244?  Whoa.  I find it curious that we picked a
Parallel Append with a bunch of non-partial plans when we could've
just as easily picked partial plans, or so it seems to me.  To put
that another way, why did we end up with a bunch of Bitmap Heap Scans
here instead of Parallel Bitmap Heap Scans?

> Q7 | 134 | 88 | PA only
> Q18 | 508 | 489 | PA only

What's interesting in these results is that the join order changes
quite a lot when PA is in the mix, and I don't really see why that
should happen.  I haven't thought about how we're doing the PA costing
in a while, so that might just be my ignorance.  But I think we need
to try to separate the effect of the plan changes from the
execution-time effect of PA itself, so that we can (1) be sure that
the plan changes are legitimate and justifiable rather than the result
of some bug and (2) make sure that replacing an Append with a Parallel
Append with no other changes to the plan produces an execution-time
benefit as we're hoping.

> Q21 | 649 | 163 | PA only

This is a particularly interesting case because in both the patched
and unpatched plans, the driving scan is on the lineitem table and in
both cases a Parallel Seq Scan is used.  The join order is more
similar than in some of the other plans, but not the same: in the
unpatched case, it's l1-(nation-supplier)-l2-orders-l3; in the patched
case, it's l1-(nation-supplier)-l3-l2-orders.  The Parallel Append
node actually runs slower than the plain Append node (42.4 s vs. 39.0
s) but that plan ends up being faster overall.  I suspect that's
partly because the patched plan pulls 265680 rows through the Gather
node while the unpatched plan pulls 2888728 rows through the Gather
node, more than 10x more.  That's a very strange choice for the
planner to make, seemingly, and what's even stranger is that if it did
ALL of the joins below the Gather node it would only need to pull
78214 rows through the Gather node; why not do that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 August 2017 at 18:34, Robert Haas <robertmhaas@gmail.com> wrote:
> Thanks for the benchmarking results!
>
> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
> <rafia.sabih@enterprisedb.com> wrote:
>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>
> 12 seconds instead of 244?  Whoa.  I find it curious that we picked a
> Parallel Append with a bunch of non-partial plans when we could've
> just as easily picked partial plans, or so it seems to me.  To put
> that another way, why did we end up with a bunch of Bitmap Heap Scans
> here instead of Parallel Bitmap Heap Scans?
>
>> Q7 | 134 | 88 | PA only
>> Q18 | 508 | 489 | PA only
>
> What's interesting in these results is that the join order changes
> quite a lot when PA is in the mix, and I don't really see why that
> should happen.

Yes, it seems hard to determine why exactly the join order changes.
Parallel Append is expected to give the benefit especially if there
are no partial subplans. But for all of the cases here, partial
subplans seem possible, and so even on HEAD it executed a Partial
Append. So between a Parallel Append having partial subplans and a
Partial Append having partial subplans, the cost difference would not
be significant. Even if we assume that Parallel Append was chosen
because its cost turned out to be a bit cheaper, the actual
performance gain seems quite large as compared to the expected cost
difference. So it might even be possible that the performance gain
is due to some other reasons. I will investigate this, and the
other queries.



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Hi Rafia,

On 17 August 2017 at 14:12, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> But for all of the cases here, partial
> subplans seem possible, and so even on HEAD it executed Partial
> Append. So between a Parallel Append having partial subplans and a
> Partial Append having partial subplans , the cost difference would not
> be significant. Even if we assume that Parallel Append was chosen
> because its cost turned out to be a bit cheaper, the actual
> performance gain seems quite large as compared to the expected cost
> difference. So it might be even possible that the performance gain
> might be due to some other reasons. I will investigate this, and the
> other queries.
>

I ran all the queries that were showing performance benefits in your
run. But for me, the ParallelAppend benefits show up only for plans
that use Partition-Wise-Join.

For all the queries that use only PA plans but not PWJ plans, I got
the exact same plan for HEAD as for the PA+PWJ patch, except that for the
latter, the Append is a ParallelAppend. Whereas, for you, the plans
have the join order changed.

Regarding actual costs: consequently, for me the actual costs are more
or less the same for HEAD and PA+PWJ. Whereas, for your runs, you have
quite different costs, naturally, because the plans themselves are
different on HEAD versus PA+PWJ.

My PA+PWJ plan outputs (and actual costs) match exactly what you get
with the PA+PWJ patch. But like I said, I get the same join order and same
plans (and actual costs) for HEAD as well (except
ParallelAppend=>Append).

Maybe, if you have the latest HEAD code with your setup, you can
yourself check some of the queries again to see if they are still
seeing higher costs as compared to PA? I suspect that some changes in
the latest code might be causing this discrepancy, because when I tested
some of the explains with a HEAD-branch server running with your
database, I got results matching the PA figures.

Attached are my explain-analyze outputs.

On 16 August 2017 at 18:34, Robert Haas <robertmhaas@gmail.com> wrote:
> Thanks for the benchmarking results!
>
> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
> <rafia.sabih@enterprisedb.com> wrote:
>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>
> 12 seconds instead of 244?  Whoa.  I find it curious that we picked a
> Parallel Append with a bunch of non-partial plans when we could've
> just as easily picked partial plans, or so it seems to me.  To put
> that another way, why did we end up with a bunch of Bitmap Heap Scans
> here instead of Parallel Bitmap Heap Scans?

Actually, the cost difference would be quite low between a Parallel Append
with partial plans and a Parallel Append with non-partial plans with 2
workers. But yes, I should take a look at why it is consistently
taking a non-partial Bitmap Heap Scan.

----

> Q6 | 29 | 12 | PA only

This one needs to be analysed, because here the plan cost is the
same, but the actual cost for PA is almost half the cost for HEAD. This
is the same observation in my run also.

Thanks
-Amit


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
The last updated patch needs a rebase. Attached is the rebased version.

Thanks
-Amit Khandekar


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 30 August 2017 at 17:32, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 16 August 2017 at 18:34, Robert Haas <robertmhaas@gmail.com> wrote:
>> Thanks for the benchmarking results!
>>
>> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
>> <rafia.sabih@enterprisedb.com> wrote:
>>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>>
>> 12 seconds instead of 244?  Whoa.  I find it curious that we picked a
>> Parallel Append with a bunch of non-partial plans when we could've
>> just as easily picked partial plans, or so it seems to me.  To put
>> that another way, why did we end up with a bunch of Bitmap Heap Scans
>> here instead of Parallel Bitmap Heap Scans?
>
> Actually, the cost difference would be quite low for Parallel Append
> with partial plans and Parallel Append with non-partial plans with 2
> workers. But yes, I should take a look at why it is consistently
> taking non-partial Bitmap Heap Scan.

Here, I checked that the partial Bitmap Heap Scan path is not getting
created in the first place, but I think it should be.

As you can see from the plan snippet below, the inner path of the join
is a parameterized Index Scan:

 ->  Parallel Append
       ->  Nested Loop Semi Join
             ->  Bitmap Heap Scan on orders_004
                   Recheck Cond: ((o_orderdate >= '1994-01-01'::date) AND (o_orderdate < '1994-04-01 00:00:00'::timestamp without time zone))
                   ->  Bitmap Index Scan on idx_orders_orderdate_004
                         Index Cond: ((o_orderdate >= '1994-01-01'::date) AND (o_orderdate < '1994-04-01 00:00:00'::timestamp without time zone))
             ->  Index Scan using idx_lineitem_orderkey_004 on lineitem_004
                   Index Cond: (l_orderkey = orders_004.o_orderkey)
                   Filter: (l_commitdate < l_receiptdate)

In the index condition of the inner IndexScan path, it is referencing
the partition orders_004, which is used by the outer path. So this should
satisfy the partial join path restriction concerning a parameterized
inner path: "the inner path should not refer to relations *outside* the
join path". Here, it is referring to relations *inside* the join path.
But still this join path gets rejected by try_partial_nestloop_path(),
here:

if (inner_path->param_info != NULL)
{
    Relids      inner_paramrels = inner_path->param_info->ppi_req_outer;

    if (!bms_is_subset(inner_paramrels, outer_path->parent->relids))
        return;
}

Actually, bms_is_subset() above should return true, because
inner_paramrels and the outer_path relids should both have orders_004.
But that's not happening: inner_paramrels is referring to orders, not
orders_004. And hence bms_is_subset() returns false (thereby rejecting
the partial nestloop path). I suspect this is because the inner path is
not getting reparameterized so as to refer to the child relations. In the
PWJ patch, I saw that reparameterize_path_by_child() is called by
try_nestloop_path(), but not by try_partial_nestloop_path().

Now, for Parallel Append, if this partial nestloop subpath gets
created, it may or may not get chosen, depending upon the number of
workers. For example, if the number of workers is 6, and ParallelAppend+PWJ
runs with only 2 partitions, then the partial nested loop join would
definitely win, because we can put all 6 workers to work, whereas for
a ParallelAppend with all non-partial nested loop join subpaths, at
most only 2 workers could be allotted, one for each child. But if the
partitions are more, and the available workers are fewer, then I think
the cost difference between partial and non-partial join paths would
not be significant.

But here the issue is that the partial nested loop subpaths don't get
created in the first place. Going by the above analysis, this issue
should be worked on in a different thread, not this one.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> The last updated patch needs a rebase. Attached is the rebased version.
>

Few comments on the first read of the patch:

1.
@@ -279,6 +347,7 @@ void
 ExecReScanAppend(AppendState *node)
 {
  int i;
+ ParallelAppendDesc padesc = node->as_padesc;

  for (i = 0; i < node->as_nplans; i++)
  {
@@ -298,6 +367,276 @@ ExecReScanAppend(AppendState *node)
  if (subnode->chgParam == NULL)
  ExecReScan(subnode);
  }
+
+ if (padesc)
+ {
+ padesc->pa_first_plan = padesc->pa_next_plan = 0;
+ memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
+ }
+

For rescan purposes, the parallel state should be reinitialized via
ExecParallelReInitializeDSM.  We need to do it that way mainly to avoid
cases where the rescan machinery sometimes doesn't perform a rescan of
all the nodes.  See commit 41b0dd987d44089dc48e9c70024277e253b396b7.
(A possible shape for this is sketched after these comments.)

2.
+ * shared next_subplan counter index to start looking for unfinished plan,

Here, using "counter index" seems slightly confusing. I think using just
one of those two words would be better.

3.
+/* ----------------------------------------------------------------
+ * exec_append_leader_next
+ *
+ * To be used only if it's a parallel leader. The backend should scan
+ * backwards from the last plan. This is to prevent it from taking up
+ * the most expensive non-partial plan, i.e. the first subplan.
+ * ----------------------------------------------------------------
+ */
+static bool
+exec_append_leader_next(AppendState *state)

From the above explanation, it is clear that you don't want the leader
backend to pick an expensive plan, but the reason for this different
treatment is not clear.

4.
accumulate_partialappend_subpath()
{
..
+ /* Add partial subpaths, if any. */
+ return list_concat(partial_subpaths, apath_partial_paths);
..
+ return partial_subpaths;
..
+ if (is_partial)
+ return lappend(partial_subpaths, subpath);
..
}

In this function, instead of returning the partial_subpaths list from
multiple places, you can just return it at the end and in all other
places just append to it if required.  That way the code will look
clearer and simpler.

5.
 * is created to represent the case that a relation is provably empty.
+ *
 */
typedef struct AppendPath

Spurious line addition.

6.
if (!node->as_padesc)
{
    /*
     * This is Parallel-aware append. Follow it's own logic of choosing
     * the next subplan.
     */
    if (!exec_append_seq_next(node))

I think this is the case of non-parallel-aware appends, but the
comments are indicating the opposite.
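
For comment 1, a possible shape (only a sketch using the
ParallelAppendDescData fields quoted earlier, and assuming the usual
per-node DSM-reinitialize hook; not tested code) would be to move the
reset out of ExecReScanAppend into something like:

/* Sketch: reset the shared Parallel Append state when the DSM is reinitialized */
void
ExecAppendReInitializeDSM(AppendState *node, ParallelContext *pcxt)
{
    ParallelAppendDesc padesc = node->as_padesc;

    padesc->pa_first_plan = padesc->pa_next_plan = 0;
    memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
}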

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:
On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Hi Rafia,
>
> On 17 August 2017 at 14:12, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> But for all of the cases here, partial
>> subplans seem possible, and so even on HEAD it executed Partial
>> Append. So between a Parallel Append having partial subplans and a
>> Partial Append having partial subplans , the cost difference would not
>> be significant. Even if we assume that Parallel Append was chosen
>> because its cost turned out to be a bit cheaper, the actual
>> performance gain seems quite large as compared to the expected cost
>> difference. So it might be even possible that the performance gain
>> might be due to some other reasons. I will investigate this, and the
>> other queries.
>>
>
> I ran all the queries that were showing performance benefits in your
> run. But for me, the ParallelAppend benefits are shown only for plans
> that use Partition-Wise-Join.
>
> For all the queries that use only PA plans but not PWJ plans, I got
> the exact same plan for HEAD as for PA+PWJ patch, except that for the
> later, the Append is a ParallelAppend. Whereas, for you, the plans
> have join-order changed.
>
> Regarding actual costs; consequtively, for me the actual-cost are more
> or less the same for HEAD and PA+PWJ. Whereas, for your runs, you have
> quite different costs naturally because the plans themselves are
> different on head versus PA+PWJ.
>
> My PA+PWJ plan outputs (and actual costs) match exactly what you get
> with PA+PWJ patch. But like I said, I get the same join order and same
> plans (and actual costs) for HEAD as well (except
> ParallelAppend=>Append).
>
> May be, if you have the latest HEAD code with your setup, you can
> yourself check some of the queries again to see if they are still
> seeing higher costs as compared to PA ? I suspect that some changes in
> latest code might be causing this discrepancy; because when I tested
> some of the explains with a HEAD-branch server running with your
> database, I got results matching PA figures.
>
> Attached is my explain-analyze outputs.
>

Strange. Please let me know the commit-id you were experimenting on. I
think we need to investigate this a little further. Additionally, I want
to point out that I also applied patch [1], which I forgot to mention
before.

[1]
https://www.postgresql.org/message-id/CAEepm%3D3%3DNHHko3oOzpik%2BggLy17AO%2Bpx3rGYrg3x_x05%2BBr9-A%40mail.gmail.com

> On 16 August 2017 at 18:34, Robert Haas <robertmhaas@gmail.com> wrote:
>> Thanks for the benchmarking results!
>>
>> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
>> <rafia.sabih@enterprisedb.com> wrote:
>>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>>
>> 12 seconds instead of 244?  Whoa.  I find it curious that we picked a
>> Parallel Append with a bunch of non-partial plans when we could've
>> just as easily picked partial plans, or so it seems to me.  To put
>> that another way, why did we end up with a bunch of Bitmap Heap Scans
>> here instead of Parallel Bitmap Heap Scans?
>
> Actually, the cost difference would be quite low for Parallel Append
> with partial plans and Parallel Append with non-partial plans with 2
> workers. But yes, I should take a look at why it is consistently
> taking non-partial Bitmap Heap Scan.
>
> ----
>
>> Q6 | 29 | 12 | PA only
>
> This one needs to be analysed, because here, the plan cost is the
> same, but actual cost for PA is almost half the cost for HEAD. This is
> the same observation for my run also.
>
> Thanks
> -Amit



-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 7 September 2017 at 13:40, Rafia Sabih <rafia.sabih@enterprisedb.com> wrote:
> On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> Hi Rafia,
>>
>> On 17 August 2017 at 14:12, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> But for all of the cases here, partial
>>> subplans seem possible, and so even on HEAD it executed Partial
>>> Append. So between a Parallel Append having partial subplans and a
>>> Partial Append having partial subplans , the cost difference would not
>>> be significant. Even if we assume that Parallel Append was chosen
>>> because its cost turned out to be a bit cheaper, the actual
>>> performance gain seems quite large as compared to the expected cost
>>> difference. So it might be even possible that the performance gain
>>> might be due to some other reasons. I will investigate this, and the
>>> other queries.
>>>
>>
>> I ran all the queries that were showing performance benefits in your
>> run. But for me, the ParallelAppend benefits are shown only for plans
>> that use Partition-Wise-Join.
>>
>> For all the queries that use only PA plans but not PWJ plans, I got
>> the exact same plan for HEAD as for the PA+PWJ patch, except that for the
>> latter, the Append is a ParallelAppend. Whereas, for you, the plans
>> have join-order changed.
>>
>> Regarding actual costs; consequently, for me the actual costs are more
>> or less the same for HEAD and PA+PWJ. Whereas, for your runs, you have
>> quite different costs naturally because the plans themselves are
>> different on head versus PA+PWJ.
>>
>> My PA+PWJ plan outputs (and actual costs) match exactly what you get
>> with PA+PWJ patch. But like I said, I get the same join order and same
>> plans (and actual costs) for HEAD as well (except
>> ParallelAppend=>Append).
>>
>> Maybe, if you have the latest HEAD code with your setup, you can
>> yourself check some of the queries again to see if they are still
>> seeing higher costs as compared to PA? I suspect that some changes in
>> the latest code might be causing this discrepancy; because when I tested
>> some of the explains with a HEAD-branch server running with your
>> database, I got results matching PA figures.
>>
>> Attached is my explain-analyze outputs.
>>
>
> Strange. Please let me know which commit-id you were
> experimenting on. I think we need to investigate this a little
> further.

Sure. I think the commit was b5c75fec. It was around Aug 30 when
I ran the tests. But you may try on the latest HEAD.

> Additionally, I want to point out that I also applied patch [1],
> which I forgot to mention before.

Yes, I had also applied that patch over PA+PWJ.

>
> [1]
https://www.postgresql.org/message-id/CAEepm%3D3%3DNHHko3oOzpik%2BggLy17AO%2Bpx3rGYrg3x_x05%2BBr9-A%40mail.gmail.com

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 7 September 2017 at 11:05, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> The last updated patch needs a rebase. Attached is the rebased version.
>>
>
> Few comments on the first read of the patch:

Thanks !

>
> 1.
> @@ -279,6 +347,7 @@ void
>  ExecReScanAppend(AppendState *node)
>  {
>   int i;
> + ParallelAppendDesc padesc = node->as_padesc;
>
>   for (i = 0; i < node->as_nplans; i++)
>   {
> @@ -298,6 +367,276 @@ ExecReScanAppend(AppendState *node)
>   if (subnode->chgParam == NULL)
>   ExecReScan(subnode);
>   }
> +
> + if (padesc)
> + {
> + padesc->pa_first_plan = padesc->pa_next_plan = 0;
> + memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);
> + }
> +
>
> For rescan purposes, the parallel state should be reinitialized via
> ExecParallelReInitializeDSM.  We need to do it that way mainly to avoid
> cases where the rescan machinery sometimes doesn't perform a rescan of
> all the nodes.  See commit 41b0dd987d44089dc48e9c70024277e253b396b7.

Right. I didn't notice this while rebasing my patch over that commit.
Fixed it. Also added an exec_append_parallel_next() call in
ExecAppendReInitializeDSM(); otherwise the next ExecAppend() in the leader
would get an uninitialized as_whichplan.
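
Roughly, the reinitialization now looks like the sketch below (just an
illustration using the field and function names from the patch; the exact
signature, locking and error handling are not shown here):

/*
 * Sketch only: reset the shared parallel-append state when the DSM is
 * reinitialized for a rescan, rather than in ExecReScanAppend().
 */
void
ExecAppendReInitializeDSM(AppendState *node, ParallelContext *pcxt)
{
    ParallelAppendDesc padesc = node->as_padesc;

    padesc->pa_first_plan = padesc->pa_next_plan = 0;
    memset(padesc->pa_finished, 0, sizeof(bool) * node->as_nplans);

    /* Pick a subplan for the leader so that as_whichplan is initialized. */
    exec_append_parallel_next(node);
}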

>
> 2.
> + * shared next_subplan counter index to start looking for unfinished plan,

Done.

>
> Here, using "counter index" seems slightly confusing. I think using one
> of those two words will be better.

Re-worded it a bit. See whether that's what you wanted.

>
> 3.
> +/* ----------------------------------------------------------------
> + * exec_append_leader_next
> + *
> + * To be used only if it's a parallel leader. The backend should scan
> + * backwards from the last plan. This is to prevent it from taking up
> + * the most expensive non-partial plan, i.e. the first subplan.
> + * ----------------------------------------------------------------
> + */
> +static bool
> +exec_append_leader_next(AppendState *state)
>
> From above explanation, it is clear that you don't want backend to
> pick an expensive plan for a leader, but the reason for this different
> treatment is not clear.

Explained it, saying that with more workers, the leader spends more time
processing the worker tuples and less time contributing to the
parallel processing itself. So it should not take up expensive plans;
otherwise it will affect the total time to finish the Append plan.
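
For reference, the backwards scan boils down to something like this
simplified sketch (names are from the patch, but the shared-state locking
and the per-subplan worker accounting are left out, so this is only an
approximation of the actual logic):

static bool
exec_append_leader_next(AppendState *state)
{
    ParallelAppendDesc padesc = state->as_padesc;
    int         whichplan;

    /*
     * The leader starts from the last subplan, so the expensive non-partial
     * subplans kept at the front of the list are left for the workers.
     */
    for (whichplan = state->as_nplans - 1; whichplan >= 0; whichplan--)
    {
        if (!padesc->pa_finished[whichplan])
        {
            state->as_whichplan = whichplan;
            return true;
        }
    }

    return false;               /* no unfinished subplan left */
}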

>
> 4.
> accumulate_partialappend_subpath()
> {
> ..
> + /* Add partial subpaths, if any. */
> + return list_concat(partial_subpaths, apath_partial_paths);
> ..
> + return partial_subpaths;
> ..
> + if (is_partial)
> + return lappend(partial_subpaths, subpath);
> ..
> }
>
> In this function, instead of returning from multiple places
> partial_subpaths list, you can just return it at the end and in all
> other places just append to it if required.  That way code will look
> more clear and simpler.

Agreed. Did it that way.

>
> 5.
>  * is created to represent the case that a relation is provably empty.
> + *
>   */
>  typedef struct AppendPath
>
> Spurious line addition.
Removed.

>
> 6.
> if (!node->as_padesc)
> {
> /*
> * This is Parallel-aware append. Follow it's own logic of choosing
> * the next subplan.
> */
> if (!exec_append_seq_next(node))
>
> I think this is the case of non-parallel-aware appends, but the
> comments are indicating the opposite.

Yeah, this comment got left over there when the relevant code got
changed. Shifted that comment upwards where it is appropriate.

Attached is the updated patch v14.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:
On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Hi Rafia,
>
> On 17 August 2017 at 14:12, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> But for all of the cases here, partial
>> subplans seem possible, and so even on HEAD it executed Partial
>> Append. So between a Parallel Append having partial subplans and a
>> Partial Append having partial subplans , the cost difference would not
>> be significant. Even if we assume that Parallel Append was chosen
>> because its cost turned out to be a bit cheaper, the actual
>> performance gain seems quite large as compared to the expected cost
>> difference. So it might be even possible that the performance gain
>> might be due to some other reasons. I will investigate this, and the
>> other queries.
>>
>
> I ran all the queries that were showing performance benefits in your
> run. But for me, the ParallelAppend benefits are shown only for plans
> that use Partition-Wise-Join.
>
> For all the queries that use only PA plans but not PWJ plans, I got
> the exact same plan for HEAD as for the PA+PWJ patch, except that for the
> latter, the Append is a ParallelAppend. Whereas, for you, the plans
> have join-order changed.
>
> Regarding actual costs; consequently, for me the actual costs are more
> or less the same for HEAD and PA+PWJ. Whereas, for your runs, you have
> quite different costs naturally because the plans themselves are
> different on head versus PA+PWJ.
>
> My PA+PWJ plan outputs (and actual costs) match exactly what you get
> with PA+PWJ patch. But like I said, I get the same join order and same
> plans (and actual costs) for HEAD as well (except
> ParallelAppend=>Append).
>
> Maybe, if you have the latest HEAD code with your setup, you can
> yourself check some of the queries again to see if they are still
> seeing higher costs as compared to PA? I suspect that some changes in
> the latest code might be causing this discrepancy; because when I tested
> some of the explains with a HEAD-branch server running with your
> database, I got results matching PA figures.
>
> Attached is my explain-analyze outputs.
>

Now, when I compare your results with the ones I posted I could see
one major difference between them -- selectivity estimation errors.
In the results I posted, e.g. Q3, on head it gives following

->  Finalize GroupAggregate  (cost=41131358.89..101076015.45 rows=455492628 width=44) (actual time=126436.642..129247.972 rows=226765 loops=1)
      Group Key: lineitem_001.l_orderkey, orders_001.o_orderdate, orders_001.o_shippriority
      ->  Gather Merge  (cost=41131358.89..90637642.73 rows=379577190 width=44) (actual time=126436.602..127791.768 rows=235461 loops=1)
            Workers Planned: 2
            Workers Launched: 2

and in your results it is,
->  Finalize GroupAggregate  (cost=4940619.86..6652725.07 rows=13009521 width=44) (actual time=89573.830..91956.956 rows=226460 loops=1)
      Group Key: lineitem_001.l_orderkey, orders_001.o_orderdate, orders_001.o_shippriority
      ->  Gather Merge  (cost=4940619.86..6354590.21 rows=10841268 width=44) (actual time=89573.752..90747.393 rows=235465 loops=1)
            Workers Planned: 2
            Workers Launched: 2

However, for the results with the patch/es this is not the case,

in my results, with patch,
->  Finalize GroupAggregate  (cost=4933450.21..6631111.01 rows=12899766 width=44) (actual time=87250.039..90593.716 rows=226765 loops=1)
      Group Key: lineitem_001.l_orderkey, orders_001.o_orderdate, orders_001.o_shippriority
      ->  Gather Merge  (cost=4933450.21..6335491.38 rows=10749804 width=44) (actual time=87250.020..89125.279 rows=227291 loops=1)
            Workers Planned: 2
            Workers Launched: 2

I think this explains the drastic difference in the plan
choices, and thus the performance, between the two cases.

Since I was using the same database for both cases, I don't have an
obvious reason for such a difference in selectivity estimation for these
queries. The only thing might be a missing VACUUM ANALYZE, but since I
checked it a couple of times I am not sure even that could be the
reason. Additionally, it is not the case for all the queries; in
Q10 and Q21, for example, the estimates are similar.

However, on a fresh database the selectivity estimates and plans
reported by you and with the patched version I posted seem to be the
correct ones. I'll see if I can check the performance of these queries once
again to verify this.

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Fri, Sep 8, 2017 at 3:59 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 7 September 2017 at 11:05, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> 3.
>> +/* ----------------------------------------------------------------
>> + * exec_append_leader_next
>> + *
>> + * To be used only if it's a parallel leader. The backend should scan
>> + * backwards from the last plan. This is to prevent it from taking up
>> + * the most expensive non-partial plan, i.e. the first subplan.
>> + * ----------------------------------------------------------------
>> + */
>> +static bool
>> +exec_append_leader_next(AppendState *state)
>>
>> From above explanation, it is clear that you don't want backend to
>> pick an expensive plan for a leader, but the reason for this different
>> treatment is not clear.
>
> Explained it, saying that for more workers, a leader spends more work
> in processing the worker tuples , and less work contributing to
> parallel processing. So it should not take expensive plans, otherwise
> it will affect the total time to finish Append plan.
>

In that case, why can't we have the workers also process the subplans
in the same order; what is the harm in that?  Also, the leader will always scan
the subplans backwards from the last subplan even if all the subplans are partial
plans.  I think this will be an unnecessary difference in the
strategy of the leader and workers, especially when all paths are partial.
I think the selection of the next subplan might become simpler if we use
the same strategy for the workers and the leader.

Few more comments:

1.
+ else if (IsA(subpath, MergeAppendPath))
+ {
+ MergeAppendPath *mpath = (MergeAppendPath *) subpath;
+
+ /*
+ * If at all MergeAppend is partial, all its child plans have to be
+ * partial : we don't currently support a mix of partial and
+ * non-partial MergeAppend subpaths.
+ */
+ if (is_partial)
+ return list_concat(partial_subpaths, list_copy(mpath->subpaths));

In which situation partial MergeAppendPath is generated?  Can you
provide one example of such path?

2.
add_paths_to_append_rel()
{
..
+ /* Consider parallel append path. */
+ if (pa_subpaths_valid)
+ {
+ AppendPath *appendpath;
+ int parallel_workers;
+
+ parallel_workers = get_append_num_workers(pa_partial_subpaths,
+  pa_nonpartial_subpaths);
+ appendpath = create_append_path(rel, pa_nonpartial_subpaths,
+ pa_partial_subpaths,
+ NULL, parallel_workers, true,
+ partitioned_rels);
+ add_partial_path(rel, (Path *) appendpath);
+ }
+
  /*
- * Consider an append of partial unordered, unparameterized partial paths.
+ * Consider non-parallel partial append path. But if the parallel append
+ * path is made out of all partial subpaths, don't create another partial
+ * path; we will keep only the parallel append path in that case.
  */
- if (partial_subpaths_valid)
+ if (partial_subpaths_valid && !pa_all_partial_subpaths)
  {
  AppendPath *appendpath;
  ListCell   *lc;
  int parallel_workers = 0;

  /*
- * Decide on the number of workers to request for this append path.
- * For now, we just use the maximum value from among the members.  It
- * might be useful to use a higher number if the Append node were
- * smart enough to spread out the workers, but it currently isn't.
+ * To decide the number of workers, just use the maximum value from
+ * among the children.
  */
  foreach(lc, partial_subpaths)
  {
@@ -1421,9 +1502,9 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel,
  }
  Assert(parallel_workers > 0);

- /* Generate a partial append path. */
- appendpath = create_append_path(rel, partial_subpaths, NULL,
- parallel_workers, partitioned_rels);
+ appendpath = create_append_path(rel, NIL, partial_subpaths,
+ NULL, parallel_workers, false,
+ partitioned_rels);
  add_partial_path(rel, (Path *) appendpath);
  }
..
}

I think it might be better to add a sentence why we choose a different
way to decide a number of workers in the second case
(non-parallel-aware append).  Do you think non-parallel-aware Append
will be better in any case when there is a parallel-aware append?  I
mean to say let's try to create non-parallel-aware append only when
parallel-aware append is not possible.

3.
+ * evaluates to a value just a bit greater than max(w1,w2, w3). So, we

> The spacing between w1, w2, w3 is not the same.

4.
-  select count(*) from a_star;
-select count(*) from a_star;
+  select round(avg(aa)), sum(aa) from a_star;
+select round(avg(aa)), sum(aa) from a_star;

> Why have you changed the existing test? It seems count(*) will also
> give what you are expecting.



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 8 September 2017 at 19:17, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Sep 8, 2017 at 3:59 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 7 September 2017 at 11:05, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> 3.
>>> +/* ----------------------------------------------------------------
>>> + * exec_append_leader_next
>>> + *
>>> + * To be used only if it's a parallel leader. The backend should scan
>>> + * backwards from the last plan. This is to prevent it from taking up
>>> + * the most expensive non-partial plan, i.e. the first subplan.
>>> + * ----------------------------------------------------------------
>>> + */
>>> +static bool
>>> +exec_append_leader_next(AppendState *state)
>>>
>>> From above explanation, it is clear that you don't want backend to
>>> pick an expensive plan for a leader, but the reason for this different
>>> treatment is not clear.
>>
>> Explained it, saying that for more workers, a leader spends more work
>> in processing the worker tuples , and less work contributing to
>> parallel processing. So it should not take expensive plans, otherwise
>> it will affect the total time to finish Append plan.
>>
>
> In that case, why can't we have the workers also process the subplans
> in the same order; what is the harm in that?

Because of the way the queuing logic works, the workers finish
earlier if they start with the expensive plans first. For example: with 3
plans with costs 8, 4, 4 and with 2 workers w1 and w2, they will
finish in 8 time units (w1 will finish plan 1 in 8 units, while in parallel
w2 will finish the remaining 2 plans in 8 units). Whereas if the plans
are ordered as 4, 4, 8, then the workers will finish in 12 time
units (w1 and w2 will finish each of the first two plans in 4 units, and
then w1 or w2 will take up plan 3 and finish it in 8 units, while the
other worker remains idle).
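
To spell out the arithmetic (assuming the cost units map directly to
execution time):

Plans ordered 8, 4, 4 (expensive first):
    w1: plan1 (t=0..8)
    w2: plan2 (t=0..4), plan3 (t=4..8)     => Append finishes at t=8

Plans ordered 4, 4, 8 (cheap first):
    w1: plan1 (t=0..4), plan3 (t=4..12)
    w2: plan2 (t=0..4), then idle          => Append finishes at t=12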

> Also, the leader will always scan
> the subplans from last subplan even if all the subplans are partial
> plans.

Since we already need to have two different code paths, I think we can
use the same code paths for any subplans.

> I think this will be the unnecessary difference in the
> strategy of leader and worker especially when all paths are partial.
> I think the selection of next subplan might become simpler if we use
> the same strategy for worker and leader.

Yeah, if we had a common method for both, it would have been better. But
anyway, we have different logic to maintain.

>
> Few more comments:
>
> 1.
> + else if (IsA(subpath, MergeAppendPath))
> + {
> + MergeAppendPath *mpath = (MergeAppendPath *) subpath;
> +
> + /*
> + * If at all MergeAppend is partial, all its child plans have to be
> + * partial : we don't currently support a mix of partial and
> + * non-partial MergeAppend subpaths.
> + */
> + if (is_partial)
> + return list_concat(partial_subpaths, list_copy(mpath->subpaths));
>
> In which situation partial MergeAppendPath is generated?  Can you
> provide one example of such path?

Actually currently we don't support partial paths for MergeAppendPath.
That code just has that if condition (is_partial) but currently that
condition won't be true for MergeAppendPath.

>
> 2.
> add_paths_to_append_rel()
> {
> ..
> + /* Consider parallel append path. */
> + if (pa_subpaths_valid)
> + {
> + AppendPath *appendpath;
> + int parallel_workers;
> +
> + parallel_workers = get_append_num_workers(pa_partial_subpaths,
> +  pa_nonpartial_subpaths);
> + appendpath = create_append_path(rel, pa_nonpartial_subpaths,
> + pa_partial_subpaths,
> + NULL, parallel_workers, true,
> + partitioned_rels);
> + add_partial_path(rel, (Path *) appendpath);
> + }
> +
>   /*
> - * Consider an append of partial unordered, unparameterized partial paths.
> + * Consider non-parallel partial append path. But if the parallel append
> + * path is made out of all partial subpaths, don't create another partial
> + * path; we will keep only the parallel append path in that case.
>   */
> - if (partial_subpaths_valid)
> + if (partial_subpaths_valid && !pa_all_partial_subpaths)
>   {
>   AppendPath *appendpath;
>   ListCell   *lc;
>   int parallel_workers = 0;
>
>   /*
> - * Decide on the number of workers to request for this append path.
> - * For now, we just use the maximum value from among the members.  It
> - * might be useful to use a higher number if the Append node were
> - * smart enough to spread out the workers, but it currently isn't.
> + * To decide the number of workers, just use the maximum value from
> + * among the children.
>   */
>   foreach(lc, partial_subpaths)
>   {
> @@ -1421,9 +1502,9 @@ add_paths_to_append_rel(PlannerInfo *root,
> RelOptInfo *rel,
>   }
>   Assert(parallel_workers > 0);
>
> - /* Generate a partial append path. */
> - appendpath = create_append_path(rel, partial_subpaths, NULL,
> - parallel_workers, partitioned_rels);
> + appendpath = create_append_path(rel, NIL, partial_subpaths,
> + NULL, parallel_workers, false,
> + partitioned_rels);
>   add_partial_path(rel, (Path *) appendpath);
>   }
> ..
> }
>
> I think it might be better to add a sentence why we choose a different
> way to decide a number of workers in the second case
> (non-parallel-aware append).

Yes, I agree. Will do that after we conclude with your next point below ...

> Do you think non-parallel-aware Append
> will be better in any case when there is a parallel-aware append?  I
> mean to say let's try to create non-parallel-aware append only when
> parallel-aware append is not possible.

By non-parallel-aware append, I am assuming you meant  partial
non-parallel-aware Append. Yes, if the parallel-aware Append path has
*all* partial subpaths chosen, then we do omit a partial non-parallel
Append path, as seen in this code in the patch :

/*
* Consider non-parallel partial append path. But if the parallel append
* path is made out of all partial subpaths, don't create another partial
* path; we will keep only the parallel append path in that case.
*/
if (partial_subpaths_valid && !pa_all_partial_subpaths)
{
......
}

But if the parallel-Append path has a mix of partial and non-partial
subpaths, then we can't really tell which of the two could be cheapest
until we calculate the cost. It can be that the non-parallel-aware
partial Append can be cheaper as well.

>
> 3.
> + * evaluates to a value just a bit greater than max(w1,w2, w3). So, we
>
> The spacing between w1, w2, w3 is not the same.

Right, will note this down for the next updated patch.

>
> 4.
> -  select count(*) from a_star;
> -select count(*) from a_star;
> +  select round(avg(aa)), sum(aa) from a_star;
> +select round(avg(aa)), sum(aa) from a_star;
>
> Why have you changed the existing test? It seems count(*) will also
> give what you are expecting.

Needed it to cover some data testing with Parallel Append execution.


-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Mon, Sep 11, 2017 at 4:49 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 8 September 2017 at 19:17, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> In that case, why can't we have the workers also process the subplans
>> in the same order; what is the harm in that?
>
> Because of the way the logic of queuing works, the workers finish
> earlier if they start with expensive plans first. For e.g. : with 3
> plans with costs 8, 4, 4 and with 2 workers w1 and w2, they will
> finish in 8 time units (w1 will finish plan 1 in 8, while in parallel
> w2 will finish the remaining 2 plans in 8 units.  Whereas if the plans
> are ordered like : 4, 4, 8, then the workers will finish in 12 time
> units (w1 and w2 will finish each of the 1st two plans in 4 units, and
> then w1 or w2 will take up plan 3 and finish in 8 units, while the
> other worker remains idle).
>

I think the patch stores only non-partial paths in decreasing order;
what if partial paths having higher costs follow those paths?

>
>>
>> Few more comments:
>>
>> 1.
>> + else if (IsA(subpath, MergeAppendPath))
>> + {
>> + MergeAppendPath *mpath = (MergeAppendPath *) subpath;
>> +
>> + /*
>> + * If at all MergeAppend is partial, all its child plans have to be
>> + * partial : we don't currently support a mix of partial and
>> + * non-partial MergeAppend subpaths.
>> + */
>> + if (is_partial)
>> + return list_concat(partial_subpaths, list_copy(mpath->subpaths));
>>
>> In which situation partial MergeAppendPath is generated?  Can you
>> provide one example of such path?
>
> Actually currently we don't support partial paths for MergeAppendPath.
> That code just has that if condition (is_partial) but currently that
> condition won't be true for MergeAppendPath.
>

I think then it is better to have Assert for MergeAppend.  It is
generally not a good idea to add code which we can never hit.

>>
>> 2.
>> add_paths_to_append_rel()
..
>>
>> I think it might be better to add a sentence why we choose a different
>> way to decide a number of workers in the second case
>> (non-parallel-aware append).
>
> Yes, I agree. Will do that after we conclude with your next point below ...
>
>> Do you think non-parallel-aware Append
>> will be better in any case when there is a parallel-aware append?  I
>> mean to say let's try to create non-parallel-aware append only when
>> parallel-aware append is not possible.
>
> By non-parallel-aware append, I am assuming you meant  partial
> non-parallel-aware Append. Yes, if the parallel-aware Append path has
> *all* partial subpaths chosen, then we do omit a partial non-parallel
> Append path, as seen in this code in the patch :
>
> /*
> * Consider non-parallel partial append path. But if the parallel append
> * path is made out of all partial subpaths, don't create another partial
> * path; we will keep only the parallel append path in that case.
> */
> if (partial_subpaths_valid && !pa_all_partial_subpaths)
> {
> ......
> }
>
> But if the parallel-Append path has a mix of partial and non-partial
> subpaths, then we can't really tell which of the two could be cheapest
> until we calculate the cost. It can be that the non-parallel-aware
> partial Append can be cheaper as well.
>

How?  See, if you have four partial subpaths and two non-partial
subpaths, then for parallel-aware append it considers all six paths in
parallel path whereas for non-parallel-aware append it will consider
just four paths and that too with sub-optimal strategy.  Can you
please try to give me some example so that it will be clear.

>>
>> 4.
>> -  select count(*) from a_star;
>> -select count(*) from a_star;
>> +  select round(avg(aa)), sum(aa) from a_star;
>> +select round(avg(aa)), sum(aa) from a_star;
>>
>> Why have you changed the existing test? It seems count(*) will also
>> give what you are expecting.
>
> Needed it to cover some data testing with Parallel Append execution.
>

Okay.

One more thing, I think you might want to update comment atop
add_paths_to_append_rel to reflect the new type of parallel-aware
append path being generated. In particular, I am referring to below
part of the comment:
 * Similarly it collects partial paths from
 * non-dummy children to create partial append paths.
 */
static void
add_paths_to_append_rel()


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 11 September 2017 at 18:55, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> Do you think non-parallel-aware Append
>>> will be better in any case when there is a parallel-aware append?  I
>>> mean to say let's try to create non-parallel-aware append only when
>>> parallel-aware append is not possible.
>>
>> By non-parallel-aware append, I am assuming you meant  partial
>> non-parallel-aware Append. Yes, if the parallel-aware Append path has
>> *all* partial subpaths chosen, then we do omit a partial non-parallel
>> Append path, as seen in this code in the patch :
>>
>> /*
>> * Consider non-parallel partial append path. But if the parallel append
>> * path is made out of all partial subpaths, don't create another partial
>> * path; we will keep only the parallel append path in that case.
>> */
>> if (partial_subpaths_valid && !pa_all_partial_subpaths)
>> {
>> ......
>> }
>>
>> But if the parallel-Append path has a mix of partial and non-partial
>> subpaths, then we can't really tell which of the two could be cheapest
>> until we calculate the cost. It can be that the non-parallel-aware
>> partial Append can be cheaper as well.
>>
>
> How?  See, if you have four partial subpaths and two non-partial
> subpaths, then for parallel-aware append it considers all six paths in
> parallel path whereas for non-parallel-aware append it will consider
> just four paths and that too with sub-optimal strategy.  Can you
> please try to give me some example so that it will be clear.

Suppose 4 appendrel children have costs for their cheapest partial (p)
and non-partial paths (np)  as shown below :

p1=5000  np1=100
p2=200   np2=1000
p3=80   np3=2000
p4=3000  np4=50

Here, following two Append paths will be generated :

1. a parallel-aware Append path with subpaths :
np1, p2, p3, np4

2. Partial (i.e. non-parallel-aware) Append path with all partial subpaths:
p1,p2,p3,p4

Now, one thing we can do above is: make path #2 parallel-aware as
well, so that both Append paths would be parallel-aware. Are you suggesting
exactly this?

So above, what I am saying is, we can't tell which of paths #1 and
#2 is cheaper until we calculate the total cost. I didn't understand what
you meant by "non-parallel-aware append will consider only the
partial subpaths and that too with sub-optimal strategy" in the above
example. I guess you were considering a different scenario than the
above one.

Whereas, if one or more subpaths of the Append do not have a partial subpath
in the first place, then a non-parallel-aware partial Append is out of the
question, on which we both agree.
And the other case where we skip non-parallel-aware partial Append is
when all the cheapest subpaths of the parallel-aware Append path are
partial paths: we do not want parallel-aware and non-parallel-aware
Append paths both having exactly the same partial subpaths.

---------

I will be addressing your other comments separately.

Thanks
-Amit Khandekar



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think the patch stores only non-partial paths in decreasing order,
> what if partial paths having more costs follows those paths?

The general picture here is that we don't want the leader to get stuck
inside some long-running operation because then it won't be available
to read tuples from the workers.  On the other hand, we don't want to
just have the leader do no work because that might be slow.  And in
most cases, the leader will be the first participant to arrive at
the Append node, because of the worker startup time.  So the idea is
that the workers should pick expensive things first, and the leader
should pick cheap things first.  This may not always work out
perfectly and certainly the details of the algorithm may need some
refinement, but I think the basic concept is good.  Of course, that
may be because I proposed it...

Note that there's a big difference between the leader picking a
partial path and the leader picking a non-partial path.  If the leader
picks a partial path, it isn't committed to executing that path to
completion.  Other workers can help.  If the leader picks a
non-partial path, though, the workers are locked out of that path,
because a single process must run it all the way through.  So the
leader should avoid choosing a non-partial path unless there are no
partial paths remaining.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think the patch stores only non-partial paths in decreasing order,
>> what if partial paths having more costs follows those paths?
>
> The general picture here is that we don't want the leader to get stuck
> inside some long-running operation because then it won't be available
> to read tuples from the workers.  On the other hand, we don't want to
> just have the leader do no work because that might be slow.  And in
> most cases, the leader will be the first participant to arrive at
> the Append node, because of the worker startup time.  So the idea is
> that the workers should pick expensive things first, and the leader
> should pick cheap things first.
>

At a broader level, the idea is good, but I think it won't turn out
exactly like that considering your below paragraph which indicates
that it is okay if leader picks a partial path that is costly among
other partial paths as a leader won't be locked with that.
Considering this is a good design for parallel append, the question is
do we really need worker and leader to follow separate strategy for
choosing next path.  I think the patch will be simpler if we can come
up with a way for the worker and leader to use the same strategy to
pick next path to process.  How about we arrange the list of paths
such that first, all partial paths will be there and then non-partial
paths and probably both in decreasing order of cost.  Now, both leader
and worker can start from the beginning of the list. In most cases,
the leader will start at the first partial path and will only ever
need to scan non-partial path if there is no other partial path left.
This is not bulletproof as it is possible that some worker starts
before leader in which case leader might scan non-partial path before
all partial paths are finished, but I think we can avoid that as well
if we are too worried about such cases.
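
Just to illustrate the idea with the example costs posted upthread (where
the chosen subpaths were np1=100, p2=200, p3=80 and np4=50), the arranged
list would look something like this:

    partial, decreasing cost     : p2 (200), p3 (80)
    non-partial, decreasing cost : np1 (100), np4 (50)

    list = [ p2, p3, np1, np4 ]

Every participant, leader included, starts at p2, and the leader would move
on to np1 only if no partial subplan were left to work on.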


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Thu, Sep 14, 2017 at 8:30 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 11 September 2017 at 18:55, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>
>>
>> How?  See, if you have four partial subpaths and two non-partial
>> subpaths, then for parallel-aware append it considers all six paths in
>> parallel path whereas for non-parallel-aware append it will consider
>> just four paths and that too with sub-optimal strategy.  Can you
>> please try to give me some example so that it will be clear.
>
> Suppose 4 appendrel children have costs for their cheapest partial (p)
> and non-partial paths (np)  as shown below :
>
> p1=5000  np1=100
> p2=200   np2=1000
> p3=80   np3=2000
> p4=3000  np4=50
>
> Here, following two Append paths will be generated :
>
> 1. a parallel-aware Append path with subpaths :
> np1, p2, p3, np4
>
> 2. Partial (i.e. non-parallel-aware) Append path with all partial subpaths:
> p1,p2,p3,p4
>
> Now, one thing we can do above is : Make the path#2 parallel-aware as
> well; so both Append paths would be parallel-aware.
>

Yes, we can do that and that is what I think is probably better.  So,
the question remains that in which case non-parallel-aware partial
append will be required?  Basically, it is not clear to me why after
having parallel-aware partial append we need non-parallel-aware
version?  Are you keeping it for the sake of backward-compatibility or
something like for cases if someone has disabled parallel append with
the help of new guc in this patch?

>
> So above, what I am saying is, we can't tell which of the paths #1 and
> #2 are cheaper until we calculate total cost. I didn't understand what
> did you mean by "non-parallel-aware append will consider only the
> partial subpaths and that too with sub-optimal strategy" in the above
> example. I guess, you were considering a different scenario than the
> above one.
>

Yes, something different, but I think you can ignore that as we can
discuss the guts of my point based on the example given by you above.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 September 2017 at 11:45, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Sep 14, 2017 at 8:30 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 11 September 2017 at 18:55, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>>
>>>
>>> How?  See, if you have four partial subpaths and two non-partial
>>> subpaths, then for parallel-aware append it considers all six paths in
>>> parallel path whereas for non-parallel-aware append it will consider
>>> just four paths and that too with sub-optimal strategy.  Can you
>>> please try to give me some example so that it will be clear.
>>
>> Suppose 4 appendrel children have costs for their cheapest partial (p)
>> and non-partial paths (np)  as shown below :
>>
>> p1=5000  np1=100
>> p2=200   np2=1000
>> p3=80   np3=2000
>> p4=3000  np4=50
>>
>> Here, following two Append paths will be generated :
>>
>> 1. a parallel-aware Append path with subpaths :
>> np1, p2, p3, np4
>>
>> 2. Partial (i.e. non-parallel-aware) Append path with all partial subpaths:
>> p1,p2,p3,p4
>>
>> Now, one thing we can do above is : Make the path#2 parallel-aware as
>> well; so both Append paths would be parallel-aware.
>>
>
> Yes, we can do that and that is what I think is probably better.  So,
> the question remains that in which case non-parallel-aware partial
> append will be required?  Basically, it is not clear to me why after
> having parallel-aware partial append we need non-parallel-aware
> version?  Are you keeping it for the sake of backward-compatibility or
> something like for cases if someone has disabled parallel append with
> the help of new guc in this patch?

Yes, one case is the enable_parallelappend GUC. If a user disables
it, we do want to add the usual non-parallel-aware partial Append
path.

About backward compatibility, the concern we discussed in [1] was that
we had better continue to have the usual non-parallel-aware partial Append
path, plus an additional parallel-aware Append path
containing a mix of partial and non-partial subpaths.

But thinking again about the example above, Amit, I tend to agree
that we don't have to worry about the existing behaviour, and so we
can make path #2 parallel-aware as well.

Robert, can you please give your opinion on the paths that
are chosen in the above example?

[1] https://www.postgresql.org/message-id/CA%2BTgmoaLRtaWdJVHfhHej2s7w1spbr6gZiZXJrM5bsz1KQ54Rw%40mail.gmail.com

>
-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 16 September 2017 at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> I think the patch stores only non-partial paths in decreasing order,
>>> what if partial paths having more costs follows those paths?
>>
>> The general picture here is that we don't want the leader to get stuck
>> inside some long-running operation because then it won't be available
>> to read tuples from the workers.  On the other hand, we don't want to
>> just have the leader do no work because that might be slow.  And in
>> most cases, the leader will be the first participant to arrive at
>> the Append node, because of the worker startup time.  So the idea is
>> that the workers should pick expensive things first, and the leader
>> should pick cheap things first.
>>
>
> At a broader level, the idea is good, but I think it won't turn out
> exactly like that considering your below paragraph which indicates
> that it is okay if leader picks a partial path that is costly among
> other partial paths as a leader won't be locked with that.
> Considering this is a good design for parallel append, the question is
> do we really need worker and leader to follow separate strategy for
> choosing next path.  I think the patch will be simpler if we can come
> up with a way for the worker and leader to use the same strategy to
> pick next path to process.  How about we arrange the list of paths
> such that first, all partial paths will be there and then non-partial
> paths and probably both in decreasing order of cost.  Now, both leader
> and worker can start from the beginning of the list. In most cases,
> the leader will start at the first partial path and will only ever
> need to scan non-partial path if there is no other partial path left.
> This is not bulletproof as it is possible that some worker starts
> before leader in which case leader might scan non-partial path before
> all partial paths are finished, but I think we can avoid that as well
> if we are too worried about such cases.

If there are no partial subpaths, then again the leader is likely to
take up the expensive subpath. And this scenario would not be
uncommon. And for this scenario at least, we anyway have to make it
start from the cheapest one, so we will have to maintain code for that logic.
Now, since we anyway have to maintain that logic, why not use the same
logic for the leader in all cases? Otherwise, if we try to come up with
common logic that conditionally chooses a different next plan for the leader
or a worker, then that logic would most probably get more complicated than
the current state. Also, I don't see any performance issue if the
leader is running backwards while the others are going forwards.



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 11 September 2017 at 18:55, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> 1.
>>> + else if (IsA(subpath, MergeAppendPath))
>>> + {
>>> + MergeAppendPath *mpath = (MergeAppendPath *) subpath;
>>> +
>>> + /*
>>> + * If at all MergeAppend is partial, all its child plans have to be
>>> + * partial : we don't currently support a mix of partial and
>>> + * non-partial MergeAppend subpaths.
>>> + */
>>> + if (is_partial)
>>> + return list_concat(partial_subpaths, list_copy(mpath->subpaths));
>>>
>>> In which situation partial MergeAppendPath is generated?  Can you
>>> provide one example of such path?
>>
>> Actually currently we don't support partial paths for MergeAppendPath.
>> That code just has that if condition (is_partial) but currently that
>> condition won't be true for MergeAppendPath.
>>
>
> I think then it is better to have Assert for MergeAppend.  It is
> generally not a good idea to add code which we can never hit.

Done.

> One more thing, I think you might want to update comment atop
> add_paths_to_append_rel to reflect the new type of parallel-aware
> append path being generated. In particular, I am referring to below
> part of the comment:
>
>  * Similarly it collects partial paths from
>  * non-dummy children to create partial append paths.
>  */
> static void
> add_paths_to_append_rel()
>

Added comments.

Attached revised patch v15. There is still the open point being
discussed : whether to have non-parallel-aware partial Append path, or
always have parallel-aware Append paths.


-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 20 September 2017 at 11:32, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> There is still the open point being
> discussed : whether to have non-parallel-aware partial Append path, or
> always have parallel-aware Append paths.

Attached is the revised patch v16. In previous versions, we used to
add a non-parallel-aware Partial Append path having all partial
subpaths if the Parallel Append path already added does not contain
all-partial subpaths. Now in the patch, when we add such Append Path
containing all partial subpaths, we make it parallel-aware (unless
enable_parallelappend is false). So in this case, there will be a
parallel-aware Append path containing one or more non-partial
subpaths, and there will be another parallel-aware Append path
containing all-partial subpaths.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 16 September 2017 at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>> I think the patch stores only non-partial paths in decreasing order,
>>>> what if partial paths having more costs follows those paths?
>>>
>>> The general picture here is that we don't want the leader to get stuck
>>> inside some long-running operation because then it won't be available
>>> to read tuples from the workers.  On the other hand, we don't want to
>>> just have the leader do no work because that might be slow.  And in
>>> most cases, the leader will be the first participant to arrive at
>>> the Append node, because of the worker startup time.  So the idea is
>>> that the workers should pick expensive things first, and the leader
>>> should pick cheap things first.
>>>
>>
>> At a broader level, the idea is good, but I think it won't turn out
>> exactly like that considering your below paragraph which indicates
>> that it is okay if leader picks a partial path that is costly among
>> other partial paths as a leader won't be locked with that.
>> Considering this is a good design for parallel append, the question is
>> do we really need worker and leader to follow separate strategy for
>> choosing next path.  I think the patch will be simpler if we can come
>> up with a way for the worker and leader to use the same strategy to
>> pick next path to process.  How about we arrange the list of paths
>> such that first, all partial paths will be there and then non-partial
>> paths and probably both in decreasing order of cost.  Now, both leader
>> and worker can start from the beginning of the list. In most cases,
>> the leader will start at the first partial path and will only ever
>> need to scan non-partial path if there is no other partial path left.
>> This is not bulletproof as it is possible that some worker starts
>> before leader in which case leader might scan non-partial path before
>> all partial paths are finished, but I think we can avoid that as well
>> if we are too worried about such cases.
>
> If there are no partial subpaths, then again the leader is likely to
> take up the expensive subpath.
>

I think in general the non-partial paths should be cheaper as compared
to partial paths, as that is the reason the planner chose not to make a
partial plan in the first place. I think the idea the patch is using will help,
because the leader will choose to execute a partial path in most cases
(when there is a mix of partial and non-partial paths) and, for that
case, the leader is not bound to complete the execution of that path.
However, if all the paths are non-partial, then I am not sure how much
difference it would make to choose one path over another.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sat, Sep 16, 2017 at 2:15 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Yes, we can do that and that is what I think is probably better.  So,
> the question remains that in which case non-parallel-aware partial
> append will be required?  Basically, it is not clear to me why after
> having parallel-aware partial append we need non-parallel-aware
> version?  Are you keeping it for the sake of backward-compatibility or
> something like for cases if someone has disabled parallel append with
> the help of new guc in this patch?

We can't use parallel append if there are pathkeys, because parallel
append will disturb the output ordering.  Right now, that never
happens anyway, but that will change when
https://commitfest.postgresql.org/14/1093/ is committed.

Parallel append is also not safe for a parameterized path, and
set_append_rel_pathlist() already creates those.  I guess it could be
safe if the parameter is passed down from above the Gather, if we
allowed that, but it's sure not safe in a case like this:

Gather
-> Nested Loop
   -> Parallel Seq Scan
   -> Append
      -> whatever

If it's not clear why that's a disaster, please ask for a more
detailed explanation...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Fri, Sep 29, 2017 at 7:48 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I think in general the non-partial paths should be cheaper as compared
> to partial paths as that is the reason planner choose not to make a
> partial plan at first place. I think the idea patch is using will help
> because the leader will choose to execute partial path in most cases
> (when there is a mix of partial and non-partial paths) and for that
> case, the leader is not bound to complete the execution of that path.
> However, if all the paths are non-partial, then I am not sure much
> difference it would be to choose one path over other.

The case where all plans are non-partial is the case where it matters
most!  If the leader is going to take a share of the work, we want it
to take the smallest share possible.

It's a lot fuzzier what is best when there are only partial plans.
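
For instance (a made-up illustration): with four non-partial children
costing roughly 100, 90, 80 and 10, and a leader plus three workers arriving
together, a leader that grabs the 10-cost child is free again almost
immediately to keep draining worker tuples, whereas a leader that grabs the
100-cost child can throttle the whole Gather for the duration of that
subplan.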

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Sat, Sep 30, 2017 at 4:02 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Sep 29, 2017 at 7:48 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think in general the non-partial paths should be cheaper as compared
>> to partial paths as that is the reason planner choose not to make a
>> partial plan at first place. I think the idea patch is using will help
>> because the leader will choose to execute partial path in most cases
>> (when there is a mix of partial and non-partial paths) and for that
>> case, the leader is not bound to complete the execution of that path.
>> However, if all the paths are non-partial, then I am not sure much
>> difference it would be to choose one path over other.
>
> The case where all plans are non-partial is the case where it matters
> most!  If the leader is going to take a share of the work, we want it
> to take the smallest share possible.
>

Okay, but the point is whether it will make any difference
practically.  Let us try to see with an example: consider there are
two children (just taking two for simplicity, we can extend it to
many), the first having 1000 pages to scan and the second having 900 pages
to scan; then it might not make much difference which child plan the
leader chooses.  Now, it might matter if the first child relation has
1000 pages to scan and the second has just 1 page to scan, but I am not sure
how much difference it will make in practice, considering that is almost
the maximum possible theoretical difference between two non-partial
paths (if a relation has more than 1024 pages
(min_parallel_table_scan_size), then it will have a partial path).
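
(For reference: with the default 8 kB block size, 1024 pages corresponds to
8 MB, the default min_parallel_table_scan_size; so with default settings and
no per-table parallel_workers override, a child ends up with only a
non-partial path only when it is smaller than that.)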

> It's a lot fuzzier what is best when there are only partial plans.
>

The point that bothers me a bit is whether it is a clear win if we
allow the leader to choose a different strategy to pick the paths, or
whether this is just our theoretical assumption.  Basically, I think the patch
will become simpler if we pick some simple strategy to choose paths.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 16 September 2017 at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> At a broader level, the idea is good, but I think it won't turn out
>> exactly like that considering your below paragraph which indicates
>> that it is okay if leader picks a partial path that is costly among
>> other partial paths as a leader won't be locked with that.
>> Considering this is a good design for parallel append, the question is
>> do we really need worker and leader to follow separate strategy for
>> choosing next path.  I think the patch will be simpler if we can come
>> up with a way for the worker and leader to use the same strategy to
>> pick next path to process.  How about we arrange the list of paths
>> such that first, all partial paths will be there and then non-partial
>> paths and probably both in decreasing order of cost.  Now, both leader
>> and worker can start from the beginning of the list. In most cases,
>> the leader will start at the first partial path and will only ever
>> need to scan non-partial path if there is no other partial path left.
>> This is not bulletproof as it is possible that some worker starts
>> before leader in which case leader might scan non-partial path before
>> all partial paths are finished, but I think we can avoid that as well
>> if we are too worried about such cases.
>
> If there are no partial subpaths, then again the leader is likely to
> take up the expensive subpath. And this scenario would not be
> uncommon.
>

While thinking about how common the case of no partial subpaths would
be, it occurred to me that as of now we always create a partial path
for the inheritance child if it is parallel-safe and the user has not
explicitly set the value of parallel_workers to zero (refer
compute_parallel_worker).  So, unless you are planning to change that,
I think it will be quite uncommon to have no partial subpaths.

A few nitpicks in your latest patch:
1.
@@ -298,6 +366,292 @@ ExecReScanAppend(AppendState *node)
  if (subnode->chgParam == NULL)
  ExecReScan(subnode);
  }
+

Looks like a spurious line.

2.
@@ -1285,7 +1291,11 @@ add_paths_to_append_rel(PlannerInfo *root,
RelOptInfo *rel,
..
+ if (chosen_path && chosen_path != cheapest_partial_path)
+ pa_all_partial_subpaths = false;

It will keep on setting pa_all_partial_subpaths as false for
non-partial paths, which doesn't seem to be the purpose of this variable.
I think you want it to be set even when there is one non-partial path,
so isn't it better to write as below or something similar:
if (pa_nonpartial_subpaths && pa_all_partial_subpaths)
pa_all_partial_subpaths = false;


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sat, Sep 30, 2017 at 12:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Okay, but the point is whether it will make any difference
> practically.  Let us try to see with an example, consider there are
> two children (just taking two for simplicity, we can extend it to
> many) and first having 1000 pages to scan and second having 900 pages
> to scan, then it might not make much difference which child plan
> leader chooses.  Now, it might matter if the first child relation has
> 1000 pages to scan and second has just 1 page to scan, but not sure
> how much difference will it be in practice considering that is almost
> the maximum possible theoretical difference between two non-partial
> paths (if we have pages greater than 1024 pages
> (min_parallel_table_scan_size) then it will have a partial path).

But that's comparing two non-partial paths for the same relation --
the point here is to compare across relations.  Also keep in mind
scenarios like this:

SELECT ... FROM relation UNION ALL SELECT ... FROM generate_series(...);

>> It's a lot fuzzier what is best when there are only partial plans.
>>
>
> The point that bothers me a bit is whether it is a clear win if we
> allow the leader to choose a different strategy to pick the paths or
> is this just our theoretical assumption.  Basically, I think the patch
> will become simpler if we pick some simple strategy to choose paths.

Well, that's true, but is it really that much complexity?

And I actually don't see how this is very debatable.  If the only
paths that are reasonably cheap are GIN index scans, then the only
strategy is to dole them out across the processes you've got.  Giving
the leader the cheapest one seems to me to be clearly smarter than any
other strategy.  Am I missing something?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Sat, Sep 30, 2017 at 9:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Sep 30, 2017 at 12:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Okay, but the point is whether it will make any difference
>> practically.  Let us try to see with an example, consider there are
>> two children (just taking two for simplicity, we can extend it to
>> many) and first having 1000 pages to scan and second having 900 pages
>> to scan, then it might not make much difference which child plan
>> leader chooses.  Now, it might matter if the first child relation has
>> 1000 pages to scan and second has just 1 page to scan, but not sure
>> how much difference will it be in practice considering that is almost
>> the maximum possible theoretical difference between two non-partial
>> paths (if we have pages greater than 1024 pages
>> (min_parallel_table_scan_size) then it will have a partial path).
>
> But that's comparing two non-partial paths for the same relation --
> the point here is to compare across relations.

Isn't it for both?  I mean it is about comparing the non-partial paths
for child relations of the same relation and also when there are
different relations involved as in Union All kind of query.  In any
case, the point I was trying to say is that generally non-partial
relations will have relatively smaller scan size, so probably should
take lesser time to complete.

>  Also keep in mind
> scenarios like this:
>
> SELECT ... FROM relation UNION ALL SELECT ... FROM generate_series(...);
>

I think for the FunctionScan case, non-partial paths can be quite costly.

>>> It's a lot fuzzier what is best when there are only partial plans.
>>>
>>
>> The point that bothers me a bit is whether it is a clear win if we
>> allow the leader to choose a different strategy to pick the paths or
>> is this just our theoretical assumption.  Basically, I think the patch
>> will become simpler if we pick some simple strategy to choose paths.
>
> Well, that's true, but is it really that much complexity?
>
> And I actually don't see how this is very debatable.  If the only
> paths that are reasonably cheap are GIN index scans, then the only
> strategy is to dole them out across the processes you've got.  Giving
> the leader the cheapest one seems to me to be clearly smarter than any
> other strategy.
>

Sure, I think it is quite good if we can achieve that but it seems to
me that we will not be able to achieve that in all scenario's with the
patch and rather I think in some situations it can result in leader
ended up picking the costly plan (in case when there are all partial
plans or mix of partial and non-partial plans).  Now, we are ignoring
such cases based on the assumption that other workers might help to
complete master backend.  I think it is quite possible that the worker
backends picks up some plans which emit rows greater than tuple queue
size and they instead wait on the master backend which itself is busy
in completing its plan.  So master backend will end up taking too much
time.  If we want to go with a strategy of master (leader) backend and
workers taking a different strategy to pick paths to work on, then it
might be better if we should try to ensure that master backend always
starts from the place which has cheapest plans irrespective of whether
the path is partial or non-partial.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sun, Oct 1, 2017 at 9:55 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Isn't it for both?  I mean it is about comparing the non-partial paths
> for child relations of the same relation and also when there are
> different relations involved as in Union All kind of query.  In any
> case, the point I was trying to say is that generally non-partial
> relations will have relatively smaller scan size, so probably should
> take lesser time to complete.

I don't think that's a valid inference.  It's true that a relation
could fail to have a partial path because it's small, but that's only
one reason among very many.  The optimal index type could be one that
doesn't support parallel index scans, for example.

> Sure, I think it is quite good if we can achieve that but it seems to
> me that we will not be able to achieve that in all scenario's with the
> patch and rather I think in some situations it can result in leader
> ended up picking the costly plan (in case when there are all partial
> plans or mix of partial and non-partial plans).  Now, we are ignoring
> such cases based on the assumption that other workers might help to
> complete master backend.  I think it is quite possible that the worker
> backends picks up some plans which emit rows greater than tuple queue
> size and they instead wait on the master backend which itself is busy
> in completing its plan.  So master backend will end up taking too much
> time.  If we want to go with a strategy of master (leader) backend and
> workers taking a different strategy to pick paths to work on, then it
> might be better if we should try to ensure that master backend always
> starts from the place which has cheapest plans irrespective of whether
> the path is partial or non-partial.

I agree that it's complicated to get this right in all cases; I'm
realizing that's probably an unattainable ideal.

However, I don't think ignoring the distinction between partial and
non-partial plans is an improvement, because the argument that other
workers may be able to help the leader is a correct one.  You are
correct in saying that the workers might fill up their tuple queues
while the leader is working, but once the leader returns one tuple it
will switch to reading from the queues.  Every other tuple can be
returned by some worker.  On the other hand, if the leader picks a
non-partial plan, it must run that plan through to completion.

Imagine (a) a non-partial path with a cost of 1000 returning 100
tuples and (b) a partial path with a cost of 10,000 returning 1,000
tuples.  No matter which path the leader picks, it has about 10 units
of work to do to return 1 tuple.  However, if it picks the first path,
it is committed to doing 990 more units of work later, regardless of
whether the workers have filled the tuple queues or not.  If it picks
the second path, it again has about 10 units of work to do to return 1
tuple, but it hasn't committed to doing all the rest of the work of
that path.  So that's better.

Now it's hard to get all of the cases right.  If the partial path in
the previous example had a cost of 1 crore then even returning 1 tuple
is more costly than running the whole non-partial path.  When you
introduce partition-wise join and parallel hash, there are even more
problems.  Now the partial path might have a large startup cost, so
returning the first tuple may be very expensive, and that work may
help other workers (if this is a parallel hash) or it may not (if this
is a non-parallel hash).  I don't know that we can get all of these
cases right, or that we should try.  However, I still think that
picking partial paths preferentially is sensible -- we don't know all
the details, but we do know that they're typically going to at least
try to divide up the work in a fine-grained fashion, while a
non-partial path, once started, the leader must run it to completion.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Mon, Oct 2, 2017 at 6:21 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Oct 1, 2017 at 9:55 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Isn't it for both?  I mean it is about comparing the non-partial paths
>> for child relations of the same relation and also when there are
>> different relations involved as in Union All kind of query.  In any
>> case, the point I was trying to say is that generally non-partial
>> relations will have relatively smaller scan size, so probably should
>> take lesser time to complete.
>
> I don't think that's a valid inference.  It's true that a relation
> could fail to have a partial path because it's small, but that's only
> one reason among very many.  The optimal index type could be one that
> doesn't support parallel index scans, for example.
>

Valid point.

>> Sure, I think it is quite good if we can achieve that but it seems to
>> me that we will not be able to achieve that in all scenario's with the
>> patch and rather I think in some situations it can result in leader
>> ended up picking the costly plan (in case when there are all partial
>> plans or mix of partial and non-partial plans).  Now, we are ignoring
>> such cases based on the assumption that other workers might help to
>> complete master backend.  I think it is quite possible that the worker
>> backends picks up some plans which emit rows greater than tuple queue
>> size and they instead wait on the master backend which itself is busy
>> in completing its plan.  So master backend will end up taking too much
>> time.  If we want to go with a strategy of master (leader) backend and
>> workers taking a different strategy to pick paths to work on, then it
>> might be better if we should try to ensure that master backend always
>> starts from the place which has cheapest plans irrespective of whether
>> the path is partial or non-partial.
>
> I agree that it's complicated to get this right in all cases; I'm
> realizing that's probably an unattainable ideal.
>
> However, I don't think ignoring the distinction between partial and
> non-partial plans is an improvement, because the argument that other
> workers may be able to help the leader is a correct one.  You are
> correct in saying that the workers might fill up their tuple queues
> while the leader is working, but once the leader returns one tuple it
> will switch to reading from the queues.  Every other tuple can be
> returned by some worker.  On the other hand, if the leader picks a
> non-partial plan, it must run that plan through to completion.
>
> Imagine (a) a non-partial path with a cost of 1000 returning 100
> tuples and (b) a partial path with a cost of 10,000 returning 1,000
> tuples.  No matter which path the leader picks, it has about 10 units
> of work to do to return 1 tuple.  However, if it picks the first path,
> it is committed to doing 990 more units of work later, regardless of
> whether the workers have filled the tuple queues or not.  If it picks
> the second path, it again has about 10 units of work to do to return 1
> tuple, but it hasn't committed to doing all the rest of the work of
> that path.  So that's better.
>
> Now it's hard to get all of the cases right.  If the partial path in
> the previous example had a cost of 1 crore then even returning 1 tuple
> is more costly than running the whole non-partial path.  When you
> introduce partition-wise join and parallel hash, there are even more
> problems.  Now the partial path might have a large startup cost, so
> returning the first tuple may be very expensive, and that work may
> help other workers (if this is a parallel hash) or it may not (if this
> is a non-parallel hash).
>

Yeah, these are the types of cases I am also worried about.

>  I don't know that we can get all of these
> cases right, or that we should try.  However, I still think that
> picking partial paths preferentially is sensible -- we don't know all
> the details, but we do know that they're typically going to at least
> try to divide up the work in a fine-grained fashion, while a
> non-partial path, once started, the leader must run it to completion.
>

Okay, but can't we try to pick the cheapest partial path for master
backend and maybe master backend can try to work on a partial path
which is already picked up by some worker.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 30 September 2017 at 19:21, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 16 September 2017 at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>
>>> At a broader level, the idea is good, but I think it won't turn out
>>> exactly like that considering your below paragraph which indicates
>>> that it is okay if leader picks a partial path that is costly among
>>> other partial paths as a leader won't be locked with that.
>>> Considering this is a good design for parallel append, the question is
>>> do we really need worker and leader to follow separate strategy for
>>> choosing next path.  I think the patch will be simpler if we can come
>>> up with a way for the worker and leader to use the same strategy to
>>> pick next path to process.  How about we arrange the list of paths
>>> such that first, all partial paths will be there and then non-partial
>>> paths and probably both in decreasing order of cost.  Now, both leader
>>> and worker can start from the beginning of the list. In most cases,
>>> the leader will start at the first partial path and will only ever
>>> need to scan non-partial path if there is no other partial path left.
>>> This is not bulletproof as it is possible that some worker starts
>>> before leader in which case leader might scan non-partial path before
>>> all partial paths are finished, but I think we can avoid that as well
>>> if we are too worried about such cases.
>>
>> If there are no partial subpaths, then again the leader is likely to
>> take up the expensive subpath. And this scenario would not be
>> uncommon.
>>
>
> While thinking about how common the case of no partial subpaths would
> be, it occurred to me that as of now we always create a partial path
> for the inheritance child if it is parallel-safe and the user has not
> explicitly set the value of parallel_workers to zero (refer
> compute_parallel_worker).  So, unless you are planning to change that,
> I think it will be quite uncommon to have no partial subpaths.

There are still cases where the non-partial paths can be cheaper
than the partial paths. Also, there can be UNION ALL queries which
could have non-partial subpaths. I guess this has already been
discussed in the other replies.

>
> A few nitpicks in your latest patch:
> 1.
> @@ -298,6 +366,292 @@ ExecReScanAppend(AppendState *node)
>   if (subnode->chgParam == NULL)
>   ExecReScan(subnode);
>   }
> +
>
> Looks like a spurious line.
>
> 2.
> @@ -1285,7 +1291,11 @@ add_paths_to_append_rel(PlannerInfo *root,
> RelOptInfo *rel,
> ..
> + if (chosen_path && chosen_path != cheapest_partial_path)
> + pa_all_partial_subpaths = false;
>
> It will keep on setting pa_all_partial_subpaths as false for
> non-partial paths, which doesn't seem to be the purpose of this variable.
> I think you want it to be set even when there is one non-partial path,
> so isn't it better to write as below or something similar:
> if (pa_nonpartial_subpaths && pa_all_partial_subpaths)
> pa_all_partial_subpaths = false;

Ok. How about removing pa_all_partial_subpaths altogether , and
instead of the below condition :

/*
* If all the child rels have partial paths, and if the above Parallel
* Append path has a mix of partial and non-partial subpaths, then consider
* another Parallel Append path which will have *all* partial subpaths.
* If enable_parallelappend is off, make this one non-parallel-aware.
*/
if (partial_subpaths_valid && !pa_all_partial_subpaths)
......

Use this condition :
if (partial_subpaths_valid && pa_nonpartial_subpaths != NIL)
......

----


Regarding a mix of partial and non-partial paths, I feel it always
makes sense for the leader to choose the partial path. If it chooses a
non-partial path, no other worker would be able to help finish that
path. Among the partial paths, whether it chooses the cheapest one or
expensive one does not matter, I think. We have the partial paths
unordered. So whether it starts from the last partial path or the
first partial path should not matter.

Regarding the scenario where all paths are non-partial, here is an example:
Suppose we have 4 child paths with costs 10, 5, 5 and 3, with 2
workers plus one leader. And suppose the leader additionally takes
1/4th of these costs to process the returned tuples.

If the leader takes the least expensive one (3) :
the 2 workers will finish 10, 5, 5 in 10 units,
and the leader simultaneously takes the plan with cost 3, so it
takes 3 + (1/4)(10 + 5 + 5 + 3) = ~9 units.
So the total time taken by Append is : 10.


Whereas, if the leader takes the most expensive one (10) :
the 2 workers will finish the 2nd, 3rd and 4th plans (5, 5, 3) in 8 units,
and simultaneously the leader will finish the 1st plan (10) in 10 units, plus
the tuple processing cost, i.e. 10 + (1/4)(10 + 5 + 5 + 3) = ~16 units.
So the total time taken by Append is : 16.
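
For reference, the arithmetic above can be replayed with a tiny
standalone C sketch (illustrative only, not part of the patch; the
1/4th tuple-processing share is just the assumption from the example):

#include <stdio.h>

int
main(void)
{
    double costs[4] = {10, 5, 5, 3};
    double total = costs[0] + costs[1] + costs[2] + costs[3];
    double leader_extra = total / 4.0;  /* leader's tuple-processing share */

    /* Leader takes the cheapest subplan (3); workers take 10 and 5+5. */
    double leader_a = costs[3] + leader_extra;  /* 3 + 5.75 = 8.75  */
    double workers_a = 10.0;                    /* max(10, 5 + 5)   */
    printf("leader takes cheapest:  %.2f\n",
           leader_a > workers_a ? leader_a : workers_a);

    /* Leader takes the costliest subplan (10); workers take 5+3 and 5. */
    double leader_b = costs[0] + leader_extra;  /* 10 + 5.75 = 15.75 */
    double workers_b = 8.0;                     /* max(5 + 3, 5)     */
    printf("leader takes costliest: %.2f\n",
           leader_b > workers_b ? leader_b : workers_b);

    return 0;
}

This prints about 10.00 vs 15.75, matching the ~10 vs ~16 totals above.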


-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Oct 5, 2017 at 6:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Okay, but can't we try to pick the cheapest partial path for master
> backend and maybe master backend can try to work on a partial path
> which is already picked up by some worker.

Well, the master backend is typically going to be the first process to
arrive at the Parallel Append because it's already running, whereas
the workers have to start.  So in the case that really matters, no
paths will have been picked yet.  Later on, we could have the master
try to choose a path on which some other worker is already working,
but I really doubt that's optimal.  Either the workers are generating
a lot of tuples (in which case the leader probably won't do much work
on any path because it will be busy reading tuples) or they are
generating only a few tuples (in which case the leader is probably
better off working on a path not yet chosen, to reduce contention).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Thu, Oct 5, 2017 at 6:14 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Oct 5, 2017 at 6:29 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Okay, but can't we try to pick the cheapest partial path for master
>> backend and maybe master backend can try to work on a partial path
>> which is already picked up by some worker.
>
> Well, the master backend is typically going to be the first process to
> arrive at the Parallel Append because it's already running, whereas
> the workers have to start.
>

Sure, the leader can pick the cheapest partial path to start with.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>
> Ok. How about removing pa_all_partial_subpaths altogether , and
> instead of the below condition :
>
> /*
> * If all the child rels have partial paths, and if the above Parallel
> * Append path has a mix of partial and non-partial subpaths, then consider
> * another Parallel Append path which will have *all* partial subpaths.
> * If enable_parallelappend is off, make this one non-parallel-aware.
> */
> if (partial_subpaths_valid && !pa_all_partial_subpaths)
> ......
>
> Use this condition :
> if (partial_subpaths_valid && pa_nonpartial_subpaths != NIL)
> ......
>

Sounds good to me.

One minor point:

+ if (!node->as_padesc)
+ {
+ /*
+ */
+ if (!exec_append_seq_next(node))
+ return ExecClearTuple(node->ps.ps_ResultTupleSlot);
+ }

It seems either you want to add a comment in above part of patch or
you just left /**/ mistakenly.

> ----
>
>
> Regarding a mix of partial and non-partial paths, I feel it always
> makes sense for the leader to choose the partial path.
>

Okay, but why not cheapest partial path?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 6 October 2017 at 08:49, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>
>> Ok. How about removing pa_all_partial_subpaths altogether , and
>> instead of the below condition :
>>
>> /*
>> * If all the child rels have partial paths, and if the above Parallel
>> * Append path has a mix of partial and non-partial subpaths, then consider
>> * another Parallel Append path which will have *all* partial subpaths.
>> * If enable_parallelappend is off, make this one non-parallel-aware.
>> */
>> if (partial_subpaths_valid && !pa_all_partial_subpaths)
>> ......
>>
>> Use this condition :
>> if (partial_subpaths_valid && pa_nonpartial_subpaths != NIL)
>> ......
>>
>
> Sounds good to me.
>
> One minor point:
>
> + if (!node->as_padesc)
> + {
> + /*
> + */
> + if (!exec_append_seq_next(node))
> + return ExecClearTuple(node->ps.ps_ResultTupleSlot);
> + }
>
> It seems either you want to add a comment in above part of patch or
> you just left /**/ mistakenly.

Oops. Yeah, the comment wrapper remained there when I moved its
content "This is Parallel-aware append. Follow it's own logic ..." out
of the if block. Since this is too small a change for an updated
patch, I will do this along with any other changes that would be
required as the review progresses.

>
>> ----
>>
>>
>> Regarding a mix of partial and non-partial paths, I feel it always
>> makes sense for the leader to choose the partial path.
>>
>
> Okay, but why not cheapest partial path?

I gave some thought on this point. Overall I feel it does not matter
which partial path it should pick up. Right now the partial paths are
not ordered. But for non-partial paths sake, we are just choosing the
very last path. So in case of mixed paths, leader will get a partial
path, but that partial path would not be the cheapest path. But if we
also order the partial paths, the same logic would then pick up
cheapest partial path. The question is, should we also order the
partial paths for the leader ?

The only scenario I see where leader choosing cheapest partial path
*might* show some benefit, is if there are some partial paths that
need to do some startup work using only one worker. I think currently,
parallel hash join is one case where it builds the hash table, but I
guess here also, we support parallel hash build, but not sure about
the status. For such plan, if leader starts it, it would be slow, and
no other worker would be able to help it, so its actual startup cost
would be drastically increased. (Another path is parallel bitmap heap
scan where the leader has to do something and the other workers wait.
But here, I think it's not much work for the leader to do). So
overall, to handle such cases, it's better for leader to choose a
cheapest path, or may be, a path with cheapest startup cost. We can
also consider sorting partial paths with decreasing startup cost.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Amit Kapila
Date:
On Fri, Oct 6, 2017 at 12:03 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 6 October 2017 at 08:49, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> Okay, but why not cheapest partial path?
>
> I gave some thought on this point. Overall I feel it does not matter
>> which partial path it should pick up. Right now the partial paths are
> not ordered. But for non-partial paths sake, we are just choosing the
> very last path. So in case of mixed paths, leader will get a partial
> path, but that partial path would not be the cheapest path. But if we
> also order the partial paths, the same logic would then pick up
> cheapest partial path. The question is, should we also order the
> partial paths for the leader ?
>
> The only scenario I see where leader choosing cheapest partial path
> *might* show some benefit, is if there are some partial paths that
> need to do some startup work using only one worker. I think currently,
> parallel hash join is one case where it builds the hash table, but I
> guess here also, we support parallel hash build, but not sure about
> the status.
>

You also need to consider how merge join currently works in parallel
(each worker needs to perform the whole of the work for the right side).  I
think there could be more scenarios where the startup cost is high
and a parallel worker needs to do that work independently.
> For such plan, if leader starts it, it would be slow, and
> no other worker would be able to help it, so its actual startup cost
> would be drastically increased. (Another path is parallel bitmap heap
> scan where the leader has to do something and the other workers wait.
> But here, I think it's not much work for the leader to do). So
> overall, to handle such cases, it's better for leader to choose a
> cheapest path, or may be, a path with cheapest startup cost. We can
> also consider sorting partial paths with decreasing startup cost.
>

Yeah, that sounds reasonable.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 9 October 2017 at 16:03, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Oct 6, 2017 at 12:03 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 6 October 2017 at 08:49, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>
>>> Okay, but why not cheapest partial path?
>>
>> I gave some thought on this point. Overall I feel it does not matter
>> which partial path it should pick up. Right now the partial paths are
>> not ordered. But for non-partial paths sake, we are just choosing the
>> very last path. So in case of mixed paths, leader will get a partial
>> path, but that partial path would not be the cheapest path. But if we
>> also order the partial paths, the same logic would then pick up
>> cheapest partial path. The question is, should we also order the
>> partial paths for the leader ?
>>
>> The only scenario I see where leader choosing cheapest partial path
>> *might* show some benefit, is if there are some partial paths that
>> need to do some startup work using only one worker. I think currently,
>> parallel hash join is one case where it builds the hash table, but I
>> guess here also, we support parallel hash build, but not sure about
>> the status.
>>
>
> You also need to consider how merge join currently works in parallel
> (each worker needs to perform the whole of the work for the right side).

Yes, here if the leader happens to take the right side, it may slow
down the overall merge join. But this seems to be a different case
than the case of high startup costs.

>  I think there could be more scenarios where the startup cost is high
> and a parallel worker needs to do that work independently.

True.

>
>  For such plan, if leader starts it, it would be slow, and
>> no other worker would be able to help it, so its actual startup cost
>> would be drastically increased. (Another path is parallel bitmap heap
>> scan where the leader has to do something and the other workers wait.
>> But here, I think it's not much work for the leader to do). So
>> overall, to handle such cases, it's better for leader to choose a
>> cheapest path, or may be, a path with cheapest startup cost. We can
>> also consider sorting partial paths with decreasing startup cost.
>>
>
> Yeah, that sounds reasonable.

The attached patch sorts partial paths by descending startup cost.
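
For reference, a comparator of roughly this shape can be used for that
ordering (a sketch with made-up names, not necessarily the code in the
attached patch; it assumes a list_qsort()-style helper that hands the
comparator pointers to ListCell pointers):

/* assumes postgres.h and nodes/relation.h for Path, List and lfirst() */
static int
partial_path_startup_cost_cmp(const void *a, const void *b)
{
    Path   *path1 = (Path *) lfirst(*(ListCell **) a);
    Path   *path2 = (Path *) lfirst(*(ListCell **) b);

    /* descending order of startup cost */
    if (path1->startup_cost > path2->startup_cost)
        return -1;
    if (path1->startup_cost < path2->startup_cost)
        return 1;
    return 0;
}

/* e.g. partial_subpaths = list_qsort(partial_subpaths,
 *                                    partial_path_startup_cost_cmp); */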


On 6 October 2017 at 08:49, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>
>> Ok. How about removing pa_all_partial_subpaths altogether , and
>> instead of the below condition :
>>
>> /*
>> * If all the child rels have partial paths, and if the above Parallel
>> * Append path has a mix of partial and non-partial subpaths, then consider
>> * another Parallel Append path which will have *all* partial subpaths.
>> * If enable_parallelappend is off, make this one non-parallel-aware.
>> */
>> if (partial_subpaths_valid && !pa_all_partial_subpaths)
>> ......
>>
>> Use this condition :
>> if (partial_subpaths_valid && pa_nonpartial_subpaths != NIL)
>> ......
>>
>
> Sounds good to me.

Did this. Here is the new condition I used, along with the comments
explaining it:

+        * If parallel append has not been added above, or the added one has a
+        * mix of partial and non-partial subpaths, then consider another
+        * Parallel Append path which will have *all* partial subpaths. We can
+        * add such a path only if all childrels have partial paths in the
+        * first place. This new path will be parallel-aware unless
+        * enable_parallelappend is off.
         */
-       if (partial_subpaths_valid && !pa_all_partial_subpaths)
+       if (partial_subpaths_valid &&
+               (!pa_subpaths_valid || pa_nonpartial_subpaths != NIL))

Also added some test scenarios.

On 6 October 2017 at 12:03, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 6 October 2017 at 08:49, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> One minor point:
>>
>> + if (!node->as_padesc)
>> + {
>> + /*
>> + */
>> + if (!exec_append_seq_next(node))
>> + return ExecClearTuple(node->ps.ps_ResultTupleSlot);
>> + }
>>
>> It seems either you want to add a comment in above part of patch or
>> you just left /**/ mistakenly.
>
> Oops. Yeah, the comment wrapper remained there when I moved its
> content "This is Parallel-aware append. Follow it's own logic ..." out
> of the if block.

Removed the comment wrapper.


Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Wed, Oct 11, 2017 at 8:51 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> [ new patch ]

+         <entry><literal>parallel_append</></entry>
+         <entry>Waiting to choose the next subplan during Parallel Append plan
+         execution.</entry>
+        </row>
+        <row>

Probably needs to update the morerows value of some earlier entry.

+       <primary><varname>enable_parallelappend</> configuration
parameter</primary>

How about enable_parallel_append?

+     * pa_finished : workers currently executing the subplan. A worker which

The way the colon is used here is not a standard comment style for PostgreSQL.

+         * Go on to the "next" subplan. If no more subplans, return the empty
+         * slot set up for us by ExecInitAppend.
+         * Note: Parallel-aware Append follows different logic for choosing the
+         * next subplan.

Formatting looks wrong, and moreover I don't think this is the right
way of handling this comment anyway.  Move the existing comment inside
the if (!node->padesc) block and leave it unchanged; the else block
explains the differences for parallel append.

+ *        ExecAppendEstimate
+ *
+ *        estimates the space required to serialize Append node.

Ugh, this is wrong, but I notice it follows various other
equally-wrong comments for other parallel-aware node types. I guess
I'll go fix that.  We are not serializing the Append node.

I do not think that it's a good idea to call
exec_append_parallel_next() from ExecAppendInitializeDSM,
ExecAppendReInitializeDSM, and ExecAppendInitializeWorker.  We want to
postpone selecting which plan to run until we're actually ready to run
that plan.  Otherwise, for example, the leader might seize a
non-partial plan (if only such plans are included in the Parallel
Append) when it isn't really necessary for it to do so.  If the
workers would've reached the plans and started returning tuples to the
leader before it grabbed a plan, oh well, too bad.  The leader's still
claimed that plan and must now run it.

I concede that's not a high-probability scenario, but I still maintain
that it is better for processes not to claim a subplan until the last
possible moment.  I think we need to initialize as_whichplan as
PA_INVALID plan and then fix it when ExecProcNode() is called for the
first time.

+    if (!IsParallelWorker())

This is not a great test, because it would do the wrong thing if we
ever allowed an SQL function called from a parallel worker to run a
parallel query of its own.  Currently that's not allowed but we might
want to allow it someday.  What we really want to test is whether
we're the leader for *this* query.  Maybe use a flag in the
AppendState for that, and set it correctly in
ExecAppendInitializeWorker.

I think maybe the loop in exec_append_parallel_next should look more like this:

/* Pick the next plan. */
state->as_whichplan = padesc->pa_nextplan;
if (state->as_whichplan != PA_INVALID_PLAN)
{
    int nextplan = state->as_whichplan;

    /* Mark non-partial plans done immediately so that they can't be
       picked again. */
    if (nextplan < first_partial_plan)
        padesc->pa_finished[nextplan] = true;

    /* Figure out what plan the next worker should pick. */
    do
    {
        /* If we've run through all the plans, loop back through
           partial plans only. */
        if (++nextplan >= state->as_nplans)
            nextplan = first_partial_plan;

        /* No plans remaining or tried them all?  Then give up. */
        if (nextplan == state->as_whichplan || nextplan >= state->as_nplans)
        {
            nextplan = PA_INVALID_PLAN;
            break;
        }
    } while (padesc->pa_finished[nextplan]);

    /* Store calculated next plan back into shared memory. */
    padesc->pa_next_plan = nextplan;
}

This might not be exactly right and the comments may need work, but
here are a couple of points:

- As you have it coded, the loop exit condition is whichplan !=
PA_INVALID_PLAN, but that's probably an uncommon case and you have two
other ways out of the loop.  It's easier to understand the code if the
loop condition corresponds to the most common way of exiting the loop,
and any break is for some corner case.

- Don't need a separate call to exec_append_get_next_plan; it's all
handled here (and, I think, pretty compactly).

- No need for pa_first_plan any more.  Looping back to
first_partial_plan is a fine substitute, because by the time anybody
loops around, pa_first_plan would equal first_partial_plan anyway
(unless there's a bug).

- In your code, the value in shared memory is the point at which to
start the search for the next plan.  Here, I made it the value that
the next worker should adopt without question.  Another option would
be to make it the value that the last worker adopted.  I think that
either that option or what I did above are slightly better than what
you have, because as you have it, you've got to use the
increment-with-looping logic in two different places whereas either of
those options only need it in one place, which makes this a little
simpler.

None of this is really a big deal I suppose but I find the logic here
rather sprawling right now and I think we should try to tighten it up
as much as possible.

I only looked over the executor changes on this pass, not the planner stuff.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 13 October 2017 at 00:29, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Oct 11, 2017 at 8:51 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> [ new patch ]
>
> +         <entry><literal>parallel_append</></entry>
> +         <entry>Waiting to choose the next subplan during Parallel Append plan
> +         execution.</entry>
> +        </row>
> +        <row>
>
> Probably needs to update the morerows value of some earlier entry.

From what I observed from the other places, the morerows value is one
less than the number of following entries. I have changed it to 10
since it has 11 entries.

>
> +       <primary><varname>enable_parallelappend</> configuration
> parameter</primary>
>
> How about enable_parallel_append?

Done.

>
> +     * pa_finished : workers currently executing the subplan. A worker which
>
> The way the colon is used here is not a standard comment style for PostgreSQL.

Changed it to "pa_finished:".

>
> +         * Go on to the "next" subplan. If no more subplans, return the empty
> +         * slot set up for us by ExecInitAppend.
> +         * Note: Parallel-aware Append follows different logic for choosing the
> +         * next subplan.
>
> Formatting looks wrong, and moreover I don't think this is the right
> way of handling this comment anyway.  Move the existing comment inside
> the if (!node->padesc) block and leave it unchanged; the else block
> explains the differences for parallel append.

I think the first couple of lines do apply to both parallel-append and
sequential append plans. I have moved the remaining couple of lines
inside the else block.

>
> + *        ExecAppendEstimate
> + *
> + *        estimates the space required to serialize Append node.
>
> Ugh, this is wrong, but I notice it follows various other
> equally-wrong comments for other parallel-aware node types. I guess
> I'll go fix that.  We are not serializing the Append node.

I didn't clearly get this. Do you think it should be "space required to
copy the Append node into the shared memory" ?

>
> I do not think that it's a good idea to call
> exec_append_parallel_next() from ExecAppendInitializeDSM,
> ExecAppendReInitializeDSM, and ExecAppendInitializeWorker.  We want to
> postpone selecting which plan to run until we're actually ready to run
> that plan.  Otherwise, for example, the leader might seize a
> non-partial plan (if only such plans are included in the Parallel
> Append) when it isn't really necessary for it to do so.  If the
> workers would've reached the plans and started returning tuples to the
> leader before it grabbed a plan, oh well, too bad.  The leader's still
> claimed that plan and must now run it.
>
> I concede that's not a high-probability scenario, but I still maintain
> that it is better for processes not to claim a subplan until the last
> possible moment.  I think we need to initialize as_whichplan as
> PA_INVALID plan and then fix it when ExecProcNode() is called for the
> first time.

Done. Set as_whichplan to PA_INVALID_PLAN in
ExecAppendInitializeDSM(), ExecAppendReInitializeDSM() and
ExecAppendInitializeWorker(). Then, when ExecAppend() is called for the
first time, we notice that as_whichplan is PA_INVALID_PLAN, which means
we need to choose the plan.
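
Roughly, the first call then does something like this (a sketch only,
using the helper and field names already mentioned in this thread):

/* Parallel-aware case: nothing chosen yet, so pick a subplan now. */
if (node->as_padesc && node->as_whichplan == PA_INVALID_PLAN)
{
    if (!exec_append_parallel_next(node))
        return ExecClearTuple(node->ps.ps_ResultTupleSlot);
}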

>
> +    if (!IsParallelWorker())
>
> This is not a great test, because it would do the wrong thing if we
> ever allowed an SQL function called from a parallel worker to run a
> parallel query of its own.  Currently that's not allowed but we might
> want to allow it someday.  What we really want to test is whether
> we're the leader for *this* query.  Maybe use a flag in the
> AppendState for that, and set it correctly in
> ExecAppendInitializeWorker.

Done. Set a new AppendState->is_parallel_worker field to true in
ExecAppendInitializeWorker().

>
> I think maybe the loop in exec_append_parallel_next should look more like this:
>
> /* Pick the next plan. */
> state->as_whichplan = padesc->pa_nextplan;
> if (state->as_whichplan != PA_INVALID_PLAN)
> {
>     int nextplan = state->as_whichplan;
>
>     /* Mark non-partial plans done immediately so that they can't be
> picked again. */
>     if (nextplan < first_partial_plan)
>         padesc->pa_finished[nextplan] = true;
>
>     /* Figure out what plan the next worker should pick. */
>     do
>     {
>         /* If we've run through all the plans, loop back through
> partial plans only. */
>         if (++nextplan >= state->as_nplans)
>             nextplan = first_partial_plan;
>
>         /* No plans remaining or tried them all?  Then give up. */
>         if (nextplan == state->as_whichplan || nextplan >= state->as_nplans)
>         {
>             nextplan = PA_INVALID_PLAN;
>             break;
>         }
>     } while (padesc->pa_finished[nextplan]);
>
>     /* Store calculated next plan back into shared memory. */
>     padesc->pa_next_plan = nextplan;
> }
>
> This might not be exactly right and the comments may need work, but
> here are a couple of points:
>
> - As you have it coded, the loop exit condition is whichplan !=
> PA_INVALID_PLAN, but that's probably an uncommon case and you have two
> other ways out of the loop.  It's easier to understand the code if the
> loop condition corresponds to the most common way of exiting the loop,
> and any break is for some corner case.
>
> - Don't need a separate call to exec_append_get_next_plan; it's all
> handled here (and, I think, pretty compactly).

Got rid of exec_append_get_next_plan() and having to do that logic twice.

>
> - No need for pa_first_plan any more.  Looping back to
> first_partial_plan is a fine substitute, because by the time anybody
> loops around, pa_first_plan would equal first_partial_plan anyway
> (unless there's a bug).

Yeah, I agree. Got rid of pa_first_plan.

>
> - In your code, the value in shared memory is the point at which to
> start the search for the next plan.  Here, I made it the value that
> the next worker should adopt without question.

I was considering this option, but found out that we *have* to return
from exec_append_parallel_next() with this next worker chosen. Now if
the leader happens to reach this plan and finish it, and then for the
workers the padesc->pa_next_plan happens to point to this same plan,
we need to return some other plan.

> Another option would
> be to make it the value that the last worker adopted.

Here, we need to think of an initial value of pa_next_plan when
workers haven't yet started. It can be PA_INVALID_PLAN, but I felt
this does not convey clearly whether none of the plans have started
yet, or all plans have ended.

> I think that
> either that option or what I did above are slightly better than what
> you have, because as you have it, you've got to use the
> increment-with-looping logic in two different places whereas either of
> those options only need it in one place, which makes this a little
> simpler.

The way I have now used the logic more or less looks like the code you
showed above. The differences are :

The padesc->pa_next_plan still points to the plan from which to search
for an unfinished plan. But what's changed is : I keep track of
whichplan and also a nextplan position while searching for the plan.
So even if we find an unfinished plan, there will be a nextplan
pointing appropriately. If the whichplan is a finished one, in the
next iteration nextplan value is assigned to whichplan. This way I
avoided having to separately call the wrap-around logic again outside
of the search loop.

Another additional logic added is : While searching, if whichplan
still points to a non-partial plan, and the backend has already
finished the partial plans and the remaining non-partial plan, then
this condition is not enough to break out of the loop :
>         if (++nextplan >= state->as_nplans)
>             nextplan = first_partial_plan;
>         /* No plans remaining or tried them all?  Then give up. */
>         if (nextplan == state->as_whichplan || nextplan >= state->as_nplans)
>         {
>             nextplan = PA_INVALID_PLAN;
>             break;
>         }
This is because, the initial plan with which we started is a
non-partial plan. So above, nextplan never becomes state->as_whichplan
because state->as_whichplan would always be less than
first_partial_plan.

So I have split the break condition into two conditions, one of which
is for wrap-around case :

if (whichplan + 1 == state->as_nplans)
{
    nextplan = first_partial_plan;
    /*
     * If we had started from a non-partial plan, that means we have
     * searched all the nonpartial and partial plans.
     */
    if (initial_plan <= first_partial_plan)
        break;
}
else
{
    nextplan = whichplan + 1;

    /* Have we made a full circle ? */
    if (nextplan == initial_plan)
        break;
}

Also, we need to consider the possibility that the next plan to be
chosen can even be the same plan that we have started with. This
happens when there is only one unfinished partial plan remaining. So
we should not unconditionally do "nextplan = PA_INVALID_PLAN" if
(nextplan == state->as_whichplan). The changes in the patch consider
this (this was also considered in the previous versions).

Where we set node->as_padesc->pa_finished to true for a partial plan,
I have wrapped it with LWLock acquire and release calls. This is
especially because now we use this field while deciding whether
nextplan is to be set to PA_INVALID_PLAN. I guess this might not be
required for correctness, but it looks unsafe to have the pa_finished
value changing while we make decisions that depend on it.
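
The guarded update looks roughly like this (a sketch; the pa_lock field
name is an assumption here):

/* protect pa_finished while marking the current partial subplan done */
LWLockAcquire(&padesc->pa_lock, LW_EXCLUSIVE);
padesc->pa_finished[node->as_whichplan] = true;
LWLockRelease(&padesc->pa_lock);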


Attached v18 patch.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Thu, Oct 19, 2017 at 9:05 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> + *        ExecAppendEstimate
>> + *
>> + *        estimates the space required to serialize Append node.
>>
>> Ugh, this is wrong, but I notice it follows various other
>> equally-wrong comments for other parallel-aware node types. I guess
>> I'll go fix that.  We are not serializing the Append node.
>
> I didn't clearly get this. Do you think it should be "space required to
> copy the Append node into the shared memory" ?

No, because the Append node is *NOT* getting copied into shared
memory.  I have pushed a comment update to the existing functions; you
can use the same comment for this patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sat, Oct 28, 2017 at 5:50 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> No, because the Append node is *NOT* getting copied into shared
> memory.  I have pushed a comment update to the existing functions; you
> can use the same comment for this patch.

I spent the last several days working on this patch, which had a
number of problems both cosmetic and functional.  I think the attached
is in better shape now, but it could certainly use some more review
and testing since I only just finished modifying it, and I modified it
pretty heavily.  Changes:

- I fixed the "morerows" entries in the documentation.  If you had
built the documentation the way you had it and loaded up in a web
browser, you would have seen that the way you had it was not correct.

- I moved T_AppendState to a different position in the switch inside
ExecParallelReInitializeDSM, so as to keep that switch in the same
order as all of the other switch statements in that file.

- I rewrote the comment for pa_finished.  It previously began with
"workers currently executing the subplan", which is not an accurate
description. I suspect this was a holdover from a previous version of
the patch in which this was an array of integers rather than an array
of type bool.  I also fixed the comment in ExecAppendEstimate and
added, removed, or rewrote various other comments as well.

- I renamed PA_INVALID_PLAN to INVALID_SUBPLAN_INDEX, which I think is
more clear and allows for the possibility that this sentinel value
might someday be used for non-parallel-aware Append plans.

- I largely rewrote the code for picking the next subplan.  A
superficial problem with the way that you had it is that you had
renamed exec_append_initialize_next to exec_append_seq_next but not
updated the header comment to match.  Also, the logic was spread out
all over the file.  There are three cases: not parallel aware, leader,
worker.  You had the code for the first case at the top of the file
and the other two cases at the bottom of the file and used multiple
"if" statements to pick the right one in each case.  I replaced all
that with a function pointer stored in the AppendState, moved the code
so it's all together, and rewrote it in a way that I find easier to
understand.  I also changed the naming convention.  (A rough sketch of
this function-pointer arrangement appears after this list of changes.)

- I renamed pappend_len to pstate_len and ParallelAppendDescData to
ParallelAppendState.  I think the use of the word "descriptor" is a
carryover from the concept of a scan descriptor.  There's nothing
really wrong with inventing the concept of an "append descriptor", but
it seems more clear to just refer to shared state.

- I fixed ExecAppendReInitializeDSM not to reset node->as_whichplan.
Per commit 41b0dd987d44089dc48e9c70024277e253b396b7, that's wrong;
instead, local state should be reset in ExecReScanAppend.  I installed
what I believe to be the correct logic in that function instead.

- I fixed list_qsort() so that it copies the type of the old list into
the new list.  Otherwise, sorting a list of type T_IntList or
T_OidList would turn it into just plain T_List, which is wrong.

- I removed get_append_num_workers and integrated the logic into the
callers.  This function was coded quite strangely: it assigned the
return value of fls() to a double and then eventually rounded the
result back to an integer.  But fls() returns an integer, so this
doesn't make much sense.  On a related note, I made it use fls(# of
subpaths) instead of fls(# of subpaths)+1.  Adding 1 doesn't make
sense to me here because it leads to a decision to use 2 workers for a
single, non-partial subpath.  I suspect both of these mistakes stem
from thinking that fls() returns the base-2 logarithm, but in fact it
doesn't, quite: log2(1) = 0.0 but fls(1) = 1.

- In the process of making the changes described in the previous
point, I added a couple of assertions, one of which promptly failed.
It turns out the reason is that your patch didn't update
accumulate_append_subpaths(), which can result in flattening
non-partial paths from a Parallel Append into a parent Append's list
of partial paths, which is bad.  The easiest way to fix that would be
to just teach accumulate_append_subpaths() not to flatten a Parallel
Append into a parent Append or MergeAppend node, but it seemed to me
that there was a fair amount of duplication between
accumulate_partialappend_subpath() and accumulate_append_subpaths, so
what I did instead is folded all of the necessarily logic directly
into accumulate_append_subpaths().  This approach also avoids some
assumptions that your code made, such as the assumption that we will
never have a partial MergeAppend path.

- I changed create_append_path() so that it uses the parallel_aware
argument as the only determinant of whether the output path is flagged
as parallel-aware. Your version also considered whether
parallel_workers > 0, but I think that's not a good idea; the caller
should pass the correct value for parallel_aware rather than relying
on this function to fix it.  Possibly you indirectly encountered the
problem mentioned in the previous point and worked around it like
this, or maybe there was some other reason, but it doesn't seem to be
necessary.

- I changed things around to enforce the rule that all partial paths
added to an appendrel must use the same row count estimate.  (This is
not a new coding rule, but this patch provides a new way to violate
it.) I did that by forcing the row-count for any parallel append
mixing partial and non-partial paths to use the same row count as the
row already added. I also changed the way the row count is calculated
in the case where the only parallel append path mixes partial and
non-partial plans; I think this way is more consistent with what we've
done elsewhere.  This amounts to the assumption that we're trying to
estimate the average number of rows per worker rather than the largest
possible number; I'm not sure what the best thing to do here is in
theory, but one advantage of this approach is that I think it will
produce answers closer to the value we get for an all-partial-paths
append.  That's good, because we don't want the row-count estimate to
change precipitously based on whether an all-partial-paths append is
possible.
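
For illustration, an "average rows per worker" estimate of the kind
described above could be sketched as follows; the leader-contribution
factor and the final clamp are assumptions made for this example, not the
committed planner formula.

#include <math.h>

static double
rows_per_worker(double total_rows, int parallel_workers)
{
    double  divisor = (double) parallel_workers;

    /* Assume the leader also helps out when there are only a few workers. */
    double  leader_contribution = 1.0 - 0.3 * parallel_workers;

    if (leader_contribution > 0.0)
        divisor += leader_contribution;

    /* Row estimates are conventionally clamped to at least one row. */
    return fmax(1.0, rint(total_rows / divisor));
}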

- I fixed some whitespace problems by running pgindent on various
files and manually breaking some long lines.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Attachment

Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
Thanks a lot Robert for the patch. I will have a look. Quickly tried
to test some aggregate queries with a partitioned pgbench_accounts
table, and it is crashing. Will get back with the fix, and any other
review comments.

Thanks
-Amit Khandekar

On 9 November 2017 at 23:44, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Oct 28, 2017 at 5:50 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> No, because the Append node is *NOT* getting copied into shared
>> memory.  I have pushed a comment update to the existing functions; you
>> can use the same comment for this patch.
>
> I spent the last several days working on this patch, which had a
> number of problems both cosmetic and functional.  I think the attached
> is in better shape now, but it could certainly use some more review
> and testing since I only just finished modifying it, and I modified it
> pretty heavily.  Changes:
>
> - I fixed the "morerows" entries in the documentation.  If you had
> built the documentation the way you had it and loaded up in a web
> browser, you would have seen that the way you had it was not correct.
>
> - I moved T_AppendState to a different position in the switch inside
> ExecParallelReInitializeDSM, so as to keep that switch in the same
> order as all of the other switch statements in that file.
>
> - I rewrote the comment for pa_finished.  It previously began with
> "workers currently executing the subplan", which is not an accurate
> description. I suspect this was a holdover from a previous version of
> the patch in which this was an array of integers rather than an array
> of type bool.  I also fixed the comment in ExecAppendEstimate and
> added, removed, or rewrote various other comments as well.
>
> - I renamed PA_INVALID_PLAN to INVALID_SUBPLAN_INDEX, which I think is
> more clear and allows for the possibility that this sentinel value
> might someday be used for non-parallel-aware Append plans.
>
> - I largely rewrote the code for picking the next subplan.  A
> superficial problem with the way that you had it is that you had
> renamed exec_append_initialize_next to exec_append_seq_next but not
> updated the header comment to match.  Also, the logic was spread out
> all over the file.  There are three cases: not parallel aware, leader,
> worker.  You had the code for the first case at the top of the file
> and the other two cases at the bottom of the file and used multiple
> "if" statements to pick the right one in each case.  I replaced all
> that with a function pointer stored in the AppendState, moved the code
> so it's all together, and rewrote it in a way that I find easier to
> understand.  I also changed the naming convention.
>
> - I renamed pappend_len to pstate_len and ParallelAppendDescData to
> ParallelAppendState.  I think the use of the word "descriptor" is a
> carryover from the concept of a scan descriptor.  There's nothing
> really wrong with inventing the concept of an "append descriptor", but
> it seems more clear to just refer to shared state.
>
> - I fixed ExecAppendReInitializeDSM not to reset node->as_whichplan.
> Per commit 41b0dd987d44089dc48e9c70024277e253b396b7, that's wrong;
> instead, local state should be reset in ExecReScanAppend.  I installed
> what I believe to be the correct logic in that function instead.
>
> - I fixed list_qsort() so that it copies the type of the old list into
> the new list.  Otherwise, sorting a list of type T_IntList or
> T_OidList would turn it into just plain T_List, which is wrong.
>
> - I removed get_append_num_workers and integrated the logic into the
> callers.  This function was coded quite strangely: it assigned the
> return value of fls() to a double and then eventually rounded the
> result back to an integer.  But fls() returns an integer, so this
> doesn't make much sense.  On a related note, I made it use fls(# of
> subpaths) instead of fls(# of subpaths)+1.  Adding 1 doesn't make
> sense to me here because it leads to a decision to use 2 workers for a
> single, non-partial subpath.  I suspect both of these mistakes stem
> from thinking that fls() returns the base-2 logarithm, but in fact it
> doesn't, quite: log2(1) = 0.0 but fls(1) = 1.
>
> - In the process of making the changes described in the previous
> point, I added a couple of assertions, one of which promptly failed.
> It turns out the reason is that your patch didn't update
> accumulate_append_subpaths(), which can result in flattening
> non-partial paths from a Parallel Append into a parent Append's list
> of partial paths, which is bad.  The easiest way to fix that would be
> to just teach accumulate_append_subpaths() not to flatten a Parallel
> Append into a parent Append or MergeAppend node, but it seemed to me
> that there was a fair amount of duplication between
> accumulate_partialappend_subpath() and accumulate_append_subpaths, so
> what I did instead is folded all of the necessarily logic directly
> into accumulate_append_subpaths().  This approach also avoids some
> assumptions that your code made, such as the assumption that we will
> never have a partial MergeAppend path.
>
> - I changed create_append_path() so that it uses the parallel_aware
> argument as the only determinant of whether the output path is flagged
> as parallel-aware. Your version also considered whether
> parallel_workers > 0, but I think that's not a good idea; the caller
> should pass the correct value for parallel_aware rather than relying
> on this function to fix it.  Possibly you indirectly encountered the
> problem mentioned in the previous point and worked around it like
> this, or maybe there was some other reason, but it doesn't seem to be
> necessary.
>
> - I changed things around to enforce the rule that all partial paths
> added to an appendrel must use the same row count estimate.  (This is
> not a new coding rule, but this patch provides a new way to violate
> it.) I did that by forcing the row-count for any parallel append
> mixing partial and non-partial paths to use the same row count as the
> row already added. I also changed the way the row count is calculated
> in the case where the only parallel append path mixes partial and
> non-partial plans; I think this way is more consistent with what we've
> done elsewhere.  This amounts to the assumption that we're trying to
> estimate the average number of rows per worker rather than the largest
> possible number; I'm not sure what the best thing to do here is in
> theory, but one advantage of this approach is that I think it will
> produce answers closer to the value we get for an all-partial-paths
> append.  That's good, because we don't want the row-count estimate to
> change precipitously based on whether an all-partial-paths append is
> possible.
>
> - I fixed some whitespace problems by running pgindent on various
> files and manually breaking some long lines.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company



Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:
On Mon, Nov 13, 2017 at 12:54 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> Thanks a lot Robert for the patch. I will have a look. Quickly tried
> to test some aggregate queries with a partitioned pgbench_accounts
> table, and it is crashing. Will get back with the fix, and any other
> review comments.
>
> Thanks
> -Amit Khandekar

I was trying to get the performance of this patch at commit id -
11e264517dff7a911d9e6494de86049cab42cde3 and TPC-H scale factor 20
with the following parameter settings,
work_mem = 1 GB
shared_buffers = 10GB
effective_cache_size = 10GB
max_parallel_workers_per_gather = 4
enable_partitionwise_join = on

and the details of the partitioning scheme is as follows,
tables partitioned = lineitem on l_orderkey and orders on o_orderkey
number of partitions in each table = 10

As per the explain outputs, PA was used in the following queries: 1, 3, 4,
5, 6, 7, 8, 10, 12, 14, 15, 18, and 21.
Unfortunately, executing any of these queries crashes with the
following information in the core dump of each of the workers,

Program terminated with signal 11, Segmentation fault.
#0  0x0000000010600984 in pg_atomic_read_u32_impl (ptr=0x3ffffec29294)
at ../../../../src/include/port/atomics/generic.h:48
48 return ptr->value;

In case this is a different issue from the one you pointed out upthread,
you may want to have a look at this as well.
Please let me know if you need any more information in this regard.



-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 21 November 2017 at 12:44, Rafia Sabih <rafia.sabih@enterprisedb.com> wrote:
> On Mon, Nov 13, 2017 at 12:54 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> Thanks a lot Robert for the patch. I will have a look. Quickly tried
>> to test some aggregate queries with a partitioned pgbench_accounts
>> table, and it is crashing. Will get back with the fix, and any other
>> review comments.
>>
>> Thanks
>> -Amit Khandekar
>
> I was trying to get the performance of this patch at commit id -
> 11e264517dff7a911d9e6494de86049cab42cde3 and TPC-H scale factor 20
> with the following parameter settings,
> work_mem = 1 GB
> shared_buffers = 10GB
> effective_cache_size = 10GB
> max_parallel_workers_per_gather = 4
> enable_partitionwise_join = on
>
> and the details of the partitioning scheme is as follows,
> tables partitioned = lineitem on l_orderkey and orders on o_orderkey
> number of partitions in each table = 10
>
> As per the explain outputs PA was used in following queries- 1, 3, 4,
> 5, 6, 7, 8, 10, 12, 14, 15, 18, and 21.
> Unfortunately, at the time of executing any of these query, it is
> crashing with the following information in  core dump of each of the
> workers,
>
> Program terminated with signal 11, Segmentation fault.
> #0  0x0000000010600984 in pg_atomic_read_u32_impl (ptr=0x3ffffec29294)
> at ../../../../src/include/port/atomics/generic.h:48
> 48 return ptr->value;
>
> In case this a different issue as you pointed upthread, you may want
> to have a look at this as well.
> Please let me know if you need any more information in this regard.

Right, for me the crash had occurred with a similar stack, although
the real crash happened in one of the workers. Attached is the script
file
pgbench_partitioned.sql to create a schema with which I had reproduced
the crash.

The query that crashed :
select sum(aid), avg(aid) from pgbench_accounts;

Set max_parallel_workers_per_gather to at least 5.

Also attached is v19 patch rebased.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company

Attachment

Re: [HACKERS] Parallel Append implementation

From
amul sul
Date:
On Tue, Nov 21, 2017 at 2:22 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 21 November 2017 at 12:44, Rafia Sabih <rafia.sabih@enterprisedb.com> wrote:
>> On Mon, Nov 13, 2017 at 12:54 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>>> Thanks a lot Robert for the patch. I will have a look. Quickly tried
>>> to test some aggregate queries with a partitioned pgbench_accounts
>>> table, and it is crashing. Will get back with the fix, and any other
>>> review comments.
>>>
>>> Thanks
>>> -Amit Khandekar
>>
>> I was trying to get the performance of this patch at commit id -
>> 11e264517dff7a911d9e6494de86049cab42cde3 and TPC-H scale factor 20
>> with the following parameter settings,
>> work_mem = 1 GB
>> shared_buffers = 10GB
>> effective_cache_size = 10GB
>> max_parallel_workers_per_gather = 4
>> enable_partitionwise_join = on
>>
>> and the details of the partitioning scheme is as follows,
>> tables partitioned = lineitem on l_orderkey and orders on o_orderkey
>> number of partitions in each table = 10
>>
>> As per the explain outputs PA was used in following queries- 1, 3, 4,
>> 5, 6, 7, 8, 10, 12, 14, 15, 18, and 21.
>> Unfortunately, at the time of executing any of these query, it is
>> crashing with the following information in  core dump of each of the
>> workers,
>>
>> Program terminated with signal 11, Segmentation fault.
>> #0  0x0000000010600984 in pg_atomic_read_u32_impl (ptr=0x3ffffec29294)
>> at ../../../../src/include/port/atomics/generic.h:48
>> 48 return ptr->value;
>>
>> In case this a different issue as you pointed upthread, you may want
>> to have a look at this as well.
>> Please let me know if you need any more information in this regard.
>
> Right, for me the crash had occurred with a similar stack, although
> the real crash happened in one of the workers. Attached is the script
> file
> pgbench_partitioned.sql to create a schema with which I had reproduced
> the crash.
>
> The query that crashed :
> select sum(aid), avg(aid) from pgbench_accounts;
>
> Set max_parallel_workers_per_gather to at least 5.
>
> Also attached is v19 patch rebased.
>

I've spent a little time debugging this crash. The crash happens in
ExecAppend() because the subnode in the node->appendplans array is
referenced using an incorrect (out-of-bounds) array index in the
following code:

        /*
         * figure out which subplan we are currently processing
         */
        subnode = node->appendplans[node->as_whichplan];

This incorrect value gets assigned to node->as_whichplan in
choose_next_subplan_for_worker().

The following change on top of the v19 patch fixes it for me:

--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -489,11 +489,9 @@ choose_next_subplan_for_worker(AppendState *node)
    }

    /* Pick the plan we found, and advance pa_next_plan one more time. */
-   node->as_whichplan = pstate->pa_next_plan;
+   node->as_whichplan = pstate->pa_next_plan++;
    if (pstate->pa_next_plan == node->as_nplans)
        pstate->pa_next_plan = append->first_partial_plan;
-   else
-       pstate->pa_next_plan++;

    /* If non-partial, immediately mark as finished. */
    if (node->as_whichplan < append->first_partial_plan)

The attached patch makes the same changes to Amit's ParallelAppend_v19_rebased.patch.

Regards,
Amul

Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Tue, Nov 21, 2017 at 6:57 AM, amul sul <sulamul@gmail.com> wrote:
> By doing following change on the v19 patch does the fix for me:
>
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -489,11 +489,9 @@ choose_next_subplan_for_worker(AppendState *node)
>     }
>
>     /* Pick the plan we found, and advance pa_next_plan one more time. */
> -   node->as_whichplan = pstate->pa_next_plan;
> +   node->as_whichplan = pstate->pa_next_plan++;
>     if (pstate->pa_next_plan == node->as_nplans)
>         pstate->pa_next_plan = append->first_partial_plan;
> -   else
> -       pstate->pa_next_plan++;
>
>     /* If non-partial, immediately mark as finished. */
>     if (node->as_whichplan < append->first_partial_plan)
>
> Attaching patch does same changes to Amit's ParallelAppend_v19_rebased.patch.

Yes, that looks like a correct fix.  Thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Parallel Append implementation

From
amul sul
Date:
On Wed, Nov 22, 2017 at 1:44 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 21, 2017 at 6:57 AM, amul sul <sulamul@gmail.com> wrote:
>> By doing following change on the v19 patch does the fix for me:
>>
>> --- a/src/backend/executor/nodeAppend.c
>> +++ b/src/backend/executor/nodeAppend.c
>> @@ -489,11 +489,9 @@ choose_next_subplan_for_worker(AppendState *node)
>>     }
>>
>>     /* Pick the plan we found, and advance pa_next_plan one more time. */
>> -   node->as_whichplan = pstate->pa_next_plan;
>> +   node->as_whichplan = pstate->pa_next_plan++;
>>     if (pstate->pa_next_plan == node->as_nplans)
>>         pstate->pa_next_plan = append->first_partial_plan;
>> -   else
>> -       pstate->pa_next_plan++;
>>
>>     /* If non-partial, immediately mark as finished. */
>>     if (node->as_whichplan < append->first_partial_plan)
>>
>> Attaching patch does same changes to Amit's ParallelAppend_v19_rebased.patch.
>
> Yes, that looks like a correct fix.  Thanks.
>

Attaching an updated version of "ParallelAppend_v19_rebased" that includes this fix.

Regards,
Amul

Attachment

Re: [HACKERS] Parallel Append implementation

From
Rajkumar Raghuwanshi
Date:
On Thu, Nov 23, 2017 at 9:45 AM, amul sul <sulamul@gmail.com> wrote:
Attaching updated version of "ParallelAppend_v19_rebased" includes this fix.

Hi,

I have applied the attached patch and got a crash with the below query. Please take a look.

CREATE TABLE tbl (a int, b int, c text, d int) PARTITION BY LIST(c);
CREATE TABLE tbl_p1 PARTITION OF tbl FOR VALUES IN ('0000', '0001', '0002', '0003');
CREATE TABLE tbl_p2 PARTITION OF tbl FOR VALUES IN ('0004', '0005', '0006', '0007');
CREATE TABLE tbl_p3 PARTITION OF tbl FOR VALUES IN ('0008', '0009', '0010', '0011');
INSERT INTO tbl SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i % 30 FROM generate_series(0, 9999999) i;
ANALYZE tbl;

EXPLAIN ANALYZE SELECT c, sum(a), avg(b), COUNT(*) FROM tbl GROUP BY c HAVING avg(d) < 15 ORDER BY 1, 2, 3;
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>


stack-trace is given below.

Reading symbols from /lib64/libnss_files.so.2...Reading symbols from /usr/lib/debug/lib64/libnss_files-2.12.so.debug...done.
done.
Loaded symbols for /lib64/libnss_files.so.2
Core was generated by `postgres: parallel worker for PID 104999                 '.
Program terminated with signal 11, Segmentation fault.
#0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at ../../../src/include/executor/executor.h:238
238        if (node->chgParam != NULL) /* something changed? */
Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64 libcom_err-1.41.12-23.el6.x86_64 libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at ../../../src/include/executor/executor.h:238
#1  0x00000000006dc72e in ExecAppend (pstate=0x1947ed0) at nodeAppend.c:207
#2  0x00000000006d1e7c in ExecProcNodeInstr (node=0x1947ed0) at execProcnode.c:446
#3  0x00000000006dcef1 in ExecProcNode (node=0x1947ed0) at ../../../src/include/executor/executor.h:241
#4  0x00000000006dd398 in fetch_input_tuple (aggstate=0x1947fe8) at nodeAgg.c:699
#5  0x00000000006e02f7 in agg_fill_hash_table (aggstate=0x1947fe8) at nodeAgg.c:2536
#6  0x00000000006dfb37 in ExecAgg (pstate=0x1947fe8) at nodeAgg.c:2148
#7  0x00000000006d1e7c in ExecProcNodeInstr (node=0x1947fe8) at execProcnode.c:446
#8  0x00000000006d1e4d in ExecProcNodeFirst (node=0x1947fe8) at execProcnode.c:430
#9  0x00000000006c9439 in ExecProcNode (node=0x1947fe8) at ../../../src/include/executor/executor.h:241
#10 0x00000000006cbd73 in ExecutePlan (estate=0x1947590, planstate=0x1947fe8, use_parallel_mode=0 '\000', operation=CMD_SELECT, sendTuples=1 '\001', numberTuples=0,
    direction=ForwardScanDirection, dest=0x192acb0, execute_once=1 '\001') at execMain.c:1718
#11 0x00000000006c9a12 in standard_ExecutorRun (queryDesc=0x194ffc0, direction=ForwardScanDirection, count=0, execute_once=1 '\001') at execMain.c:361
#12 0x00000000006c982e in ExecutorRun (queryDesc=0x194ffc0, direction=ForwardScanDirection, count=0, execute_once=1 '\001') at execMain.c:304
#13 0x00000000006d096c in ParallelQueryMain (seg=0x18aa2a8, toc=0x7f899a227000) at execParallel.c:1271
#14 0x000000000053272d in ParallelWorkerMain (main_arg=1218206688) at parallel.c:1149
#15 0x00000000007e8ca5 in StartBackgroundWorker () at bgworker.c:841
#16 0x00000000007fc035 in do_start_bgworker (rw=0x18ced00) at postmaster.c:5741
#17 0x00000000007fc377 in maybe_start_bgworkers () at postmaster.c:5945
#18 0x00000000007fb406 in sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5134
#19 <signal handler called>
#20 0x0000003dd26e1603 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:82
#21 0x00000000007f6bfa in ServerLoop () at postmaster.c:1721
#22 0x00000000007f63e9 in PostmasterMain (argc=3, argv=0x18a8180) at postmaster.c:1365
#23 0x000000000072cb4c in main (argc=3, argv=0x18a8180) at main.c:228
(gdb)


Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

Re: [HACKERS] Parallel Append implementation

From
amul sul
Date:
Looks like it is the same crash that v20 claimed to have fixed; indeed, I
missed adding fix [1] to the v20 patch, sorry about that. The attached
updated patch includes the aforementioned fix.


1] http://postgr.es/m/CAAJ_b97kLNW8Z9nvc_JUUG5wVQUXvG=f37WsX8ALF0A=KAHh3w@mail.gmail.com


Regards,
Amul

On Thu, Nov 23, 2017 at 1:50 PM, Rajkumar Raghuwanshi
<rajkumar.raghuwanshi@enterprisedb.com> wrote:
> On Thu, Nov 23, 2017 at 9:45 AM, amul sul <sulamul@gmail.com> wrote:
>>
>> Attaching updated version of "ParallelAppend_v19_rebased" includes this
>> fix.
>
>
> Hi,
>
> I have applied attached patch and got a crash with below query. please take
> a look.
>
> CREATE TABLE tbl (a int, b int, c text, d int) PARTITION BY LIST(c);
> CREATE TABLE tbl_p1 PARTITION OF tbl FOR VALUES IN ('0000', '0001', '0002',
> '0003');
> CREATE TABLE tbl_p2 PARTITION OF tbl FOR VALUES IN ('0004', '0005', '0006',
> '0007');
> CREATE TABLE tbl_p3 PARTITION OF tbl FOR VALUES IN ('0008', '0009', '0010',
> '0011');
> INSERT INTO tbl SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i % 30
> FROM generate_series(0, 9999999) i;
> ANALYZE tbl;
>
> EXPLAIN ANALYZE SELECT c, sum(a), avg(b), COUNT(*) FROM tbl GROUP BY c
> HAVING avg(d) < 15 ORDER BY 1, 2, 3;
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back the
> current transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> HINT:  In a moment you should be able to reconnect to the database and
> repeat your command.
> server closed the connection unexpectedly
>     This probably means the server terminated abnormally
>     before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !>
>
>
> stack-trace is given below.
>
> Reading symbols from /lib64/libnss_files.so.2...Reading symbols from
> /usr/lib/debug/lib64/libnss_files-2.12.so.debug...done.
> done.
> Loaded symbols for /lib64/libnss_files.so.2
> Core was generated by `postgres: parallel worker for PID 104999
> '.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
> ../../../src/include/executor/executor.h:238
> 238        if (node->chgParam != NULL) /* something changed? */
> Missing separate debuginfos, use: debuginfo-install
> keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64
> libcom_err-1.41.12-23.el6.x86_64 libselinux-2.0.94-7.el6.x86_64
> openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64
> (gdb) bt
> #0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
> ../../../src/include/executor/executor.h:238
> #1  0x00000000006dc72e in ExecAppend (pstate=0x1947ed0) at nodeAppend.c:207
> #2  0x00000000006d1e7c in ExecProcNodeInstr (node=0x1947ed0) at
> execProcnode.c:446
> #3  0x00000000006dcef1 in ExecProcNode (node=0x1947ed0) at
> ../../../src/include/executor/executor.h:241
> #4  0x00000000006dd398 in fetch_input_tuple (aggstate=0x1947fe8) at
> nodeAgg.c:699
> #5  0x00000000006e02f7 in agg_fill_hash_table (aggstate=0x1947fe8) at
> nodeAgg.c:2536
> #6  0x00000000006dfb37 in ExecAgg (pstate=0x1947fe8) at nodeAgg.c:2148
> #7  0x00000000006d1e7c in ExecProcNodeInstr (node=0x1947fe8) at
> execProcnode.c:446
> #8  0x00000000006d1e4d in ExecProcNodeFirst (node=0x1947fe8) at
> execProcnode.c:430
> #9  0x00000000006c9439 in ExecProcNode (node=0x1947fe8) at
> ../../../src/include/executor/executor.h:241
> #10 0x00000000006cbd73 in ExecutePlan (estate=0x1947590,
> planstate=0x1947fe8, use_parallel_mode=0 '\000', operation=CMD_SELECT,
> sendTuples=1 '\001', numberTuples=0,
>     direction=ForwardScanDirection, dest=0x192acb0, execute_once=1 '\001')
> at execMain.c:1718
> #11 0x00000000006c9a12 in standard_ExecutorRun (queryDesc=0x194ffc0,
> direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
> execMain.c:361
> #12 0x00000000006c982e in ExecutorRun (queryDesc=0x194ffc0,
> direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
> execMain.c:304
> #13 0x00000000006d096c in ParallelQueryMain (seg=0x18aa2a8,
> toc=0x7f899a227000) at execParallel.c:1271
> #14 0x000000000053272d in ParallelWorkerMain (main_arg=1218206688) at
> parallel.c:1149
> #15 0x00000000007e8ca5 in StartBackgroundWorker () at bgworker.c:841
> #16 0x00000000007fc035 in do_start_bgworker (rw=0x18ced00) at
> postmaster.c:5741
> #17 0x00000000007fc377 in maybe_start_bgworkers () at postmaster.c:5945
> #18 0x00000000007fb406 in sigusr1_handler (postgres_signal_arg=10) at
> postmaster.c:5134
> #19 <signal handler called>
> #20 0x0000003dd26e1603 in __select_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> #21 0x00000000007f6bfa in ServerLoop () at postmaster.c:1721
> #22 0x00000000007f63e9 in PostmasterMain (argc=3, argv=0x18a8180) at
> postmaster.c:1365
> #23 0x000000000072cb4c in main (argc=3, argv=0x18a8180) at main.c:228
> (gdb)
>
>
> Thanks & Regards,
> Rajkumar Raghuwanshi
> QMG, EnterpriseDB Corporation

Attachment

Re: [HACKERS] Parallel Append implementation

From
Rafia Sabih
Date:


On Tue, Nov 21, 2017 at 5:27 PM, amul sul <sulamul@gmail.com> wrote:
>
> I've spent little time to debug this crash. The crash happens in ExecAppend()
> due to subnode in node->appendplans array is referred using incorrect
> array index (out of bound value) in the following code:
>
>         /*
>          * figure out which subplan we are currently processing
>          */
>         subnode = node->appendplans[node->as_whichplan];
>
> This incorrect value to node->as_whichplan is get assigned in the
> choose_next_subplan_for_worker().
>
> By doing following change on the v19 patch does the fix for me:
>
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -489,11 +489,9 @@ choose_next_subplan_for_worker(AppendState *node)
>     }
>
>     /* Pick the plan we found, and advance pa_next_plan one more time. */
> -   node->as_whichplan = pstate->pa_next_plan;
> +   node->as_whichplan = pstate->pa_next_plan++;
>     if (pstate->pa_next_plan == node->as_nplans)
>         pstate->pa_next_plan = append->first_partial_plan;
> -   else
> -       pstate->pa_next_plan++;
>
>     /* If non-partial, immediately mark as finished. */
>     if (node->as_whichplan < append->first_partial_plan)
>
> Attaching patch does same changes to Amit's ParallelAppend_v19_rebased.patch.
>
Thanks for the patch, I tried it and it worked fine for me. The performance numbers for this patch are as follows,

Query |        head |       Patch
    1 |   241633.69 |  243916.798
    3 |   74000.394 |   75966.013
    4 |    12241.87 |   12310.405
    5 |    65190.68 |   64968.069
    6 |    8718.477 |     7150.98
    7 |   69920.367 |   68504.058
    8 |   21722.406 |   21488.255
   10 |     37807.3 |   36308.253
   12 |   40654.877 |   36532.134
   14 |   10910.043 |    9982.559
   15 |   57074.768 |   51328.908
   18 |  293655.538 |   282611.02
   21 | 1905000.232 | 1803922.924
All the values of execution time are in ms. The setup used for the experiment is the same as mentioned upthread,
I was trying to get the performance of this patch at commit id -
11e264517dff7a911d9e6494de86049cab42cde3 and TPC-H scale factor 20
with the following parameter settings,
work_mem = 1 GB
shared_buffers = 10GB
effective_cache_size = 10GB
max_parallel_workers_per_gather = 4
enable_partitionwise_join = on

and the details of the partitioning scheme is as follows,
tables partitioned = lineitem on l_orderkey and orders on o_orderkey
number of partitions in each table = 10

Please find the attached zip for the explain analyse outputs for head and patch for the above-mentioned queries.

Overall, performance-wise the patch doesn't add much, maybe because of the scale factor; I don't know. If anybody has better ideas regarding the setup, please enlighten me. Otherwise, we may investigate the performance of this patch further by spending some time looking into the plans and checking for which queries Append was the bottleneck, or which nodes get faster with Parallel Append in the picture.

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
Attachment

Re: [HACKERS] Parallel Append implementation

From
Rajkumar Raghuwanshi
Date:
On Thu, Nov 23, 2017 at 2:22 PM, amul sul <sulamul@gmail.com> wrote:
> Look like it is the same crash what v20 claim to be fixed, indeed I
> missed to add fix[1] in v20 patch, sorry about that. Attached updated
> patch includes aforementioned fix.

Hi,

I have applied the latest v21 patch, and it crashed when
partition-wise join was enabled; the same query works fine with and
without partition-wise join enabled on PG HEAD.
Please take a look.

SET enable_partition_wise_join TO true;

CREATE TABLE pt1 (a int, b int, c text, d int) PARTITION BY LIST(c);
CREATE TABLE pt1_p1 PARTITION OF pt1 FOR VALUES IN ('0000', '0001',
'0002', '0003');
CREATE TABLE pt1_p2 PARTITION OF pt1 FOR VALUES IN ('0004', '0005',
'0006', '0007');
CREATE TABLE pt1_p3 PARTITION OF pt1 FOR VALUES IN ('0008', '0009',
'0010', '0011');
INSERT INTO pt1 SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i %
30 FROM generate_series(0, 99999) i;
ANALYZE pt1;

CREATE TABLE pt2 (a int, b int, c text, d int) PARTITION BY LIST(c);
CREATE TABLE pt2_p1 PARTITION OF pt2 FOR VALUES IN ('0000', '0001',
'0002', '0003');
CREATE TABLE pt2_p2 PARTITION OF pt2 FOR VALUES IN ('0004', '0005',
'0006', '0007');
CREATE TABLE pt2_p3 PARTITION OF pt2 FOR VALUES IN ('0008', '0009',
'0010', '0011');
INSERT INTO pt2 SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i %
30 FROM generate_series(0, 99999) i;
ANALYZE pt2;

EXPLAIN ANALYZE
SELECT t1.c, sum(t2.a), COUNT(*) FROM pt1 t1 FULL JOIN pt2 t2 ON t1.c
= t2.c GROUP BY t1.c ORDER BY 1, 2, 3;
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back
the current transaction and exit, because another server process
exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

stack-trace is given below.

Core was generated by `postgres: parallel worker for PID 73935        '.
Program terminated with signal 11, Segmentation fault.
#0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
../../../src/include/executor/executor.h:238
238        if (node->chgParam != NULL) /* something changed? */
Missing separate debuginfos, use: debuginfo-install
keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64
libcom_err-1.41.12-23.el6.x86_64 libselinux-2.0.94-7.el6.x86_64
openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
../../../src/include/executor/executor.h:238
#1  0x00000000006dc72e in ExecAppend (pstate=0x26cd6e0) at nodeAppend.c:207
#2  0x00000000006d1e7c in ExecProcNodeInstr (node=0x26cd6e0) at
execProcnode.c:446
#3  0x00000000006dcee5 in ExecProcNode (node=0x26cd6e0) at
../../../src/include/executor/executor.h:241
#4  0x00000000006dd38c in fetch_input_tuple (aggstate=0x26cd7f8) at
nodeAgg.c:699
#5  0x00000000006e02eb in agg_fill_hash_table (aggstate=0x26cd7f8) at
nodeAgg.c:2536
#6  0x00000000006dfb2b in ExecAgg (pstate=0x26cd7f8) at nodeAgg.c:2148
#7  0x00000000006d1e7c in ExecProcNodeInstr (node=0x26cd7f8) at
execProcnode.c:446
#8  0x00000000006d1e4d in ExecProcNodeFirst (node=0x26cd7f8) at
execProcnode.c:430
#9  0x00000000006c9439 in ExecProcNode (node=0x26cd7f8) at
../../../src/include/executor/executor.h:241
#10 0x00000000006cbd73 in ExecutePlan (estate=0x26ccda0,
planstate=0x26cd7f8, use_parallel_mode=0 '\000', operation=CMD_SELECT,
sendTuples=1 '\001', numberTuples=0,   direction=ForwardScanDirection, dest=0x26b2ce0, execute_once=1
'\001') at execMain.c:1718
#11 0x00000000006c9a12 in standard_ExecutorRun (queryDesc=0x26d7fa0,
direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
execMain.c:361
#12 0x00000000006c982e in ExecutorRun (queryDesc=0x26d7fa0,
direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
execMain.c:304
#13 0x00000000006d096c in ParallelQueryMain (seg=0x26322a8,
toc=0x7fda24d46000) at execParallel.c:1271
#14 0x000000000053272d in ParallelWorkerMain (main_arg=1203628635) at
parallel.c:1149
#15 0x00000000007e8c99 in StartBackgroundWorker () at bgworker.c:841
#16 0x00000000007fc029 in do_start_bgworker (rw=0x2656d00) at postmaster.c:5741
#17 0x00000000007fc36b in maybe_start_bgworkers () at postmaster.c:5945
#18 0x00000000007fb3fa in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5134
#19 <signal handler called>
#20 0x0000003dd26e1603 in __select_nocancel () at
../sysdeps/unix/syscall-template.S:82
#21 0x00000000007f6bee in ServerLoop () at postmaster.c:1721
#22 0x00000000007f63dd in PostmasterMain (argc=3, argv=0x2630180) at
postmaster.c:1365
#23 0x000000000072cb40 in main (argc=3, argv=0x2630180) at main.c:228

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation


Re: [HACKERS] Parallel Append implementation

From
amul sul
Date:
Thanks a lot Rajkumar for this test. I am able to reproduce this crash by
enabling partition wise join.

The reason for this crash is the same as the previous[1] i.e. the
node->as_whichplan value.  This time append->first_partial_plan value
looks suspicious. With the following change to the v21 patch, I am able to
reproduce this crash as assert failure when enable_partition_wise_join =
ON, otherwise working fine.

diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
index e3b17cf0e2..4b337ac633 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -458,6 +458,7 @@ choose_next_subplan_for_worker(AppendState *node)
 
     /* Backward scan is not supported by parallel-aware plans */
     Assert(ScanDirectionIsForward(node->ps.state->es_direction));
+    Assert(append->first_partial_plan < node->as_nplans);
 
     LWLockAcquire(&pstate->pa_lock, LW_EXCLUSIVE);

Will look into this more, tomorrow.

1. http://postgr.es/m/CAAJ_b97kLNW8Z9nvc_JUUG5wVQUXvG=f37WsX8ALF0A=KAHh3w@mail.gmail.com

Regards,
Amul

On Fri, Nov 24, 2017 at 5:00 PM, Rajkumar Raghuwanshi
<rajkumar.raghuwanshi@enterprisedb.com> wrote:
> On Thu, Nov 23, 2017 at 2:22 PM, amul sul <sulamul@gmail.com> wrote:
>> Look like it is the same crash what v20 claim to be fixed, indeed I
>> missed to add fix[1] in v20 patch, sorry about that. Attached updated
>> patch includes aforementioned fix.
>
> Hi,
>
> I have applied latest v21 patch, it got crashed when enabled
> partition-wise-join,
> same query is working fine with and without partition-wise-join
> enabled on PG-head.
> please take a look.
>
> SET enable_partition_wise_join TO true;
>
> CREATE TABLE pt1 (a int, b int, c text, d int) PARTITION BY LIST(c);
> CREATE TABLE pt1_p1 PARTITION OF pt1 FOR VALUES IN ('0000', '0001',
> '0002', '0003');
> CREATE TABLE pt1_p2 PARTITION OF pt1 FOR VALUES IN ('0004', '0005',
> '0006', '0007');
> CREATE TABLE pt1_p3 PARTITION OF pt1 FOR VALUES IN ('0008', '0009',
> '0010', '0011');
> INSERT INTO pt1 SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i %
> 30 FROM generate_series(0, 99999) i;
> ANALYZE pt1;
>
> CREATE TABLE pt2 (a int, b int, c text, d int) PARTITION BY LIST(c);
> CREATE TABLE pt2_p1 PARTITION OF pt2 FOR VALUES IN ('0000', '0001',
> '0002', '0003');
> CREATE TABLE pt2_p2 PARTITION OF pt2 FOR VALUES IN ('0004', '0005',
> '0006', '0007');
> CREATE TABLE pt2_p3 PARTITION OF pt2 FOR VALUES IN ('0008', '0009',
> '0010', '0011');
> INSERT INTO pt2 SELECT i % 20, i % 30, to_char(i % 12, 'FM0000'), i %
> 30 FROM generate_series(0, 99999) i;
> ANALYZE pt2;
>
> EXPLAIN ANALYZE
> SELECT t1.c, sum(t2.a), COUNT(*) FROM pt1 t1 FULL JOIN pt2 t2 ON t1.c
> = t2.c GROUP BY t1.c ORDER BY 1, 2, 3;
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back
> the current transaction and exit, because another server process
> exited abnormally and possibly corrupted shared memory.
> HINT:  In a moment you should be able to reconnect to the database and
> repeat your command.
> server closed the connection unexpectedly
>     This probably means the server terminated abnormally
>     before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !>
>
> stack-trace is given below.
>
> Core was generated by `postgres: parallel worker for PID 73935        '.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
> ../../../src/include/executor/executor.h:238
> 238        if (node->chgParam != NULL) /* something changed? */
> Missing separate debuginfos, use: debuginfo-install
> keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64
> libcom_err-1.41.12-23.el6.x86_64 libselinux-2.0.94-7.el6.x86_64
> openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64
> (gdb) bt
> #0  0x00000000006dc4b3 in ExecProcNode (node=0x7f7f7f7f7f7f7f7e) at
> ../../../src/include/executor/executor.h:238
> #1  0x00000000006dc72e in ExecAppend (pstate=0x26cd6e0) at nodeAppend.c:207
> #2  0x00000000006d1e7c in ExecProcNodeInstr (node=0x26cd6e0) at
> execProcnode.c:446
> #3  0x00000000006dcee5 in ExecProcNode (node=0x26cd6e0) at
> ../../../src/include/executor/executor.h:241
> #4  0x00000000006dd38c in fetch_input_tuple (aggstate=0x26cd7f8) at
> nodeAgg.c:699
> #5  0x00000000006e02eb in agg_fill_hash_table (aggstate=0x26cd7f8) at
> nodeAgg.c:2536
> #6  0x00000000006dfb2b in ExecAgg (pstate=0x26cd7f8) at nodeAgg.c:2148
> #7  0x00000000006d1e7c in ExecProcNodeInstr (node=0x26cd7f8) at
> execProcnode.c:446
> #8  0x00000000006d1e4d in ExecProcNodeFirst (node=0x26cd7f8) at
> execProcnode.c:430
> #9  0x00000000006c9439 in ExecProcNode (node=0x26cd7f8) at
> ../../../src/include/executor/executor.h:241
> #10 0x00000000006cbd73 in ExecutePlan (estate=0x26ccda0,
> planstate=0x26cd7f8, use_parallel_mode=0 '\000', operation=CMD_SELECT,
> sendTuples=1 '\001', numberTuples=0,
>     direction=ForwardScanDirection, dest=0x26b2ce0, execute_once=1
> '\001') at execMain.c:1718
> #11 0x00000000006c9a12 in standard_ExecutorRun (queryDesc=0x26d7fa0,
> direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
> execMain.c:361
> #12 0x00000000006c982e in ExecutorRun (queryDesc=0x26d7fa0,
> direction=ForwardScanDirection, count=0, execute_once=1 '\001') at
> execMain.c:304
> #13 0x00000000006d096c in ParallelQueryMain (seg=0x26322a8,
> toc=0x7fda24d46000) at execParallel.c:1271
> #14 0x000000000053272d in ParallelWorkerMain (main_arg=1203628635) at
> parallel.c:1149
> #15 0x00000000007e8c99 in StartBackgroundWorker () at bgworker.c:841
> #16 0x00000000007fc029 in do_start_bgworker (rw=0x2656d00) at postmaster.c:5741
> #17 0x00000000007fc36b in maybe_start_bgworkers () at postmaster.c:5945
> #18 0x00000000007fb3fa in sigusr1_handler (postgres_signal_arg=10) at
> postmaster.c:5134
> #19 <signal handler called>
> #20 0x0000003dd26e1603 in __select_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> #21 0x00000000007f6bee in ServerLoop () at postmaster.c:1721
> #22 0x00000000007f63dd in PostmasterMain (argc=3, argv=0x2630180) at
> postmaster.c:1365
> #23 0x000000000072cb40 in main (argc=3, argv=0x2630180) at main.c:228
>
> Thanks & Regards,
> Rajkumar Raghuwanshi
> QMG, EnterpriseDB Corporation

Re: [HACKERS] Parallel Append implementation

From
amul sul
Date:
On Mon, Nov 27, 2017 at 10:21 PM, amul sul <sulamul@gmail.com> wrote:
> Thanks a lot Rajkumar for this test. I am able to reproduce this crash by
> enabling  partition wise join.
>
> The reason for this crash is the same as
> the
> previous[1] i.e node->as_whichplan
> value.  This time append->first_partial_plan value looks suspicious. With
> the
> following change to the v21 patch, I am able to reproduce this crash as
> assert
> failure when enable_partition_wise_join = ON otherwise working fine.
>
> diff --git a/src/backend/executor/nodeAppend.c
> b/src/backend/executor/nodeAppend.c
> index e3b17cf0e2..4b337ac633 100644
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -458,6 +458,7 @@ choose_next_subplan_for_worker(AppendState *node)
>
>     /* Backward scan is not supported by parallel-aware plans */
>     Assert(ScanDirectionIsForward(node->ps.state->es_direction));
> +   Assert(append->first_partial_plan < node->as_nplans);
>
>     LWLockAcquire(&pstate->pa_lock, LW_EXCLUSIVE);
>
>
> Will look into this more, tomorrow.
>
I haven't yet found the actual reason why there wasn't any partial plan
(i.e. the values of append->first_partial_plan and node->as_nplans are
the same) when partition-wise join is enabled.  I think in this case we
could simply return false from choose_next_subplan_for_worker() when
there aren't any partial plans and we are done with all the non-partial
plans, although I may be wrong because I am yet to understand this patch.

Here are the changes I did on v21 patch to handle crash reported by Rajkumar[1]:

diff --git a/src/backend/executor/nodeAppend.c
b/src/backend/executor/nodeAppend.c
index e3b17cf0e2..e0ee918808 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -479,9 +479,12 @@ choose_next_subplan_for_worker(AppendState *node)
            pstate->pa_next_plan = append->first_partial_plan;
        else
            pstate->pa_next_plan++;
-       if (pstate->pa_next_plan == node->as_whichplan)
+
+       if (pstate->pa_next_plan == node->as_whichplan ||
+           (pstate->pa_next_plan == append->first_partial_plan &&
+            append->first_partial_plan >= node->as_nplans))
        {
-           /* We've tried everything! */
+           /* We've tried everything or there were no partial plans */
            pstate->pa_next_plan = INVALID_SUBPLAN_INDEX;
            LWLockRelease(&pstate->pa_lock);
            return false;

Apart from this, I have added a few asserts to keep an eye on the
node->as_whichplan value in the attached patch, thanks.

1] http://postgr.es/m/CAKcux6nyDxOyE4PA8O%3DQgF-ugZp_y1G2U%2Burmf76-%3Df2knDsWA%40mail.gmail.com

Regards,
Amul

Attachment

Re: [HACKERS] Parallel Append implementation

From
Michael Paquier
Date:
On Tue, Nov 28, 2017 at 8:02 PM, amul sul <sulamul@gmail.com> wrote:
> Apart from this I have added few assert to keep eye on node->as_whichplan
> value in the attached patch, thanks.

This is still hot, moved to next CF.
-- 
Michael


Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Tue, Nov 28, 2017 at 6:02 AM, amul sul <sulamul@gmail.com> wrote:
> Here are the changes I did on v21 patch to handle crash reported by Rajkumar[1]:
>
> diff --git a/src/backend/executor/nodeAppend.c
> b/src/backend/executor/nodeAppend.c
> index e3b17cf0e2..e0ee918808 100644
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -479,9 +479,12 @@ choose_next_subplan_for_worker(AppendState *node)
>             pstate->pa_next_plan = append->first_partial_plan;
>         else
>             pstate->pa_next_plan++;
> -       if (pstate->pa_next_plan == node->as_whichplan)
> +
> +       if (pstate->pa_next_plan == node->as_whichplan ||
> +           (pstate->pa_next_plan == append->first_partial_plan &&
> +            append->first_partial_plan >= node->as_nplans))
>         {
> -           /* We've tried everything! */
> +           /* We've tried everything or there were no partial plans */
>             pstate->pa_next_plan = INVALID_SUBPLAN_INDEX;
>             LWLockRelease(&pstate->pa_lock);
>             return false;

I changed this around a little, added a test case, and committed this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Parallel Append implementation

From
Amit Khandekar
Date:
On 6 December 2017 at 04:01, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 28, 2017 at 6:02 AM, amul sul <sulamul@gmail.com> wrote:
>> Here are the changes I did on v21 patch to handle crash reported by Rajkumar[1]:
>>
>> diff --git a/src/backend/executor/nodeAppend.c
>> b/src/backend/executor/nodeAppend.c
>> index e3b17cf0e2..e0ee918808 100644
>> --- a/src/backend/executor/nodeAppend.c
>> +++ b/src/backend/executor/nodeAppend.c
>> @@ -479,9 +479,12 @@ choose_next_subplan_for_worker(AppendState *node)
>>             pstate->pa_next_plan = append->first_partial_plan;
>>         else
>>             pstate->pa_next_plan++;
>> -       if (pstate->pa_next_plan == node->as_whichplan)
>> +
>> +       if (pstate->pa_next_plan == node->as_whichplan ||
>> +           (pstate->pa_next_plan == append->first_partial_plan &&
>> +            append->first_partial_plan >= node->as_nplans))
>>         {
>> -           /* We've tried everything! */
>> +           /* We've tried everything or there were no partial plans */
>>             pstate->pa_next_plan = INVALID_SUBPLAN_INDEX;
>>             LWLockRelease(&pstate->pa_lock);
>>             return false;
>
> I changed this around a little, added a test case, and committed this.

Thanks Robert !

The crash that was reported on pgsql-committers is being discussed on
that list itself.

>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Parallel Append implementation

From
Adrien Nayrat
Date:
Hello,

I notice Parallel Append is not listed in the Parallel Plans documentation:
https://www.postgresql.org/docs/devel/static/parallel-plans.html

If you agree I can add it to Open Items.

Thanks,

--
Adrien NAYRAT



Attachment

Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sat, Apr 7, 2018 at 10:21 AM, Adrien Nayrat
<adrien.nayrat@anayrat.info> wrote:
> I notice Parallel append is not listed on Parallel Plans documentation :
> https://www.postgresql.org/docs/devel/static/parallel-plans.html

I agree it might be nice to mention this somewhere on this page, but
I'm not exactly sure where it would make logical sense to put it.



-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Parallel Append implementation

From
Thomas Munro
Date:
On Tue, May 8, 2018 at 5:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Apr 7, 2018 at 10:21 AM, Adrien Nayrat
> <adrien.nayrat@anayrat.info> wrote:
>> I notice Parallel append is not listed on Parallel Plans documentation :
>> https://www.postgresql.org/docs/devel/static/parallel-plans.html
>
> I agree it might be nice to mention this somewhere on this page, but
> I'm not exactly sure where it would make logical sense to put it.

It's not a scan, it's not a join and it's not an aggregation, so I
think it needs to be in a new <sect2> at the same level as those
others.  It's a different kind of thing.

-- 
Thomas Munro
http://www.enterprisedb.com


Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Tue, May 8, 2018 at 12:10 AM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> It's not a scan, it's not a join and it's not an aggregation so I
> think it needs to be in a new <sect2> as the same level as those
> others.  It's a different kind of thing.

I'm a little skeptical about that idea because I'm not sure it's
really in the same category as far as importance is concerned, but I
don't have a better idea.  Here's a patch.  I'm worried this is too
much technical jargon, but I don't know how to explain it any more
simply.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] Parallel Append implementation

From
Thomas Munro
Date:
On Wed, May 9, 2018 at 1:15 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, May 8, 2018 at 12:10 AM, Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> It's not a scan, it's not a join and it's not an aggregation so I
>> think it needs to be in a new <sect2> as the same level as those
>> others.  It's a different kind of thing.
>
> I'm a little skeptical about that idea because I'm not sure it's
> really in the same category as far as importance is concerned, but I
> don't have a better idea.  Here's a patch.  I'm worried this is too
> much technical jargon, but I don't know how to explain it any more
> simply.

+    scanning them more than once would preduce duplicate results.  Plans that

s/preduce/produce/

+    <literal>Append</literal> or <literal>MergeAppend</literal> plan node.
vs.
+    Append</literal> of regular <literal>Index Scan</literal> plans; each

I think we should standardise on <literal>Foo Bar</literal>,
<literal>FooBar</literal> or <emphasis>foo bar</emphasis> when
discussing executor nodes on this page.

-- 
Thomas Munro
http://www.enterprisedb.com


Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Tue, May 8, 2018 at 5:05 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> +    scanning them more than once would preduce duplicate results.  Plans that
>
> s/preduce/produce/

Fixed, thanks.

> +    <literal>Append</literal> or <literal>MergeAppend</literal> plan node.
> vs.
> +    Append</literal> of regular <literal>Index Scan</literal> plans; each
>
> I think we should standardise on <literal>Foo Bar</literal>,
> <literal>FooBar</literal> or <emphasis>foo bar</emphasis> when
> discussing executor nodes on this page.

Well, EXPLAIN prints MergeAppend but Index Scan, and I think we should
follow that precedent here.

As for <emphasis> vs. <literal>, I think the reason I ended up using
<emphasis> in the section on scans was because I thought that
<literal>Parallel Seq Scan</literal> might be confusing (what's a
"seq"?), so I tried to fudge my way around that by referring to it as
an abstract idea rather than the exact EXPLAIN output.  You then
copied that style in the join section, and, well, like you say, now we
have a sort of hodgepodge of styles.  Maybe that's a problem for
another patch, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] Parallel Append implementation

From
Thomas Munro
Date:
On Thu, May 10, 2018 at 7:08 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>  [parallel-append-doc-v2.patch]

+    plans just as they can in any other plan.  However, in a parallel plan,
+    it is also possible that the planner may choose to substitute a
+    <literal>Parallel Append</literal> node.

Maybe drop "it is also possible that "?  It seems a bit unnecessary
and sounds a bit odd followed by "may <verb>", but maybe it's just me.

+    Also, unlike a regular <literal>Append</literal> node, which can only have
+    partial children when used within a parallel plan, <literal>Parallel
+    Append</literal> node can have both partial and non-partial child plans.

Missing "a" before "<literal>Parallel".

+    Non-partial children will be scanned by only a single worker, since

Are we using "worker" in a more general sense that possibly includes
the leader?  Hmm, yes, other text on this page does that too.  Ho hum.

-- 
Thomas Munro
http://www.enterprisedb.com


Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Sun, Jul 29, 2018 at 5:49 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Thu, May 10, 2018 at 7:08 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>>  [parallel-append-doc-v2.patch]
>
> +    plans just as they can in any other plan.  However, in a parallel plan,
> +    it is also possible that the planner may choose to substitute a
> +    <literal>Parallel Append</literal> node.
>
> Maybe drop "it is also possible that "?  It seems a bit unnecessary
> and sounds a bit odd followed by "may <verb>", but maybe it's just me.

Changed.

> +    Also, unlike a regular <literal>Append</literal> node, which can only have
> +    partial children when used within a parallel plan, <literal>Parallel
> +    Append</literal> node can have both partial and non-partial child plans.
>
> Missing "a" before "<literal>Parallel".

Fixed.

> +    Non-partial children will be scanned by only a single worker, since
>
> Are we using "worker" in a more general sense that possibly includes
> the leader?  Hmm, yes, other text on this page does that too.  Ho hum.

Tried to be more careful about this.

New version attached.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] Parallel Append implementation

From
Thomas Munro
Date:
On Tue, Jul 31, 2018 at 5:05 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> New version attached.

Looks good to me.

-- 
Thomas Munro
http://www.enterprisedb.com


Re: [HACKERS] Parallel Append implementation

From
Robert Haas
Date:
On Mon, Jul 30, 2018 at 8:02 PM, Thomas Munro
<thomas.munro@enterprisedb.com> wrote:
> On Tue, Jul 31, 2018 at 5:05 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> New version attached.
>
> Looks good to me.

Committed to master and v11.  Thanks for the review.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Parallel Append implementation

From
Adrien NAYRAT
Date:
On 08/01/2018 03:14 PM, Robert Haas wrote:
> Committed to master and v11.  Thanks for the review.

Thanks!