Hi Philip,
> On 4. Jun 2020, at 00:23, Philip Semanchuk <philip@americanefficient.com> wrote:
>
>> I guess you should show an explain analyze, specifically "Workers
>> Planned/Launched", maybe by linking to explain.depesz.com
>
> Out of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can
confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN
ANALYZE output:
>
> Workers Planned: 1
> Workers Launched: 1
Can you please verify the values of max_parallel_workers and max_worker_processes? They should roughly satisfy
max_worker_processes > max_parallel_workers > max_parallel_workers_per_gather, for instance:
max_worker_processes = 24
max_parallel_workers = 18
max_parallel_workers_per_gather = 6
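To verify what the running server actually uses (the values above are just an illustrative postgresql.conf sketch), you can query pg_settings:

```sql
-- Show the effective values of all parallelism-related settings
SELECT name, setting
FROM pg_settings
WHERE name LIKE '%parallel%' OR name = 'max_worker_processes';
```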
Also, there are more configuration settings related to parallel queries you might want to look into. Most notably:
parallel_setup_cost
parallel_tuple_cost
min_parallel_table_scan_size
Especially the last one is a typical dealbreaker; you can try setting it to 0 to begin with. Good starting points for the
others are 500 and 0.1, respectively.
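These can be tried per session before touching postgresql.conf; the specific values here are just the suggested starting points above, not recommendations for production:

```sql
-- Session-level experiment: make parallel plans cheaper for the planner
SET parallel_setup_cost = 500;
SET parallel_tuple_cost = 0.1;
SET min_parallel_table_scan_size = 0;  -- consider parallel scans for any table size
-- then re-run EXPLAIN ANALYZE on the query and compare Workers Planned/Launched
```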
> FWIW, the Planning Time reported in EXPLAIN ANALYZE output doesn’t vary significantly, only from 411-443ms, and the
variation within that range correlates only very weakly with max_parallel_workers_per_gather.
It can happen that more parallelism does not help the query but slows it down beyond a certain number of parallel
workers. You can see this in EXPLAIN when, for instance, a Bitmap Heap Scan or similar node is involved.
Cheers,
Sebastian