Re: crashes due to setting max_parallel_workers=0

From: Robert Haas
Subject: Re: crashes due to setting max_parallel_workers=0
Msg-id: CA+TgmoZXZZtxrS-x5UWuB0ghA3or-dT=mVqcvnf+HOoq5jHjCQ@mail.gmail.com
In response to: crashes due to setting max_parallel_workers=0 (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses: Re: crashes due to setting max_parallel_workers=0 (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
           Re: crashes due to setting max_parallel_workers=0 (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Mon, Mar 27, 2017 at 9:54 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Mar 27, 2017 at 1:29 AM, Rushabh Lathia
>> <rushabh.lathia@gmail.com> wrote:
>>> But it seems a bit futile to produce the parallel plan in the first place,
>>> because with max_parallel_workers=0 we can't possibly get any parallel
>>> workers ever. I wonder why compute_parallel_worker() only looks at
>>> max_parallel_workers_per_gather, i.e. why shouldn't it do:
>>> parallel_workers = Min(parallel_workers, max_parallel_workers);
>>> Perhaps this was discussed and is actually intentional, though.
>
>> It was intentional.  See the last paragraph of
>> https://www.postgresql.org/message-id/CA%2BTgmoaMSn6a1780VutfsarCu0LCr%3DCO2yi4vLUo-JQbn4YuRA@mail.gmail.com
>
> Since this has now come up twice, I suggest adding a comment there
> that explains why we're intentionally ignoring max_parallel_workers.

Hey, imagine if the comments explained the logic behind the code!

Good idea.  How about the attached?
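
For context, the clamp under discussion lives in compute_parallel_worker()
in src/backend/optimizer/path/allpaths.c. Below is a minimal standalone
sketch of the behavior being documented, with hypothetical stand-ins for
the GUC variables; it is not the attached patch itself:

    #include <stdio.h>

    #define Min(a, b) ((a) < (b) ? (a) : (b))

    /* Hypothetical stand-ins for the GUCs under discussion. */
    static int max_parallel_workers_per_gather = 2;
    static int max_parallel_workers = 0;

    /*
     * Plan-time clamp: only max_parallel_workers_per_gather is applied.
     * max_parallel_workers is deliberately ignored here, because the pool
     * of available workers can change between planning and execution (and
     * plans can be cached and reused); at run time the executor simply
     * launches however many workers it can actually get, possibly zero,
     * and the leader runs the plan alone if none are available.
     */
    static int
    clamp_parallel_workers(int parallel_workers)
    {
        return Min(parallel_workers, max_parallel_workers_per_gather);
    }

    int
    main(void)
    {
        /*
         * Even with max_parallel_workers = 0, planning still produces a
         * nonzero worker count; no workers are granted at execution time.
         */
        printf("planned workers: %d (max_parallel_workers = %d)\n",
               clamp_parallel_workers(4), max_parallel_workers);
        return 0;
    }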

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment
