On 3/25/17 09:01, David Rowley wrote:
> On 25 March 2017 at 23:09, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
>> Also another point which I think we should fix is, when someone set
>> max_parallel_workers = 0, we should also set the
>> max_parallel_workers_per_gather
>> to zero. So that way it we can avoid generating the gather path with
>> max_parallel_worker = 0.
> I see that it was actually quite useful that it works the way it does.
> If it had worked the same as max_parallel_workers_per_gather, then
> likely Tomas would never have found this bug.
Another problem is that the GUC system doesn't really support cases where the validity of one setting depends on the current value of another setting. So each individual setting needs to be robust against cases of related settings being nonsensical.
Okay.
About the original issue reported by Tomas, I did more debugging and
found that the problem was that gather_merge_clear_slots() was not
returning a cleared slot when nreaders is zero (i.e. nworkers_launched
= 0). Because of that, the scan continued even after all the tuples
were exhausted, and ended up with a server crash in
gather_merge_getnext(). In the patch I also added an Assert into
gather_merge_getnext() that the slot index must be less than
nreaders + 1 (the readers plus the leader).