On 03/25/2017 02:01 PM, David Rowley wrote:
> I wondered if there's anything we can do here to better test cases
> when no workers are able to try to ensure the parallel nodes work
> correctly, but the more I think about it, the more I don't see wrong
> with just using SET max_parallel_workers = 0;
>
It's demonstrably a valid way to disable parallel queries (i.e. there's
nothing wrong with it), because the docs say this:

    Setting this value to 0 disables parallel query execution.
>
> My vote would be to leave the GUC behaviour as is and add some tests
> to select_parallel.sql which exploit setting max_parallel_workers to 0
> for running some tests.
>
> If that's not going to fly, then unless we add something else to allow
> us to reliably not get any workers, then we're coming close to closing
> the door on being able to write automatic tests to capture this sort
> of bug.
>
> ... thinks for a bit....
>
> perhaps some magic value like -1 could be used for this... unsure of
> how that would be documented though...
>
I agree it'd be very useful to have a mode where we generate parallel
plans but then prohibit starting any workers. That would detect this and
similar issues, I think.
I'm not sure we need to invent a new magic value, though. Can we simply
look at force_parallel_mode, and if it's 'regress', then treat 0 differently?
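FWIW, a regression test along the lines David suggests might look roughly
like this (a sketch only; the table name and exact EXPLAIN output are just
placeholders for whatever select_parallel.sql actually uses):

```sql
-- Force the planner to produce a parallel plan, but deny it any workers,
-- so the Gather node has to fall back to running everything in the leader.
SET max_parallel_workers_per_gather = 2;
SET max_parallel_workers = 0;

-- The plan should still contain a Gather node ...
EXPLAIN (COSTS OFF) SELECT count(*) FROM tenk1;

-- ... and executing it with zero workers must still return correct results.
SELECT count(*) FROM tenk1;

RESET max_parallel_workers;
RESET max_parallel_workers_per_gather;
```

The point being that the leader-only execution path through the parallel
node gets exercised deterministically, instead of only when the worker
pool happens to be exhausted.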
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services