Re: max_parallel_degree > 0 for 9.6 beta - Mailing list pgsql-hackers

From Tom Lane
Subject Re: max_parallel_degree > 0 for 9.6 beta
Date
Msg-id 20419.1461332003@sss.pgh.pa.us
In response to Re: max_parallel_degree > 0 for 9.6 beta  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: max_parallel_degree > 0 for 9.6 beta
List pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> On Thu, Apr 21, 2016 at 7:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Is that because max_worker_processes is only 8 by default?  Maybe we
>> need to raise that, at least for beta purposes?

> I'm not really in favor of that.  I mean, almost all of our default
> settings are optimized for running PostgreSQL on, for example, a
> Raspberry Pi 2, so it would seem odd to suddenly swing the other
> direction and assume that there are more than 8 unused CPU cores.

I'm not following why you think that max_worker_processes cannot be
set higher than the number of cores.  By that argument, it's insane
that we ship with max_connections = 100.  In practice it's generally
fine, and people can get away with oversubscribing their core count
even more than that, because it's seldom that all those processes
are actually contending for CPU at the same time.  There are enough
inefficiencies in our parallel-query design that the same will most
certainly be true for parallel workers.
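
For illustration only, oversubscribing in postgresql.conf could look
roughly like this (the numbers are a sketch for beta testing, not
proposed defaults; the shipped defaults are max_worker_processes = 8
and max_parallel_degree = 0):

    # Sketch of beta-testing values, not proposed defaults.
    # Oversubscribing the core count is usually harmless in practice,
    # just as it is with max_connections.
    max_worker_processes = 16    # total background workers the postmaster may run
    max_parallel_degree = 4      # workers a single parallel query may request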

So what I'm concerned about for beta purposes is that we have a setup that
can exercise cases like, say, varying orders in which different workers
return tuples, or potential deadlocks between sibling workers.  We'd get
no coverage of that behavioral space at max_parallel_degree=1.  I'm not
really convinced that we'll get adequate coverage at
max_parallel_degree=2.
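
A minimal sketch of the kind of exercise I mean, assuming a
pgbench-initialized database (zeroing the parallel cost settings just
coaxes the planner into using workers even on a smallish table):

    -- Encourage a multi-worker plan and check how many workers actually ran.
    SET max_parallel_degree = 4;
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    EXPLAIN (ANALYZE, VERBOSE)
        SELECT count(*) FROM pgbench_accounts;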
        regards, tom lane


