Re: max_parallel_degree > 0 for 9.6 beta - Mailing list pgsql-hackers

From Robert Haas
Subject Re: max_parallel_degree > 0 for 9.6 beta
Date
Msg-id CA+TgmoaxGZ5LC_F94Gngk5FGU497pDHj1bJ3KVEBE=Vip4nPCA@mail.gmail.com
In response to Re: max_parallel_degree > 0 for 9.6 beta  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: max_parallel_degree > 0 for 9.6 beta  ("Joshua D. Drake" <jd@commandprompt.com>)
List pgsql-hackers
On Fri, Apr 22, 2016 at 9:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Thu, Apr 21, 2016 at 7:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Is that because max_worker_processes is only 8 by default?  Maybe we
>>> need to raise that, at least for beta purposes?
>
>> I'm not really in favor of that.  I mean, almost all of our default
>> settings are optimized for running PostgreSQL on, for example, a
>> Raspberry Pi 2, so it would seem odd to suddenly swing the other
>> direction and assume that there are more than 8 unused CPU cores.
>
> I'm not following why you think that max_worker_processes cannot be
> set higher than the number of cores.  By that argument, it's insane
> that we ship with max_connections = 100.  In practice it's generally
> fine, and people can get away with oversubscribing their core count
> even more than that, because it's seldom that all those processes
> are actually contending for CPU at the same time.  There are enough
> inefficiencies in our parallel-query design that the same will most
> certainly be true for parallel workers.

It is much less likely to be true for parallel workers.  The reason
why those processes aren't contending for the CPU at the same time is
generally that most of the connections are in fact idle.  But a
parallel worker is never idle.  It is launched when it is needed to
run a query and exits immediately afterward.  If it's not contending
for the CPU, it will be contending for I/O bandwidth, or a lock.

> So what I'm concerned about for beta purposes is that we have a setup that
> can exercise cases like, say, varying orders in which different workers
> return tuples, or potential deadlocks between sibling workers.  We'd get
> no coverage of that behavioral space at max_parallel_degree=1.  I'm not
> really convinced that we'll get adequate coverage at
> max_parallel_degree=2.

The right solution to that is for people who have the right hardware
to raise the settings, not to unleash a ridiculous set of defaults on
everyone.  I really hope that some people do serious destruction
testing of parallel query and try to break it.  For example, you could
use the parallel_degree reloption to force 100 parallel workers to
scan the same relation. That's likely to be dog slow, but it might
well turn up some bugs.
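
A rough sketch of that kind of destruction test, using the GUC and
reloption names as they stand in this beta (big_table here is just a
stand-in for whatever large relation you have handy), might look like:

    -- postgresql.conf (restart required): make sure enough background
    -- worker slots exist, e.g.
    --   max_worker_processes = 128

    -- force the planner to want 100 workers for this relation
    ALTER TABLE big_table SET (parallel_degree = 100);

    -- allow that many workers per query in this session
    SET max_parallel_degree = 100;

    -- run a query that should get a parallel sequential scan
    EXPLAIN (ANALYZE, VERBOSE) SELECT count(*) FROM big_table;

How many of those 100 workers actually launch still depends on
max_worker_processes, which is why that setting has to go up as well.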

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


