Re: max_parallel_degree > 0 for 9.6 beta - Mailing list pgsql-hackers

From Joshua D. Drake
Subject Re: max_parallel_degree > 0 for 9.6 beta
Date
Msg-id 571A300E.7020102@commandprompt.com
In response to Re: max_parallel_degree > 0 for 9.6 beta  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: max_parallel_degree > 0 for 9.6 beta  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 04/22/2016 06:47 AM, Robert Haas wrote:
> On Fri, Apr 22, 2016 at 9:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> On Thu, Apr 21, 2016 at 7:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>>> Is that because max_worker_processes is only 8 by default?  Maybe we
>>>> need to raise that, at least for beta purposes?
>>
>>> I'm not really in favor of that.  I mean, almost all of our default
>>> settings are optimized for running PostgreSQL on, for example, a
>>> Raspberry Pi 2, so it would seem odd to suddenly swing the other
>>> direction and assume that there are more than 8 unused CPU cores.

This is the problem right here.

We should be shipping a reasonable production configuration. It is not 
reasonable to assume that someone is going to be running on a 
Raspberry Pi 2. Yes, we can effectively run on that platform, but that 
doesn't mean it should be our default target configuration. Consider 
that a $5.00/mo Digital Ocean VM is going to outperform a Raspberry Pi.
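For the sake of illustration, a sketch of what a slightly more aggressive 
beta configuration might look like in postgresql.conf; the specific values 
here are just my own assumptions, not proposed defaults:

  # illustrative values only, not proposed defaults
  max_worker_processes = 16    # background worker pool (default 8); change
                               # requires a server restart
  max_parallel_degree = 4      # parallel workers per query; 0 disables
                               # parallel query entirely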

>
> It is much less likely to be true for parallel workers.  The reason
> why those processes aren't contending for the CPU at the same time is
> generally that most of the connections are in fact idle.  But a
> parallel worker is never idle.  It is launched when it is needed to
> run a query and exits immediately afterward.  If it's not contending
> for the CPU, it will be contending for I/O bandwidth, or a lock.
>

True, but isn't that also what context switching and (possibly) 
hyperthreading are for?


>> So what I'm concerned about for beta purposes is that we have a setup that
>> can exercise cases like, say, varying orders in which different workers
>> return tuples, or potential deadlocks between sibling workers.  We'd get
>> no coverage of that behavioral space at max_parallel_degree=1.  I'm not
>> really convinced that we'll get adequate coverage at
>> max_parallel_degree=2.
>
> The right solution to that is for people who have the right hardware
> to raise the settings, not to unleash a ridiculous set of defaults on
> everyone.  I really hope that some people do serious destruction
> testing of parallel query and try to break it.  For example, you could
> use the parallel_degree reloption to force 100 parallel workers to
> scan the same relation.   That's likely to be dog slow, but it might
> well turn up some bugs.

I think your argument describes a production solution, not a beta 
solution. During beta we should be pushing things a little harder.
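
For anyone who does want to try the destruction testing Robert suggests, a 
rough sketch; the table name is just a stand-in, and as of this beta the 
reloption is still called parallel_degree. Note that max_worker_processes 
and max_parallel_degree also have to be high enough for that many workers 
to actually launch:

  -- force an aggressive worker count on one table (pgbench_accounts is
  -- only an example; 100 workers assumes max_worker_processes allows it)
  ALTER TABLE pgbench_accounts SET (parallel_degree = 100);
  SET max_parallel_degree = 100;
  EXPLAIN (ANALYZE) SELECT count(*) FROM pgbench_accounts;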

JD

-- 
Command Prompt, Inc.                  http://the.postgres.company/                        +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.