Thread: Disable parallel query by default
Hello Hackers,

Over the last 24 months, I've noticed a pattern among users with unexpected plan flips landing on parallel plans. 77cd477 (9.6 beta) defaulted parallel query on (max_parallel_degree = 2); it's been nine years and I'd like to open the discussion to see what our thoughts are, especially since it seems that the decision was made for 9.6 beta testing and never really revisited.

I'll open by proposing that we prevent the planner from automatically selecting parallel plans by default, opting instead to allow users to set their max_parallel_workers_per_gather as needed. IOW: let's make the default max_parallel_workers_per_gather = 0 for v18 forward.

Just to be clear, my concern isn't with parallel query in general; the issue we see is when high-frequency, low-latency queries start executing with parallelism on their own (i.e. the traditional plan flip with a twist). Given that max_parallel_workers_per_gather is dynamic and easily configured per session (or even per query with something like the pg_hint_plan extension), dissuading the planner from opting in to parallelism by default will contain the fallout that we see when plans flip to parallel execution.

What is the fallout? When a high-volume, low-latency query flips to parallel execution on a busy system, we end up in a situation where the database is effectively DDoSing itself with a very high rate of connection establish and tear-down requests. Even if the query ends up being faster (it generally does not), the CPU requirements for the same workload rapidly double or worse, with most of it being spent in the OS (context switch, fork(), destroy()). When looking at the database, you'll see a high load average and high wait for CPU, with very little actual work being done within the database.

For an example of scale, we have seen users with low connection rates (<= 5 / minute) suddenly spike to between 2000 and 3000 connect requests per minute until the system grinds to a halt.
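To make the proposal concrete, here is a rough sketch of what the change amounts to operationally. The GUC and function names are existing PostgreSQL ones; the table name and the value 4 are purely illustrative:

```sql
-- Proposed default: the planner never chooses a parallel plan on its own.
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
SELECT pg_reload_conf();  -- takes effect without a restart

-- A session (or an individual report) that benefits from parallelism
-- opts back in explicitly:
SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT count(*) FROM some_big_table;  -- may now plan a Gather node
```

The point being: nothing is lost except the planner's ability to choose parallelism silently.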
I'm looking forward to the upcoming monitoring in e7a9496 (Add two attributes to pg_stat_database for parallel workers activity); it will be easier to empirically prove that parallel query is being used. I don't think the patch goes far enough though: we really need the ability to pinpoint the query and the specific variables used that triggered the parallel plan. When we tell a user that parallel query is in use and suspected, it is almost always met with "no, we don't use that feature". Users do not even realize that it's happening and quickly ask for a list of all queries that have ever undergone parallel execution. It's pretty much impossible to give an accurate list of these because there is no instrumentation available (even in the new patch) to get to the per-query level.

When a user says "I'm not using parallel query", we have to walk through circumstantial evidence of its use. I typically combine IPC:BgWorkerShutDown, IPC:ParallelFinish, and IO:DataFileRead (this helps nail it for sequential scans) with a high rate of connection establishment. Even when you look at all of these together, it's still hard to see that parallelism is the cause, but when we disable automated plan selection, system stability returns.

The recommendation that I give to users is pretty straightforward: "Disable automatic parallel query; enable it for queries where you find substantial savings and can control the rate of execution." I always tell users that if they're using parallel query for anything that should execute in less than 5 minutes, they're probably pushing on the wrong tuning strategy, as the load induced by the parallel query infrastructure is likely going to negate the savings that they're getting.

I'm curious to hear what others think of this proposal. I've dealt with so many of these over the last 24 months, most of them causing strife along the way, that I'm interested in what others think.
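For what it's worth, part of that circumstantial walk can be scripted against pg_stat_activity. A sketch (the backend_type and wait-event names are the ones PostgreSQL reports; treat the result as a hint, not proof, since it only catches workers in flight at snapshot time):

```sql
-- Backends that look like in-flight parallel query right now.
SELECT pid, backend_type, wait_event_type, wait_event,
       left(query, 60) AS query_fragment
FROM pg_stat_activity
WHERE backend_type = 'parallel worker'
   OR (wait_event_type, wait_event) IN (('IPC', 'BgWorkerShutdown'),
                                        ('IPC', 'ParallelFinish'));
```

Polling this on an interval is about the best you can do today for tying parallelism back to specific statements.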
--
Scott Mead
Amazon Web Services
scott@meads.us

Note: When testing the attached patch, there are failures in misc_sanity.out and misc_functions.out (replication origin name is too long). I assume these are unrelated to my attached patch.
On Tue, May 13, 2025 at 4:37 PM Scott Mead <scott@meads.us> wrote:
I'll open by proposing that we prevent the planner from automatically selecting parallel plans by default
That seems a pretty heavy hammer, when we have things like parallel_setup_cost that should be tweaked first.
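For the archives, that knob-first approach can be tried per session before touching any defaults; a sketch with illustrative values (the stock defaults are parallel_setup_cost = 1000 and parallel_tuple_cost = 0.1):

```sql
-- Charge much more for launching workers before a parallel plan can win.
SET parallel_setup_cost = 100000;
SET parallel_tuple_cost = 1.0;

-- Re-check the problem query; with a high enough setup cost the
-- Gather node should drop out of the plan.
EXPLAIN SELECT count(*) FROM pgbench_accounts;
```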
The recommendation that I give to users is pretty straightforward: "Disable automatic parallel query, enable it for queries where you find substantial savings and can control the rate of execution." I always tell users that if they're using parallel query for anything that should execute in less than 5 minutes, they're probably pushing on the wrong tuning strategy as the load induced by the parallel query infrastructure is likely going to negate the savings that they're getting.
Five minutes?! That's not been my experience. Not claiming parallelism is perfect yet, but there are plenty of parallel performance savings under the five minute mark.
Cheers,
Greg
--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support
On Tue, 2025-05-13 at 17:53 -0400, Scott Mead wrote:
> On Tue, May 13, 2025, at 5:07 PM, Greg Sabino Mullane wrote:
> > On Tue, May 13, 2025 at 4:37 PM Scott Mead <scott@meads.us> wrote:
> > > I'll open by proposing that we prevent the planner from automatically
> > > selecting parallel plans by default
> >
> > That seems a pretty heavy hammer, when we have things like
> > parallel_setup_cost that should be tweaked first.
>
> I agree it's a big hammer and I thought through parallel_setup_cost
> quite a bit myself. The problem with parallel_setup_cost is that it
> doesn't actually represent the overhead of setting up parallel query
> for a busy system. It does define the cost of setup for a *single*
> parallel session, but it cannot accurately express the cost of CPU and
> other overhead associated with the second, third, fourth, etc... query
> that is executed as parallel. The expense to the operating system is a
> function of the _rate_ of parallel query executions being issued.
> Without new infrastructure, there's no way to define something that
> will give me a true representation of the cost of issuing a query with
> parallelism.

There is no way for the optimizer to represent that your system is
currently under CPU overload. But I agree with Greg that
parallel_setup_cost is the setting that should be adjusted.
If PostgreSQL is more reluctant to even start considering a parallel plan,
that would be a move in the right direction in a case like this:

> > > What is the fallout? When a high-volume, low-latency query flips to
> > > parallel execution on a busy system, we end up in a situation where
> > > the database is effectively DDOSing itself with a very high rate of
> > > connection establish and tear-down requests. Even if the query ends
> > > up being faster (it generally does not), the CPU requirements for the
> > > same workload rapidly double or worse, with most of it being spent
> > > in the OS (context switch, fork(), destroy()). When looking at the
> > > database, you'll see a high load average, and high wait for CPU with
> > > very little actual work being done within the database.

You are painting a bleak picture indeed. I get to see PostgreSQL databases
in trouble regularly, but I have not seen anything like what you describe.
If a rather cheap, very frequent query is suddenly estimated to be
expensive enough to warrant a parallel plan, I'd suspect that the estimates
must be seriously off.

With an argument like that, you may as well disable nested loop joins.
I have seen enough cases where disabling nested loop joins, without any
deeper analysis, made very slow queries reasonably fast.

Sure enough, I often see systems where I recommend disabling parallel
query - in fact, whenever throughput is more important than response time.
But I also see many cases where parallel query works just like it should
and leads to a better user experience.

I have come to disable JIT by default, but not parallel query.

The primary problem that I encounter with parallel query is that dynamic
shared memory segments grow to a size where they cause OOM errors.
That's the most frequent reason for me to recommend disabling parallel query.

Yours,
Laurenz Albe
On Wed, May 14, 2025, at 4:06 AM, Laurenz Albe wrote:
> On Tue, 2025-05-13 at 17:53 -0400, Scott Mead wrote:
> > On Tue, May 13, 2025, at 5:07 PM, Greg Sabino Mullane wrote:
> > > On Tue, May 13, 2025 at 4:37 PM Scott Mead <scott@meads.us> wrote:
> > > > I'll open by proposing that we prevent the planner from automatically
> > > > selecting parallel plans by default
> > >
> > > That seems a pretty heavy hammer, when we have things like
> > > parallel_setup_cost that should be tweaked first.
> >
> > I agree it's a big hammer and I thought through parallel_setup_cost
> > quite a bit myself. The problem with parallel_setup_cost is that it
> > doesn't actually represent the overhead of setting up parallel query
> > for a busy system. It does define the cost of setup for a *single*
> > parallel session, but it cannot accurately express the cost of CPU and
> > other overhead associated with the second, third, fourth, etc... query
> > that is executed as parallel. The expense to the operating system is a
> > function of the _rate_ of parallel query executions being issued.
> > Without new infrastructure, there's no way to define something that
> > will give me a true representation of the cost of issuing a query with
> > parallelism.
>
> There is no way for the optimizer to represent that your system is
> under CPU overload currently. But I agree with Greg that
> parallel_setup_cost is the setting that should be adjusted.
> If PostgreSQL is more reluctant to even start considering a parallel plan,
> that would be a move in the right direction in a case like this:
>
> > > > What is the fallout? When a high-volume, low-latency query flips to
> > > > parallel execution on a busy system, we end up in a situation where
> > > > the database is effectively DDOSing itself with a very high rate of
> > > > connection establish and tear-down requests. Even if the query ends
> > > > up being faster (it generally does not), the CPU requirements for the
> > > > same workload rapidly double or worse, with most of it being spent
> > > > in the OS (context switch, fork(), destroy()). When looking at the
> > > > database, you'll see a high load average, and high wait for CPU with
> > > > very little actual work being done within the database.
>
> You are painting a bleak picture indeed. I get to see PostgreSQL databases
> in trouble regularly, but I have not seen anything like what you describe.
> If a rather cheap, very frequent query is suddenly estimated to be
> expensive enough to warrant a parallel plan, I'd suspect that the estimates
> must be seriously off.
>
> With an argument like that, you may as well disable nested loop joins.
> I have seen enough cases where disabling nested loop joins, without any
> deeper analysis, made very slow queries reasonably fast.

My argument is that parallel query should not be allowed to be invoked
without user intervention. Yes, nested loop can have a similar impact, but
let's take a look at the breakdown at scale of PQ:

1. pgbench -i -s 100

2. Make a query that will execute in parallel:

       SELECT aid, a.bid, bbalance
       FROM pgbench_accounts a, pgbench_branches b
       WHERE a.bid = b.bid
       ORDER BY bbalance DESC;

   Non-parallel query = 4506.559 ms
   Parallel query     = 2849.073 ms

   Arguably, much better.

3. Use pgbench to execute these with a concurrency of 10 and a rate limit
   of 1 tps:

       pgbench -R 1 -r -T 120 -P 5 --no-vacuum -f pselect.sql -c 10

4. The parallel query was executing in ~2.8 seconds in isolation, but when
   running with 10 concurrent sessions it breaks down to 5.8 seconds, while
   the non-parallel version executes in 5.5 seconds on average. You've
   completely erased the gains and only have a concurrency of 5 (that's
   with max_parallel_workers = 8). If you increase max_parallel_workers,
   this quickly becomes worse.
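The steps above, collected into one runnable sketch (assumes a local server reachable with default pgbench connection settings; pselect.sql holds the query from step 2):

```shell
# 1. Initialize pgbench at scale 100.
pgbench -i -s 100

# 2. Write out the query that the planner parallelizes by default.
cat > pselect.sql <<'EOF'
SELECT aid, a.bid, bbalance
FROM pgbench_accounts a, pgbench_branches b
WHERE a.bid = b.bid
ORDER BY bbalance DESC;
EOF

# 3. Drive it with 10 clients at a target rate of 1 tps, reporting
#    per-statement latency every 5 seconds for 2 minutes.
pgbench -R 1 -r -T 120 -P 5 --no-vacuum -f pselect.sql -c 10
```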
Even though parallel query is faster in isolation, even a small amount of
concurrency has a compounding effect that degrades performance very quickly
(again, defaults with a 16 processor machine).

Concurrency   Non-Parallel Runtime (ms)   Parallel Runtime (ms)
     1                 5003.951                  3681.452
     5                 4936.565                  4565.171
    10                 5573.239                  5894.397
    15                 6224.292                  8470.982
    20                 5632.948                 13277.857

Even with max_parallel_workers protecting us with '8' (default), we erase our
advantage by the time we get to a concurrency of 5 clients.

Going back to the original commit which enabled PQ by default[1], it was done
so that the feature would be tested during beta. I think it's time that we
limit the accidental impact this can have on users by disabling the feature
by default.

[1] https://github.com/postgres/postgres/commit/77cd477c4ba885cfa1ba67beaa82e06f2e182b85

    "Enable parallel query by default.

     Change max_parallel_degree default from 0 to 2. It is possible that
     this is not a good idea, or that we should go with 1 worker rather
     than 2, but we won't find out without trying it. Along the way,
     reword the documentation for max_parallel_degree a little bit to
     hopefully make it more clear.

     Discussion: 20160420174631.3qjjhpwsvvx5bau5@alap3.anarazel.de"

> Sure enough, I often see systems where I recommend disabling parallel
> query - in fact, whenever throughput is more important than response time.
> But I also see many cases where parallel query works just like it should
> and leads to a better user experience.
>
> I have come to disable JIT by default, but not parallel query.
>
> The primary problem that I encounter with parallel query is that dynamic
> shared memory segments grow to a size where they cause OOM errors.
> That's the most frequent reason for me to recommend disabling parallel query.
>
> Yours,
> Laurenz Albe

--
Scott Mead
Amazon Web Services
scott@meads.us
On Tue, 2025-05-20 at 16:58 -0400, Scott Mead wrote:
> On Wed, May 14, 2025, at 4:06 AM, Laurenz Albe wrote:
> > On Tue, 2025-05-13 at 17:53 -0400, Scott Mead wrote:
> > > On Tue, May 13, 2025, at 5:07 PM, Greg Sabino Mullane wrote:
> > > > On Tue, May 13, 2025 at 4:37 PM Scott Mead <scott@meads.us> wrote:
> > > > > I'll open by proposing that we prevent the planner from automatically
> > > > > selecting parallel plans by default
> > > > >
> > > > > What is the fallout? When a high-volume, low-latency query flips to
> > > > > parallel execution on a busy system, we end up in a situation where
> > > > > the database is effectively DDOSing itself with a very high rate of
> > > > > connection establish and tear-down requests.
> >
> > You are painting a bleak picture indeed. I get to see PostgreSQL databases
> > in trouble regularly, but I have not seen anything like what you describe.
> >
> > With an argument like that, you may as well disable nested loop joins.
> > I have seen enough cases where disabling nested loop joins, without any
> > deeper analysis, made very slow queries reasonably fast.
>
> My argument is that parallel query should not be allowed to be invoked
> without user intervention. Yes, nested loop can have a similar impact,
> but let's take a look at the breakdown at scale of PQ:
>
> [pgbench run that shows that parallel query is bad for throughput]

I think that your experiment is somewhat misleading. Sure, if you
overload the machine with parallel workers, that will eventually also
harm the query response time. But many databases out there are not
overloaded, and the shorter response time that parallel query offers
makes many users happy.

It is well known that what is beneficial for response time is detrimental
for the overall throughput and vice versa. Now parallel query clearly is a
feature that is good for response time and bad for throughput, but that is
not necessarily wrong.

Essentially, you are arguing that the default configuration should favor
throughput over response time.

> Going back to the original commit which enabled PQ by default[1], it was
> done so that the feature would be tested during beta. I think it's time
> that we limit the accidental impact this can have on users by disabling
> the feature by default.

I disagree.
My experience is that parallel query often improves the user experience.
Sure, there are cases where I recommend disabling it, but I think that
disabling it by default would be a move in the wrong direction.

On the other hand, I have also seen cases where bad estimates trigger
parallel query by mistake, making queries slower. So I'd support an
effort to increase the default value for "parallel_setup_cost".

Yours,
Laurenz Albe
On Wed, May 21, 2025, at 3:50 AM, Laurenz Albe wrote:
> On Tue, 2025-05-20 at 16:58 -0400, Scott Mead wrote:
> > On Wed, May 14, 2025, at 4:06 AM, Laurenz Albe wrote:
> > > On Tue, 2025-05-13 at 17:53 -0400, Scott Mead wrote:
> > > > On Tue, May 13, 2025, at 5:07 PM, Greg Sabino Mullane wrote:
> > > > > On Tue, May 13, 2025 at 4:37 PM Scott Mead <scott@meads.us> wrote:
> > > > > > I'll open by proposing that we prevent the planner from automatically
> > > > > > selecting parallel plans by default
> > > > > >
> > > > > > What is the fallout? When a high-volume, low-latency query flips to
> > > > > > parallel execution on a busy system, we end up in a situation where
> > > > > > the database is effectively DDOSing itself with a very high rate of
> > > > > > connection establish and tear-down requests.
> > >
> > > You are painting a bleak picture indeed. I get to see PostgreSQL databases
> > > in trouble regularly, but I have not seen anything like what you describe.
> > >
> > > With an argument like that, you may as well disable nested loop joins.
> > > I have seen enough cases where disabling nested loop joins, without any
> > > deeper analysis, made very slow queries reasonably fast.
> >
> > My argument is that parallel query should not be allowed to be invoked
> > without user intervention. Yes, nested loop can have a similar impact,
> > but let's take a look at the breakdown at scale of PQ:
> >
> > [pgbench run that shows that parallel query is bad for throughput]
>
> I think that your experiment is somewhat misleading. Sure, if you
> overload the machine with parallel workers, that will eventually also
> harm the query response time. But many databases out there are not
> overloaded, and the shorter response time that parallel query offers
> makes many users happy.

It's not intended to be misleading, sorry for that.

I agree that PQ can have a positive effect; the point is that our current
defaults will very quickly take a basic workload on a modest (16 CPU) box
and swamp it at a concurrency of 5, which is counter-intuitive, hard to
debug, and usually not desired (again, in the case of a plan that silently
invokes parallelism).

FWIW, setting max_parallel_workers_per_gather to 0 by default only disables
automatic PQ selection behind a SIGHUP (or with a user context); users can
easily re-enable it if they want, without having to restart (similar to
parallel_setup_cost, but without the uncertainty).

During my testing, I actually found (again, at concurrency = 5) that the
default max_parallel_workers and max_worker_processes of 8 is not high
enough. If the default max_parallel_workers_per_gather is 0, then we'd be
able to crank those defaults up (especially max_worker_processes, which
requires a restart).

> It is well known that what is beneficial for response time is detrimental
> for the overall throughput and vice versa.

It is well-known. What's not is that the Postgres defaults will quickly
swamp a machine with parallelism. That's a lesson that many only learn
after it's happened to them. ISTM that the better path is to let someone
try to optimize with parallelism rather than have to fight with it during
an emergent event.

IOW: I'd rather know that I'm walking into a marsh with rattlesnakes than
find out after I'd been bitten.

> Now parallel query clearly is a feature that is good for response time
> and bad for throughput, but that is not necessarily wrong.

Agreed, I do like and use parallel query. I just don't think it's wise that
we allow the planner to make that decision on a user's behalf when the
overhead is this high and the concurrency behavior falls apart so
spectacularly fast.

> Essentially, you are arguing that the default configuration should favor
> throughput over response time.

That's one take on it; I'm actually saying that the default configuration
should protect medium-sized systems from unintended behavior that quickly
degrades performance while being very hard to identify and quantify.

> > Going back to the original commit which enabled PQ by default[1], it was
> > done so that the feature would be tested during beta. I think it's time
> > that we limit the accidental impact this can have on users by disabling
> > the feature by default.
>
> I disagree.
> My experience is that parallel query often improves the user experience.
> Sure, there are cases where I recommend disabling it, but I think that
> disabling it by default would be a move in the wrong direction.
>
> On the other hand, I have also seen cases where bad estimates trigger
> parallel query by mistake, making queries slower. So I'd support an
> effort to increase the default value for "parallel_setup_cost".

I'm open to discussing a value for parallel_setup_cost that protects users
from runaway here; I just haven't been able to find a value that protects
users while simultaneously allowing those who want automatic parallel-plan
selection to take advantage of it.

What I've found (and it sounds somewhat similar to what you are saying) is
that if you use parallelism intentionally and design for it (hardware,
concurrency model, etc.) it's very, very powerful. In cases where it 'just
kicks in', I haven't seen an example that makes users happy.

--
Scott Mead
Amazon Web Services
scott@meads.us