Disable parallel query by default - Mailing list pgsql-hackers
From | Scott Mead
Subject | Disable parallel query by default
Msg-id | a5916f83-de79-4a40-933a-fb0d9ba2f5a0@app.fastmail.com
List | pgsql-hackers
Hello Hackers,

Over the last 24 months, I've noticed a pattern amongst users with unexpected plan flips landing on parallel plans. 77cd477 (9.6 beta) defaulted parallel query on (max_parallel_degree = 2); it's been nine years and I'd like to open the discussion to see what our thoughts are, especially since it seems that the decision was made for 9.6 beta testing and never really revisited.

I'll open by proposing that we prevent the planner from automatically selecting parallel plans by default, opting instead to allow users to set their max_parallel_workers_per_gather as needed. IOW: let's make the default max_parallel_workers_per_gather = 0 for V18 forward.

Just to be clear, my concern isn't with parallel query in general; the issue we see is when high-frequency, low-latency queries start executing with parallelism on their own (i.e. the traditional plan flip with a twist). Given that max_parallel_workers_per_gather is dynamic and easily configured per session (or even per query with something like the pg_hint_plan extension), dissuading the planner from opting in to parallelism by default will contain the fallout that we see when plans flip to parallel execution.

What is the fallout? When a high-volume, low-latency query flips to parallel execution on a busy system, we end up in a situation where the database is effectively DDoSing itself with a very high rate of connection establish and tear-down requests. Even if the query ends up being faster (it generally does not), the CPU requirements for the same workload rapidly double or worse, with most of the time spent in the OS (context switch, fork(), destroy()). When looking at the database, you'll see a high load average and high wait for CPU, with very little actual work being done within the database. For an example of scale, we have seen users with low connection rates (<= 5 / minute) suddenly spike to between 2000 and 3000 connect requests per minute until the system grinds to a halt.
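To make the opt-in workflow concrete, here is a minimal sketch using stock GUCs (ALTER SYSTEM assumes superuser access, and big_table is a hypothetical table name):

```sql
-- Proposed default: the planner never selects parallel plans on its own.
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
SELECT pg_reload_conf();

-- Opt back in per session for a known-expensive workload...
SET max_parallel_workers_per_gather = 4;

-- ...or per statement, scoped to a single transaction via SET LOCAL.
BEGIN;
SET LOCAL max_parallel_workers_per_gather = 4;
SELECT count(*) FROM big_table;  -- hypothetical long-running aggregate
COMMIT;
```

This is the containment property the proposal relies on: parallelism stays available everywhere, but only where a user has deliberately asked for it.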
I'm looking forward to the upcoming monitoring in e7a9496 (Add two attributes to pg_stat_database for parallel workers activity); it will make it easier to empirically prove that parallel query is being used. I don't think the patch goes far enough, though: we really need the ability to pinpoint the query and the specific variables used that triggered the parallel plan. When we tell a user that parallel query is in use and suspected, it is almost always met with "no, we don't use that feature". Users do not even realize that it's happening and quickly ask for a list of all queries that have ever undergone parallel execution. It's pretty much impossible to give an accurate list of these because there is no instrumentation available (even in the new patch) to get to the per-query level.

When a user says "I'm not using parallel query", we have to walk through circumstantial evidence of its use. I typically combine IPC:BgWorkerShutDown, IPC:ParallelFinish, and IO:DataFileRead (this helps nail it for sequential scans) with a high rate of connection establishment. Even when you look at all of these together, it's still hard to see that parallelism is the cause, but when we disable automated plan selection, system stability returns.

The recommendation that I give to users is pretty straightforward: "Disable automatic parallel query; enable it for queries where you find substantial savings and can control the rate of execution." I always tell users that if they're using parallel query for anything that should execute in less than 5 minutes, they're probably pushing on the wrong tuning strategy, as the load induced by the parallel query infrastructure is likely going to negate the savings that they're getting.

I'm curious to hear what others think of this proposal. I've dealt with so many of these over the last 24 months, most of them causing strife along the way, that I'm interested in what others think.
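The circumstantial-evidence walk above can at least be partially automated today. A sketch (the wait event names match current releases; the pg_stat_database columns are the ones e7a9496 adds, so the second query assumes a server carrying that commit):

```sql
-- Sample sessions currently waiting on parallel-query infrastructure.
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event IN ('BgWorkerShutdown', 'ParallelFinish');

-- Per-database parallel worker counters from e7a9496.
SELECT datname, parallel_workers_to_launch, parallel_workers_launched
FROM pg_stat_database
WHERE datname IS NOT NULL;
```

Neither view gets you to the per-query level, which is exactly the gap described above; repeated sampling of the first query is the best approximation available.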
--
Scott Mead
Amazon Web Services
scott@meads.us

Note: When testing the attached patch, there are failures in misc_sanity.out and misc_functions.out (replication origin name is too long). I assume these are unrelated to my attached patch.