Re: Mini improvement: statement_cost_limit - Mailing list pgsql-hackers
From: Simon Riggs
Subject: Re: Mini improvement: statement_cost_limit
Msg-id: 1217790742.3934.247.camel@ebony.t-mobile.de
In response to: Re: Mini improvement: statement_cost_limit (daveg <daveg@sonic.net>)
Responses:
  Re: Mini improvement: statement_cost_limit
  Re: Mini improvement: statement_cost_limit
List: pgsql-hackers
On Sun, 2008-08-03 at 00:44 -0700, daveg wrote:

> On Sat, Aug 02, 2008 at 09:30:08PM +0200, Hans-Jürgen Schönig wrote:
> > On Aug 2, 2008, at 8:38 PM, Tom Lane wrote:
> > >
> > > Andrew Dunstan <andrew@dunslane.net> writes:
> > > > Hans-Jürgen Schönig wrote:
> > > > > i introduced a GUC called statement_cost_limit which can be used to
> > > > > error out if a statement is expected to be too expensive.
> > >
> > > > You clearly have far more faith in the cost estimates than I do.
> > >
> > > Wasn't this exact proposal discussed and rejected awhile back?
> >
> > i don't remember precisely.
> > i have seen it on simon's wiki page and it is something which would
> > have been useful in some cases in the past.

I still support it. Regrettably, many SQL developers introduce product
joins and other unintentional errors. Why let problem queries through?
Security-wise they make great Denial of Service attacks, bringing the
server to its knees better than most ways I know, especially in
conjunction with a nice hefty work_mem setting. A 27-table product join
burns memory, CPU, I/O and disk space all in one simple killer query.

If anybody thinks the costs are inaccurate, don't use it. Or better
still, improve the cost models. It isn't any harder or easier to find a
useful value than it is for statement_timeout. What's the difference
between picking an arbitrary time and an arbitrary cost? You need to
adjust the value according to people's complaints in both cases.

> I think a variation on this could be very useful in development and test
> environments. Suppose it raised a warning or notice if the cost was over
> the limit. Then one could set a limit of a few million on the development
> and test servers and developers would at least have a clue that they needed
> to look at explain for that query. As it is now, one can exhort them to
> run explain, but it has no effect.
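To make the comparison with statement_timeout concrete, here is a hypothetical postgresql.conf fragment. Note that statement_timeout is a real GUC, while statement_cost_limit is only the proposal under discussion here and was never committed; the names and units are illustrative:

```
# postgresql.conf (sketch -- statement_cost_limit is hypothetical)
statement_timeout = '24h'          # real GUC: cancel after elapsed wall time
#statement_cost_limit = 5000000    # proposed: reject before execution if the
                                   # planner's estimated cost exceeds this
```

Both knobs require picking an arbitrary threshold and tuning it from complaints; the difference is that the cost limit fires before any resources are consumed, while the timeout fires only after 24 hours of wasted work.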
> Instead we later see queries killed
> by a 24 hour timeout with estimated costs ranging from "until they unplug
> the machine and dump it" to "until the sun turns into a red giant".

Great argument. So that's 4 in favour at least.

A compromise would be to have log_min_statement_cost (or
warn_min_statement_cost), which would at least help find these problems
in testing before we put things live, but that still won't help with
production issues.

Another alternative would be to have a plugin that can examine the plan
immediately after the planner runs, so you can implement this yourself,
plus some other possibilities.

-- 
Simon Riggs
www.2ndQuadrant.com
PostgreSQL Training, Services and Support