Re: Re: Should we have an optional limit on the recursion depth of recursive CTEs? - Mailing list pgsql-hackers

From Peter Geoghegan
Subject Re: Re: Should we have an optional limit on the recursion depth of recursive CTEs?
Date
Msg-id CAEYLb_U8nr0FJgSsPrsXsVjM9UfmBzqsYnc3r0LMMRjexqDCHQ@mail.gmail.com
In response to Re: Re: Should we have an optional limit on the recursion depth of recursive CTEs?  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 16 August 2011 14:43, Robert Haas <robertmhaas@gmail.com> wrote:
> What would the following program do?
> 10 PRINT "HELLO"
> 20 GOTO 10
>
> And gets back an infinite stream of HELLO HELLO HELLO HELLO HELLO....

Heh, that's pretty funny. It also complements my view, because the
Turing test is only failed because the human eventually thinks "hmm,
he's taking way too long to get to the '...and so on, ad infinitum' bit".

> I don't think it's going to be feasible to implement a security
> restriction that keeps untrusted users from hosing the machine with a
> long running CTE; there are nearly infinitely many ways for an
> untrusted user who can run queries to hose the machine, and plugging
> one of them imperfectly is going to get us pretty much nowhere.

Unless that happens to be the exact area that is a problem for you,
due perhaps to a poorly written application. We're protecting against
Murphy, not Machiavelli - if your users are malicious, or are
motivated by seeing if they can somehow hose the machine for kicks,
clearly all bets are off. This mindset happens to meet the needs of
industry pretty well, IMHO. That said, I admit the case for making a
separate SUSET GUC is the least compelling one I've made on this
thread, if only because of the glaring inconsistency with other areas.

> On the other hand, there is perhaps a reasonable argument to be made that
> we should cut off CTE processing at some point to prevent
> *inadvertent* exhaustion of system resources.  Or even query
> processing more generally.
>
> In fact, we already have some things sort of like this: you can use
> statement_timeout to kill queries that run for too long, and we just
> recently added temp_file_limit to kill those that eat too much temp
> file space.

statement_timeout is far too blunt an instrument to deal with this
problem. For one thing, execution time varies with many external
factors, whereas the number of iterations is a consistent, useful
metric for the WITH query in isolation. For another, it prevents the
DBA from managing known problems with deployed apps per database -
maybe they have a reporting query that is expected to take a really
long time. Sure, they can increase statement_timeout when it is run,
but that's another thing to remember.
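
Of course, the workaround that exists today is to carry a depth
counter in the recursive term and filter on it. A minimal sketch (the
table and names here are made up purely for illustration):

-- Hypothetical org chart; any self-referencing table will do.
CREATE TABLE employee (
    id         int PRIMARY KEY,
    manager_id int REFERENCES employee(id),
    name       text
);

-- Walk the reporting chain, but refuse to recurse more than
-- 100 levels deep, regardless of what the data looks like.
WITH RECURSIVE chain(id, name, depth) AS (
    SELECT id, name, 1
    FROM employee
    WHERE manager_id IS NULL
  UNION ALL
    SELECT e.id, e.name, c.depth + 1
    FROM employee e
    JOIN chain c ON e.manager_id = c.id
    WHERE c.depth < 100   -- the manual "recursion depth limit"
)
SELECT * FROM chain;

That works, but every query author has to remember to do it; a limit
enforced by the server would catch the queries where nobody did.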

> I can see a good case for memory_limit and
> query_cpu_limit and maybe some others.  cte_recursion_depth_limit
> wouldn't be all that high on my personal list, I guess, but the
> concept doesn't seem completely insane.

I agree that those things would be much better than this. This is
still a useful, easy-to-implement feature though.
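
For illustration, usage of the proposed setting might look something
like this (entirely hypothetical, of course - nothing like it exists
yet, and the name is just the one used above):

SET cte_recursion_depth_limit = 1000;  -- hypothetical GUC from this thread

WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION ALL
    SELECT n + 1 FROM t  -- no terminating condition whatsoever
)
SELECT * FROM t;
-- presumably raises an error once the recursive term has iterated
-- 1000 times, rather than running until cancelled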

On 16 August 2011 15:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Indeed: the real question here is why a recursive CTE is any worse
> than, say, an accidentally unconstrained join (or three or four...).

It's much worse, because an unconstrained join query will not all of
a sudden lose its terminating condition. For the most part it will
take forever, or practically forever, predictably and consistently,
even as the contents of tables change over time.
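
To make that concrete with the hypothetical employee table from
before: an innocent-looking data change is all it takes for a
previously well-behaved WITH query to stop terminating.

-- Suppose a careless update makes two employees each other's manager:
UPDATE employee SET manager_id = 2 WHERE id = 1;
UPDATE employee SET manager_id = 1 WHERE id = 2;

-- Walking *up* the chain from employee 1 now cycles 1 -> 2 -> 1
-- forever, because UNION ALL never notices repeated rows:
WITH RECURSIVE up(id, manager_id) AS (
    SELECT id, manager_id FROM employee WHERE id = 1
  UNION ALL
    SELECT e.id, e.manager_id
    FROM employee e
    JOIN up u ON e.id = u.manager_id
)
SELECT * FROM up;   -- never terminates

Switching UNION ALL to UNION happens to save this particular query,
but only because the selected columns uniquely identify each row; a
depth cap is a more general defence.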

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

