Thread: Trapping QUERY_CANCELED: yes, no, maybe?
Now that it is possible for plpgsql functions (and, no doubt, soon others) to trap errors, I think we need a little bit of discussion about what kind of restrictions should be put on trapping the QUERY_CANCELED error. If we treat QUERY_CANCELED as an absolutely ordinary error, it would be fairly easy for a carelessly written function to disable the backend from responding to ^C or statement_timeout --- it need only catch the error and keep on processing. On the other hand, it's not impossible to envision scenarios where you'd like to be able to catch QUERY_CANCELED. I can see people using that ability in combination with statement_timeout, in particular. The compromise that I put into plpgsql for the moment works like this: 1. EXCEPTION WHEN OTHERS THEN matches all error types *except* QUERY_CANCELED; thus you cannot accidentally catch that error. (When the EXCEPTION clause does not match the error type, the error just propagates outward, so QUERY_CANCELED will always force control back to the idle loop.) 2. But you can explicitly say EXCEPTION WHEN QUERY_CANCELED THEN if you really want to catch it. I can see that point #2 might be controversial, and that some might feel that QUERY_CANCELED should be untrappable, period. Does anyone want to argue for that? Does anyone think #1 is a bad idea? There is also an implementation-level issue, which is whether we should rely on the PLs' individual error handlers (which are mostly yet to be written) to enforce such policies consistently. As long as QUERY_CANCELED is the only special case, things aren't too bad, but if there's any prospect of having more special cases it could get messy. I thought about inventing a new elevel category, but really that does not help much, since we'd still be dependent on the error handlers to be cooperative. (We can't not run the handlers, since there is very likely state that they *must* undo before losing control.) Anyone see a cleaner approach? regards, tom lane
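[The two rules above can be sketched in plpgsql roughly as follows; risky_work() is a placeholder name, and the syntax assumes the error-trapping support under discussion:]

```sql
-- Rule #1: OTHERS deliberately does not match QUERY_CANCELED,
-- so this block cannot accidentally swallow a ^C or a
-- statement_timeout cancel; the cancel propagates outward.
BEGIN
    PERFORM risky_work();   -- hypothetical
EXCEPTION
    WHEN OTHERS THEN
        RAISE NOTICE 'trapped an ordinary error';
END;

-- Rule #2: naming the condition explicitly does trap it.
BEGIN
    PERFORM risky_work();
EXCEPTION
    WHEN QUERY_CANCELED THEN
        RAISE NOTICE 'query was canceled; doing cleanup';
END;
```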
You are asking if a function itself might want to set query cancel and catch the cancel and do something else?

For 7.5 I would be happy just canceling the entire thing.

---------------------------------------------------------------------------

Tom Lane wrote:
> Now that it is possible for plpgsql functions (and, no doubt, soon
> others) to trap errors, I think we need a little bit of discussion
> about what kind of restrictions should be put on trapping the
> QUERY_CANCELED error. [...]

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  + If your life is a hard drive,      |  13 Roberts Road
  + Christ can be your backup.         |  Newtown Square, Pennsylvania 19073
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> You are asking if a function itself might want to set query cancel and
> catch the cancel and do something else?

Exactly. There's a proof-of-concept test at the bottom of regress/sql/plpgsql.sql, wherein a function gets control back from a query that would have run for an unreasonably long time.

			regards, tom lane
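[The regression test works roughly along these lines; this is a paraphrase rather than the verbatim test. tenk1 is the standard 10,000-row regression table, and the function/timeout names here are illustrative:]

```sql
CREATE FUNCTION trap_timeout() RETURNS void AS $$
DECLARE
    x int;
BEGIN
    -- A deliberately runaway query: an unconstrained three-way
    -- self-join of tenk1 that statement_timeout will cancel.
    SELECT count(*) INTO x FROM tenk1 a, tenk1 b, tenk1 c;
EXCEPTION
    WHEN QUERY_CANCELED THEN
        -- Control returns here instead of aborting to the idle loop.
        RAISE NOTICE 'query canceled; function regained control';
END;
$$ LANGUAGE plpgsql;

SET statement_timeout TO 1000;   -- one second, in milliseconds
SELECT trap_timeout();
RESET statement_timeout;
```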
On Sat, 31 Jul 2004 21:24:33 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Exactly. There's a proof-of-concept test at the bottom of
> regress/sql/plpgsql.sql, wherein a function gets control back
> from a query that would have run for an unreasonably long time.

referring to

| -- we assume this will take longer than 1 second:
| select count(*) into x from tenk1 a, tenk1 b, tenk1 c;

On Thu, 29 Aug 2002 13:27:36 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> You mean, that the test might fail on a system that takes more than
>> ten seconds to INSERT or UPDATE a single row? I don't think this is a
>> real problem.
>
> I don't like depending on a timeout *at all* in a regression test;
> the exact value of the timeout is not particularly relevant to my
> concern about it.

Maybe

| SELECT sleep('0:0:2'::interval);

as used in regress/sql/stats.sql is a better way to ensure that the query takes longer than one second?

Servus
Manfred
Manfred Koizar <mkoi-pg@aon.at> writes:
> referring to
> | -- we assume this will take longer than 1 second:
> | select count(*) into x from tenk1 a, tenk1 b, tenk1 c;

> Maybe
> SELECT sleep('0:0:2'::interval);
> as used in regress/sql/stats.sql is a better way to ensure that the
> query takes longer than one second?

You think there's a serious risk of failure there ;-) ? By my count the query will try to generate one trillion join rows.

			regards, tom lane
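[The count checks out: tenk1 holds ten thousand rows, so the unconstrained three-way self-join produces 10,000^3 candidate rows. A quick sanity check, casting to bigint to avoid integer overflow:]

```sql
SELECT 10000::bigint * 10000 * 10000;
-- 1000000000000, i.e. one trillion rows
```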
On Fri, 06 Aug 2004 18:55:49 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> You think there's a serious risk of failure there ;-) ?

Not on my hardware...

Servus
Manfred