On 11 January 2013 15:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Tom Lane escribió:
>> Pavel Stehule <pavel.stehule@gmail.com> writes:
>> > My proposal is aimed at a different purpose. For example, we have a
>> > limit of 20 minutes for almost all queries, and after this limit we
>> > kill them. But we need to know a little more about these bad queries,
>> > and we hope the execution plan can give that additional info. We have
>> > the same motivation as people who use auto_explain for slow queries,
>> > but we cannot wait for the query to complete.
>>
>> Oh, sorry, not enough caffeine yet --- somehow I was thinking about
>> pg_stat_statements not auto_explain.
>>
>> However, auto_explain is even worse on the other problem. You flat out
>> cannot do catalog lookups in a failed transaction, but there's no way to
>> print a decompiled plan without such lookups. So it won't work. (It
>> would also be appropriate to be suspicious of whether the executor's
>> plan state tree is even fully set up at the time the error is thrown...)
>
> Maybe it'd work to save the query source text and parameter values
> somewhere and log an explain in a different session.

I think this would be an important feature.

But then I also want to be able to kill a query without it doing 50
pushups and a backflip before it dies, since that will inevitably go
wrong.

Perhaps we could have a new signal that means "exit gracefully, with
info if requested". That way we can keep "kill" meaning "kill".

An even better feature would be the ability to send a signal to a
running query asking it to log its currently executing plan. You could
then ask "why so slow?" before deciding whether to kill it, and we
wouldn't need to overload the kill signal at all. That is the most
useful part of a "progress indicator" tool, without the complexity.
--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services