At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in
> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:
> > > In the current patch, log_before_query (which will become
> > > log_before_execution) has no effect when log_analyze is enabled, in
> > > order to avoid logging the same plan twice. Instead, would it be
> > > better to always log the plan twice, before and after execution,
> > > when log_before_query is enabled, regardless of log_min_duration or
> > > log_analyze?
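
(As a side note, the suppression described here might amount to just a
guard like the following; the GUC names follow this thread, and
log_plan_before_execution() is a hypothetical helper:)

    /*
     * Skip pre-execution logging when log_analyze will log the same
     * plan after execution anyway.
     */
    if (auto_explain_log_before_query && !auto_explain_log_analyze)
        log_plan_before_execution(queryDesc);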
> >
> > Honestly, I don't think showing plans for all queries is useful
> > behavior.
> >
> > If you allow the stuck query to be canceled, showing the plan in a
> > PG_FINALLY() block in explain_ExecutorRun would work, which would
> > look like this.
...
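
(The code was trimmed above; a minimal sketch of that idea, not the
original hunk, could look like the following. PG_FINALLY(),
NewExplainState() and ExplainPrintPlan() are the real APIs; the GUC
check and the log message are assumptions:)

static void
explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction,
                    uint64 count, bool execute_once)
{
    nesting_level++;
    PG_TRY();
    {
        if (prev_ExecutorRun)
            prev_ExecutorRun(queryDesc, direction, count, execute_once);
        else
            standard_ExecutorRun(queryDesc, direction, count, execute_once);
    }
    PG_FINALLY();
    {
        nesting_level--;

        /*
         * This block also runs when the query is canceled or errors
         * out, so the plan of a stuck query can still be logged here.
         */
        if (auto_explain_log_before_query && nesting_level == 0)
        {
            ExplainState *es = NewExplainState();

            es->format = auto_explain_log_format;
            ExplainBeginOutput(es);
            ExplainPrintPlan(es, queryDesc);
            ExplainEndOutput(es);

            ereport(LOG,
                    (errmsg("plan:\n%s", es->str->data),
                     errhidestmt(true)));
        }
    }
    PG_END_TRY();
}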
> It can work, but it is still not a good enough solution. We need a
> "query debugger" that allows getting query execution metrics online.
If we need a live plan dump of a running query, we could do that using
some kind of inter-backend triggering. (I'm not sure whether PG offers
an inter-backend signalling facility usable by extensions..)
=# select auto_explain.log_plan_backend(12345);
postgresql.log:
LOG: requested plan dump: <blah, blah>..
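
(The triggering side might look like this C sketch, assuming a
hypothetical PROCSIG_LOG_PLAN member were added to core's
ProcSignalReason enum; SendProcSignal() itself is real, but extensions
cannot register their own reasons today:)

#include "postgres.h"
#include "fmgr.h"
#include "storage/procsignal.h"

PG_FUNCTION_INFO_V1(log_plan_backend);

/*
 * Ask the backend with the given PID to dump its current plan to the
 * server log.  PROCSIG_LOG_PLAN is hypothetical.
 */
Datum
log_plan_backend(PG_FUNCTION_ARGS)
{
    int         pid = PG_GETARG_INT32(0);

    if (SendProcSignal(pid, PROCSIG_LOG_PLAN, InvalidBackendId) < 0)
        ereport(WARNING,
                (errmsg("could not signal backend with PID %d", pid)));

    PG_RETURN_VOID();
}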
> There was a problem with memory management for passing plans between
> processes. Can we use temp files instead of shared memory?
=# select auto_explain.dump_plan_backend(12345);
  pid  |   query   |             plan
-------+-----------+--------------------------------
 12345 | SELECT 1; | Result (cost=....) (actual..)
(1 row)
Doesn't DSA work? I think it would be easier to handle than files.
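
(The real DSA API in utils/dsa.h would keep the handoff fairly small; a
sketch, where exchanging the dsa_handle and dsa_pointer through a small
shmem slot is left out:)

#include "postgres.h"
#include "utils/dsa.h"

/* Dumping backend: copy the EXPLAIN text into a DSA segment. */
static dsa_pointer
publish_plan(dsa_area *area, const char *plan_text)
{
    Size        len = strlen(plan_text) + 1;
    dsa_pointer dp = dsa_allocate(area, len);

    memcpy(dsa_get_address(area, dp), plan_text, len);
    return dp;              /* valid in any backend attached to "area" */
}

/* Requesting backend: attach, copy the plan back out, detach. */
static char *
read_plan(dsa_handle handle, dsa_pointer dp)
{
    dsa_area   *area = dsa_attach(handle);
    char       *plan = pstrdup(dsa_get_address(area, dp));

    dsa_detach(area);
    return plan;
}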
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center