Thread: EXPLAIN progress info

From: Gregory Stark
Not to be opened before May 1st!


I know I should still be looking at code from the March Commitfest but I was
annoyed by this *very* FAQ:

 http://archives.postgresql.org/pgsql-general/2008-04/msg00402.php

This also came up at the UKUUG-Pg conference so it was already on my mind. I
couldn't resist playing with it and trying to solve this problem.


I'm not sure what the right way to get the data back to psql would be.
Probably it should be something other than what it's doing now, an INFO
message. It might even be a special message type? Also need some thought about
what progress info could be reported in other situations aside from the
executor. VACUUM, for example, could report its progress pretty easily.

To use it, run a long query in psql and hit C-\ (unless you're on a system
with SIGINFO support, such as BSD, where the default will probably be C-t).

But no reviewing it until we finish with the March patches!
Do as I say, not as I do :(



--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!


Re: EXPLAIN progress info

From: Tom Lane
Gregory Stark <stark@enterprisedb.com> writes:
> I know I should still be looking at code from the March Commitfest but I was
> annoyed by this *very* FAQ:

>  http://archives.postgresql.org/pgsql-general/2008-04/msg00402.php

Seems like pg_relation_size and/or pgstattuple would solve his problem
better, especially since he'd not have to abort and restart the long
query to find out if it's making progress.  It'd help if pgstattuple
were smarter about "dead" vs uncommitted tuples, though.

            regards, tom lane

Re: EXPLAIN progress info

From: Gregory Stark
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Gregory Stark <stark@enterprisedb.com> writes:
>> I know I should still be looking at code from the March Commitfest but I was
>> annoyed by this *very* FAQ:
>
>>  http://archives.postgresql.org/pgsql-general/2008-04/msg00402.php
>
> Seems like pg_relation_size and/or pgstattuple would solve his problem
> better, especially since he'd not have to abort and restart the long
> query to find out if it's making progress.  It'd help if pgstattuple
> were smarter about "dead" vs uncommitted tuples, though.

I specifically didn't go into detail because I thought it would be pointed out
I should be focusing on the commitfest, not proposing new changes. I just got
caught up with an exciting idea.

But it does *not* abort the current query. It spits out an explain tree with
the number of rows and loops executed so far for each node and returns to
processing the query. You can hit the C-t or C-\ multiple times and see the
actual rows increasing. You could easily imagine a tool like pgadmin
displaying progress bars based on the estimated and actual rows.

There are downsides:

a) the overhead of counting rows and loops is there for every query execution,
even if you don't do explain analyze. It also has to palloc all the
instrumentation nodes.

b) We're also running out of signals to control backends. I used SIGILL but
really that's not exactly an impossible signal, especially for user code from
contrib modules. We may have to start looking into other ways of having the
postmaster communicate with backends. It could open a pipe before it starts
backends for example.

c) It's not easy to be sure that every single CHECK_FOR_INTERRUPTS() site
throughout the backend is a safe place to be calling random node output
functions. I haven't seen any problems and realistically it seems all the node
output functions *ought* to be safe to call from anywhere but it warrants a
second look.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

Re: EXPLAIN progress info

From: Tom Lane
Gregory Stark <stark@enterprisedb.com> writes:
> There are downsides:

Insurmountable ones at that.  This one already makes it a non-starter:

> a) the overhead of counting rows and loops is there for every query execution,
> even if you don't do explain analyze.

and you are also engaging in a flight of fantasy about what the
client-side code might be able to handle.  Particularly if it's buried
inside, say, httpd or some huge Java app.  Yeah, you could possibly make
it work for the case that the problem query was manually executed in
psql, but that doesn't cover very much real-world territory.

You'd be far more likely to get somewhere with a design that involves
looking from another session to see if anything's happening.  In the
case of queries that are making database changes, pgstattuple is
certainly a usable option.  For SELECT-only queries, I agree it's
harder, but it's still possible.  I seem to recall some discussion of
including a low-overhead progress counter of some kind in the
pg_stat_activity state exposed by a backend.  The number of rows so far
processed by execMain.c in the current query might do for the
definition.

            regards, tom lane

Re: EXPLAIN progress info

From: Heikki Linnakangas
Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>> There are downsides:
>
> Insurmountable ones at that.  This one already makes it a non-starter:
>
>> a) the overhead of counting rows and loops is there for every query execution,
>> even if you don't do explain analyze.
>
> and you are also engaging in a flight of fantasy about what the
> client-side code might be able to handle.  Particularly if it's buried
> inside, say, httpd or some huge Java app.  Yeah, you could possibly make
> it work for the case that the problem query was manually executed in
> psql, but that doesn't cover very much real-world territory.

I think there are two different use cases here. The one that Greg's
proposal would be good for is a GUI, like pgAdmin. It would be cool to
see how a query progresses through the EXPLAIN tree when you run it from
the query tool. That would be great for visualizing the executor; a
great teaching tool.

But I agree it's no good for use by a DBA to monitor a live system
running a real-world application. For that we do need something else.

> You'd be far more likely to get somewhere with a design that involves
> looking from another session to see if anything's happening.  In the
> case of queries that are making database changes, pgstattuple is
> certainly a usable option.  For SELECT-only queries, I agree it's
> harder, but it's still possible.  I seem to recall some discussion of
> including a low-overhead progress counter of some kind in the
> pg_stat_activity state exposed by a backend.  The number of rows so far
> processed by execMain.c in the current query might do for the
> definition.

Yeah, something like this would be better for monitoring a live system.

The number of rows processed by execMain.c would only count the number
of rows processed by the top node of the tree, right? For a query that
for example performs a gigantic sort, that would be 0 until the sort is
done, which is not good. It's hard to come up with a single counter
that's representative :-(.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: EXPLAIN progress info

From: Gregory Stark
"Heikki Linnakangas" <heikki@enterprisedb.com> writes:

> Tom Lane wrote:
>> Gregory Stark <stark@enterprisedb.com> writes:
>>> There are downsides:
>>
>> Insurmountable ones at that.  This one already makes it a non-starter:
>>
>>> a) the overhead of counting rows and loops is there for every query execution,
>>> even if you don't do explain analyze.

Note that this doesn't include the gettimeofdays. It's just a couple of
integer increments and assignments per tuple.

>> and you are also engaging in a flight of fantasy about what the
>> client-side code might be able to handle.  Particularly if it's buried
>> inside, say, httpd or some huge Java app.  Yeah, you could possibly make
>> it work for the case that the problem query was manually executed in
>> psql, but that doesn't cover very much real-world territory.

> I think there's two different use cases here. The one that Greg's proposal
> would be good for is a GUI, like pgAdmin. It would be cool to see how a query
> progresses through the EXPLAIN tree when you run it from the query tool. That
> would be great for visualizing the executor; a great teaching tool.

It also means that if a query takes suspiciously long you don't have to run
explain in another session (possibly getting a different plan), and if it
takes far too long to wait for the results, you can at least get an explain
analyze for the partial data.

> Yeah, something like this would be better for monitoring a live system.
>
> The number of rows processed by execMain.c would only count the number of rows
> processed by the top node of the tree, right? For a query that for example
> performs a gigantic sort, that would be 0 until the sort is done, which is not
> good. It's hard to come up with a single counter that's representative :-(.

Alternatively you could count the number of records that went through
ExecProcNode. That would at least give you a holistic view of the query. I
don't see how you would know what the expected end-point would be, though.

I think a better way to get a real "percentage done" would be to add a method
to each node which estimates its percentage done based on the percentage done
its children report and its actual and expected rows and its costs.

So for example a nested loop would calculate P1-(1-P2)/ER1 where P1 is the
percentage done of the first child and P2 is the percentage done of the second
child and ER1 is the expected number of records from the first child. Hash
Join would calculate (P1*C1 + P2*C2)/(C1+C2).

That could get a very good estimate of the percentage done, basically as good
as the estimated number of records.
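The arithmetic described above can be sketched as follows. This is purely
illustrative (not backend code): the function names are hypothetical, and
P1/P2/ER1/C1/C2 are the symbols from the two formulas in the text.

```python
# Sketch of combining children's progress into a node's progress.
# All names here are hypothetical illustrations, not PostgreSQL code.

def nested_loop_progress(p1, p2, er1):
    """Nested loop: the outer child is fraction p1 done, minus the
    unfinished part of the current inner pass, scaled down by the
    expected number of outer rows: P1 - (1 - P2)/ER1."""
    return p1 - (1.0 - p2) / er1

def hash_join_progress(p1, c1, p2, c2):
    """Hash join: each child's progress weighted by its estimated
    cost: (P1*C1 + P2*C2)/(C1 + C2)."""
    return (p1 * c1 + p2 * c2) / (c1 + c2)
```

For example, a nested loop whose outer child is halfway done (P1 = 0.5,
ER1 = 10) reports 0.4 when the current inner pass has just restarted
(P2 = 0) and 0.5 when it completes (P2 = 1).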

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

Re: EXPLAIN progress info

From: Tom Lane
Gregory Stark <stark@enterprisedb.com> writes:
> I think a better way to get a real "percentage done" would be to add a method
> to each node which estimates its percentage done based on the percentage done
> its children report and its actual and expected rows and its costs.

You can spend a week inventing some complicated method, and the patch
will be rejected because it adds too much overhead.  Anything we do here
has to be cheap enough that no one will object to having it turned on
all the time --- else it'll be useless exactly when they need it.

            regards, tom lane

Re: EXPLAIN progress info

From: Gregory Stark
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> Gregory Stark <stark@enterprisedb.com> writes:
>> I think a better way to get a real "percentage done" would be to add a method
>> to each node which estimates its percentage done based on the percentage done
>> its children report and its actual and expected rows and its costs.
>
> You can spend a week inventing some complicated method, and the patch
> will be rejected because it adds too much overhead.  Anything we do here
> has to be cheap enough that no one will object to having it turned on
> all the time --- else it'll be useless exactly when they need it.

Actually Dave made a brilliant observation about this when I described it.
Most nodes can actually estimate their progress without any profiling overhead
at all. In fact they can do so more accurately than using the estimated rows.

Sequential scans, for example, can base a report on the actual block they're
on versus the previously measured end of the file. Bitmap heap scans can
report based on the number of blocks queued up to read.

Index scans are the obvious screw case. I think they would have to have a
counter that they increment on every tuple returned and reset to zero when
restarted. I can't imagine that's really a noticeable overhead though. Limit
and sort would also be a bit tricky.
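The leaf-node estimates above could look something like this. Again a
hypothetical sketch, not backend code; it only shows why the seqscan case
needs no per-tuple work while the index scan case does, and why the latter
is only as good as the planner's row estimate.

```python
# Hypothetical sketch of per-node "percentage done" for leaf nodes.

def seqscan_progress(current_block, total_blocks):
    """Sequential scan: current position in the heap file versus its
    previously measured end. No per-tuple bookkeeping required."""
    return current_block / total_blocks

def indexscan_progress(tuples_returned, estimated_tuples):
    """Index scan: needs a counter incremented on every tuple returned
    (and reset to zero on restart); accuracy depends entirely on the
    planner's row estimate, so clamp at 100%."""
    return min(1.0, tuples_returned / estimated_tuples)
```

So a seqscan 50 blocks into a 200-block table reports 25% exactly, while an
index scan that has already returned more rows than estimated simply pins
at 100% rather than overshooting.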

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

Re: EXPLAIN progress info

From: Heikki Linnakangas
I like this idea in general. I'm envisioning a cool explain display in
pgAdmin that's updated live as the query executes, showing how many
tuples a seq scan at the bottom, and an aggregate above it, have
processed. Drool.

Currently the interface is that you open a new connection, and signal
the backend using the same mechanism as a query cancel. That's fine for
the interactive psql use case, but seems really heavy-weight for the
pgAdmin UI I'm envisioning. You'd have to poll the server, opening a new
connection each time. Any ideas on that? How about a GUC to send the
information automatically every n seconds, for example?

Other than that, this one seems to be the most serious issue:

Tom Lane wrote:
> Gregory Stark <stark@enterprisedb.com> writes:
>> There are downsides:
>
> Insurmountable ones at that.  This one already makes it a non-starter:
>
>> a) the overhead of counting rows and loops is there for every query execution,
>> even if you don't do explain analyze.

I wouldn't be surprised if the overhead of the counters turns out to be
a non-issue, but we'd have to see some testing of that. InstrEndLoop is
the function we're worried about, right?

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com