"Tom Lane" <tgl@sss.pgh.pa.us> writes:
> Gregory Stark <stark@enterprisedb.com> writes:
>> I know I should still be looking at code from the March Commitfest but I was
>> annoyed by this *very* FAQ:
>
>> http://archives.postgresql.org/pgsql-general/2008-04/msg00402.php
>
> Seems like pg_relation_size and/or pgstattuple would solve his problem
> better, especially since he'd not have to abort and restart the long
> query to find out if it's making progress. It'd help if pgstattuple
> were smarter about "dead" vs uncommitted tuples, though.
I specifically didn't go into detail because I thought it would be pointed out
that I should be focusing on the commitfest, not proposing new changes. I just
got caught up in an exciting idea.
But it does *not* abort the current query. It spits out an explain tree with
the number of rows and loops executed so far for each node, then returns to
processing the query. You can hit C-t or C-\ multiple times and watch the
actual row counts increase. You could easily imagine a tool like pgadmin
displaying progress bars based on the estimated and actual rows.
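Concretely, the shape of the change is something like the sketch below. This
is a minimal illustration only; the flag, handler, and ExplainPrintProgress
names are made up here for clarity, not what the patch actually calls them:

    static volatile sig_atomic_t ProgressRequestPending = 0;

    /* signal handler: just set a flag and let the mainline notice it */
    static void
    progress_request_handler(SIGNAL_ARGS)
    {
        ProgressRequestPending = 1;
    }

    /* reached from ProcessInterrupts() via CHECK_FOR_INTERRUPTS() */
    static void
    ProcessProgressRequest(QueryDesc *queryDesc)
    {
        if (ProgressRequestPending)
        {
            ProgressRequestPending = 0;

            /*
             * Walk the PlanState tree printing estimated vs. actual rows
             * and loop counts from each node's Instrumentation struct.
             * (Hypothetical helper, standing in for the explain code.)
             */
            ExplainPrintProgress(queryDesc);
        }
        /* then fall through and resume executing the query; no abort */
    }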
There are downsides:
a) the overhead of counting rows and loops is there for every query execution,
even if you don't do EXPLAIN ANALYZE. It also has to palloc all the
instrumentation nodes. (See the first sketch after this list.)
b) We're also running out of signals to control backends. I used SIGILL, but
that's hardly a signal that can never occur legitimately, especially with user
code from contrib modules. We may have to start looking into other ways for the
postmaster to communicate with backends; it could open a pipe before it starts
backends, for example. (See the second sketch after this list.)
c) It's not easy to be sure that every single CHECK_FOR_INTERRUPTS() site
throughout the backend is a safe place to be calling arbitrary node output
functions. I haven't seen any problems, and realistically all the node output
functions *ought* to be safe to call from anywhere, but it warrants a second
look.
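For (a), the first sketch: a simplified version of the counters in
src/include/executor/instrument.h, to show what every query would pay
unconditionally rather than only under EXPLAIN ANALYZE. The real struct has
more fields; this is pared down to just rows and loops:

    typedef struct Instrumentation
    {
        double  nloops;     /* number of run cycles for this node */
        double  ntuples;    /* total tuples produced so far */
    } Instrumentation;

    /* once per plan node at executor startup: */
    planstate->instrument = (Instrumentation *)
        palloc0(sizeof(Instrumentation));

    /* once per rescan of the node: */
    planstate->instrument->nloops += 1;

    /* in the hot path, once for every tuple the node returns: */
    planstate->instrument->ntuples += 1;

The per-tuple work is just an increment, but it lands in the executor's inner
loop for every node of every plan, which is why it shows up as overhead at
all.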
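For (b), the second sketch: a plain POSIX illustration (not existing
postmaster code) of the pipe idea. The postmaster creates a pipe before
forking, keeps the write end, and the backend checks the read end at the same
places CHECK_FOR_INTERRUPTS() runs today:

    pid_t   pid;
    int     fd[2];

    if (pipe(fd) < 0)
        elog(FATAL, "could not create pipe: %m");

    pid = fork();
    if (pid == 0)
    {
        /* backend: keep the read end; a select()/poll() or a
         * nonblocking read() here replaces the signal handler */
        close(fd[1]);
    }
    else if (pid > 0)
    {
        /* postmaster: keep the write end; writing one command byte
         * replaces sending a signal, with room for far more than the
         * few spare signal numbers we have left */
        close(fd[0]);
    }
    else
        elog(FATAL, "fork failed: %m");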
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning