> ... if it's just me mis-reading the numbers, let me
> know ... it just "feels" off
Here's what I see:
> time psql -c "explain analyze SELECT count(rec_id) FROM url" 186_archives
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=11177.19..11177.19 rows=1 width=4) (actual time=3539.180..3539.184 rows=1 loops=1)
   ->  Seq Scan on url  (cost=0.00..10741.55 rows=174255 width=4) (actual time=0.111..2292.234 rows=215552 loops=1)
 Total runtime: 3539.922 ms
(3 rows)
0.006u 0.000s 0:03.57 0.0% 0+0k 0+0io 0pf+0w
So there doesn't seem to be any discrepancy between what EXPLAIN says
and what time(1) says. It is true that the overhead of EXPLAIN ANALYZE
looks a bit high:
> time psql -c "SELECT count(rec_id) FROM url" 186_archivescount
--------215169
(1 row)
0.000u 0.004s 0:00.77 0.0% 0+0k 0+0io 0pf+0w
If I'm doing the arithmetic correctly, this makes the measurement
overhead about 13 microseconds per row ((3.57 s - 0.77 s) over 215552
rows). Presumably that's almost
entirely spent on the two gettimeofday() calls, so they are costing
upwards of 6 microseconds apiece, which seems like a lot for a modern
processor. Might be worth griping to the BSD kernel folk...
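For anyone who wants to check that number on their own box, a throwaway
micro-benchmark along these lines should do (just a sketch, not taken
from the PostgreSQL sources; the iteration count is arbitrary):

#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
	/* Time a large number of back-to-back gettimeofday() calls, the
	 * same pattern EXPLAIN ANALYZE uses to start/stop a per-row timer. */
	const long	iters = 1000000;
	struct timeval start, stop, t1, t2;
	long		i;
	double		elapsed_usec;

	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++)
	{
		gettimeofday(&t1, NULL);	/* "start" of a row */
		gettimeofday(&t2, NULL);	/* "stop" of a row */
	}
	gettimeofday(&stop, NULL);

	elapsed_usec = (stop.tv_sec - start.tv_sec) * 1000000.0 +
		(stop.tv_usec - start.tv_usec);
	printf("%.3f usec per gettimeofday() call\n",
		   elapsed_usec / (2.0 * iters));
	return 0;
}

Twice whatever per-call figure that prints is roughly the floor that
EXPLAIN ANALYZE adds per row on that platform.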
Still, though, I think we could be proud that we've gotten the price of
a seqscan down to the point where a couple of gettimeofday() calls per
row are dominating the runtime.
regards, tom lane