On Sun, 19 Jan 2003 14:53:49 -0300, Pablo Yaggi
<pyaggi@aulamagna.com.ar> wrote:
>if not, can I estimate the time from the explain result ? how ?
>this is the explain result about the query I'm running:
>
>Hash Join (cost=100347487.59..145240934.03 rows=26 width=60)
> -> Index Scan using inars_rectificaciones_inx on inars_rectificaciones b2
> (cost=0.00..37839140.62 rows=9546466 width=34)
^^^^^^^ ^^
(c) (d)
> -> Hash (cost=100347379.07..100347379.07 rows=43407 width=26)
> -> Seq Scan on inars_nocontrib_perm1
> (cost=100000000.00..100347379.07 rows=43407 width=26)
^^^^^^^^^^^^ ^^^^^^
(a) (b)
The total cost of 145240934.03 at the top of the EXPLAIN output is above
100000000, the penalty the planner adds to disabled plan types, which
basically tells us that you have SET enable_seqscan = OFF (a).
(b) looks a bit strange. Does inars_nocontrib_perm1 have 300000 pages
but only 43407 tuples? If so, you should VACUUM more often. Anyway
this seq scan contributes only a small fraction to the overall cost.
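For what it's worth, you can back out the table size the planner saw from
that seq scan cost. This is only a sketch, assuming the classic cost model
(one cost unit per sequential page read, cpu_tuple_cost = 0.01) and the
100000000 penalty for disabled plan types:

```python
# Decompose the seq scan cost 100347379.07 from the EXPLAIN output.
# Assumptions: disabled plans get a flat 1e8 penalty, sequential page
# reads cost 1.0 unit each, and cpu_tuple_cost is at its 0.01 default.
DISABLE_COST = 100_000_000.0
SEQ_PAGE_COST = 1.0
CPU_TUPLE_COST = 0.01

total_cost = 100_347_379.07   # Seq Scan cost from EXPLAIN
tuples = 43_407               # rows=43407 from EXPLAIN

io_and_cpu = total_cost - DISABLE_COST
pages = (io_and_cpu - tuples * CPU_TUPLE_COST) / SEQ_PAGE_COST
print(f"implied relation size: about {pages:,.0f} pages")  # ~346,945
```

Roughly 347000 pages for 43407 tuples, which is why (b) looks like a
relation badly in need of a VACUUM.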
(c) There are almost 10M rows in inars_rectificaciones, each at least 34
bytes wide (d). Accounting for overhead and columns not used by your
statement, let's guess 80 bytes per tuple (might be more).
This gives at least 800MB for the whole relation. Assuming that your
physical memory is much less than this and the index
inars_rectificaciones_inx doesn't retrieve tuples in physical order,
you need one random page read per tuple. So your question boils down
to how long it takes to do 10M random reads on your hardware.
I have to guess again: 4ms seek, 4ms latency. Ignoring CPU and
transfer times we get 8ms per page access or 80000 seconds (more than
22 hours) for the query on average PC hardware. This could be much
more, if your disk is fragmented or slow or both ...
Next time you run that query SET enable_seqscan = ON; this should cut
down execution time to way below one hour. YMMV.
Servus
Manfred