Re: Slow queries on big table - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: Slow queries on big table
Date:
Msg-id: 1736.1179518362@sss.pgh.pa.us
In response to: Slow queries on big table ("Tyrrill, Ed" <tyrrill_ed@emc.com>)
List: pgsql-performance
"Tyrrill, Ed" <tyrrill_ed@emc.com> writes:
>  Index Scan using backup_location_pkey on backup_location
> (cost=0.00..1475268.53 rows=412394 width=8) (actual
> time=3318.057..1196723.915 rows=2752 loops=1)
>    Index Cond: (backup_id = 1070)
>  Total runtime: 1196725.617 ms

If we take that at face value it says the indexscan is requiring 434
msec per actual row fetched (1196724 ms / 2752 rows).  Which is just
not very credible; the worst case should be about 1 disk seek per row
fetched.  So there's something going on that doesn't meet the eye.

What I'm wondering about is whether the table is heavily updated and
seldom vacuumed, leading to lots and lots of dead tuples being fetched
and then rejected (hence they'd not show in the actual-rows count).
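
(One way to test that theory, not spelled out above: on reasonably
recent PostgreSQL versions the statistics views carry dead-tuple
estimates, so a sketch like the following, using the backup_location
table from the quoted plan, shows whether bloat is plausible.)

    -- Live vs. dead tuple estimates kept by the statistics collector
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'backup_location';

    -- If n_dead_tup dwarfs the live count, vacuuming should help:
    VACUUM ANALYZE backup_location;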

The other thing that seems pretty odd is that it's not using a bitmap
scan --- for such a large estimated rowcount I'd have expected a bitmap
scan not a plain indexscan.  What do you get from EXPLAIN ANALYZE if
you force a bitmap scan?  (Set enable_indexscan off, and enable_seqscan
too if you have to.)
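
(The exact query isn't quoted above, but assuming a simple lookup on
backup_id, the forced-plan experiment would look roughly like this:)

    SET enable_indexscan = off;
    SET enable_seqscan = off;   -- only if the planner picks a seqscan instead

    EXPLAIN ANALYZE
    SELECT * FROM backup_location WHERE backup_id = 1070;

    -- restore the planner settings afterward
    RESET enable_indexscan;
    RESET enable_seqscan;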

            regards, tom lane
