Thread: Unable to use index?
Hi folks! A query I am running does not seem to use indexes that are available (running version 7.4.2). I have the following table:

=> \d replicated
                                     Table "public.replicated"
     Column      |           Type           |                      Modifiers
-----------------+--------------------------+-----------------------------------------------------
 rep_id          | bigint                   | not null default nextval('replicated_id_seq'::text)
 rep_component   | character varying(100)   |
 rep_key1        | integer                  |
 rep_key2        | bigint                   |
 rep_key3        | smallint                 |
 rep_replicated  | timestamp with time zone |
 rep_remotekey1  | integer                  |
 rep_remotekey2  | bigint                   |
 rep_remotekey3  | smallint                 |
 rep_key2b       | bigint                   |
 rep_remotekey2b | bigint                   |
 rep_key4        | text                     |
Indexes:
    "replicated_pkey" primary key, btree (rep_id)
    "replicate_key1_idx" btree (rep_key1, rep_key2, rep_key3)
    "replicated_item2_idx" btree (rep_component, rep_key2, rep_key3)
    "replicated_item_idx" btree (rep_component, rep_key1, rep_key2, rep_key3)
    "replicated_key2_idx" btree (rep_key2, rep_key3)
    "replicated_key4_idx" btree (rep_key4)

=> analyze verbose replicated;
INFO:  analyzing "public.replicated"
INFO:  "replicated": 362140 pages, 30000 rows sampled, 45953418 estimated total rows
ANALYZE

The following does not use an index, even though two are available for the specific selection on rep_component.

=> explain analyze select * from replicated where rep_component = 'ps_probe' limit 1;
                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.23 rows=1 width=101) (actual time=34401.857..34401.859 rows=1 loops=1)
   ->  Seq Scan on replicated  (cost=0.00..936557.70 rows=4114363 width=101) (actual time=34401.849..34401.849 rows=1 loops=1)
         Filter: ((rep_component)::text = 'ps_probe'::text)
 Total runtime: 34401.925 ms
(4 rows)

Yet, if I do the following, an index will be used, and it runs much faster (even when I swapped the order of execution).
=> explain analyze select * from replicated where rep_component = 'ps_probe' order by rep_component limit 1;
                                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..1.66 rows=1 width=101) (actual time=51.163..51.165 rows=1 loops=1)
   ->  Index Scan using replicated_item2_idx on replicated  (cost=0.00..6838123.76 rows=4114363 width=101) (actual time=51.157..51.157 rows=1 loops=1)
         Index Cond: ((rep_component)::text = 'ps_probe'::text)
 Total runtime: 51.265 ms
(4 rows)

Any reason why the index is not chosen? Maybe I need to up the number of rows sampled for statistics?

Regards!
Ed
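[For reference, the knob the question is asking about is the per-column statistics target. A minimal sketch of raising it on 7.4 and re-gathering statistics; the value 1000 is only an example, not a recommendation, and this needs a live database to run:]

```sql
-- Raise the number of sampled values kept for the filtered column,
-- then re-analyze so the new target takes effect.
ALTER TABLE replicated ALTER COLUMN rep_component SET STATISTICS 1000;
ANALYZE replicated;

-- The default for all columns can instead be changed via
-- default_statistics_target in postgresql.conf.
```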
On Thu, 29 Apr 2004 09:48:10 -0400 (EDT), Edmund Dengler <edmundd@eSentire.com> wrote:
>=> explain analyze select * from replicated where rep_component = 'ps_probe' limit 1;
>-------------------------------------------------------------------------------------------------------------------------------
> Limit  (cost=0.00..0.23 rows=1 width=101) (actual time=34401.857..34401.859 rows=1 loops=1)
>   ->  Seq Scan on replicated  (cost=0.00..936557.70 rows=4114363 width=101) (actual time=34401.849..34401.849 rows=1 loops=1)
                                      ^^^^
>         Filter: ((rep_component)::text = 'ps_probe'::text)

The planner thinks that the seq scan has a startup cost of 0.00, i.e. that it can return the first tuple immediately, which is obviously not true in the presence of a filter condition. Unfortunately there's no easy way to fix this, because the statistics information does not have information about the physical position of tuples with certain values.

>=> explain analyze select * from replicated where rep_component = 'ps_probe' order by rep_component limit 1;

This is a good workaround. It makes the plan for a seq scan look like

| Limit  (cost=2345679.00..2345679.20 rows=1 width=101)
|   ->  Sort  (cost=2345678.90..2500000.00 rows=4114363 width=101)
|         ->  Seq Scan on replicated  (cost=0.00..936557.70 rows=4114363 width=101)
|               Filter: ((rep_component)::text = 'ps_probe'::text)

which is a loser against the index scan:

> Limit  (cost=0.00..1.66 rows=1 width=101) (actual time=51.163..51.165 rows=1 loops=1)

>Maybe I need to up the number of rows sampled for statistics?

Won't help, IMHO.

Servus
 Manfred
Manfred Koizar <mkoi-pg@aon.at> writes:
> The planner thinks that the seq scan has a startup cost of 0.00, i.e.
> that it can return the first tuple immediately, which is obviously not
> true in the presence of a filter condition.

Not really --- the startup cost is really defined as "cost expended before we can start scanning for results". The estimated cost to select N tuples is actually "startup_cost + N*(total_cost-startup_cost)/M", where M is the estimated total rows returned. This is why the LIMIT shows a nonzero estimate for the cost to fetch 1 row.

> Unfortunately there's no
> easy way to fix this, because the statistics information does not have
> information about the physical position of tuples with certain values.

Yeah, I think the real problem is that the desired rows are not uniformly distributed, and in fact there are none near the start of the table. We do not keep stats detailed enough to let the planner discover this, so it has to estimate on the assumption of uniform distribution. On that assumption, it looks like a seqscan will hit a suitable tuple quickly enough to be faster than using the index.

			regards, tom lane
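[Plugging the numbers from the seq scan plan upthread into that formula reproduces EXPLAIN's estimate for fetching one row; a worked check, runnable in any psql session:]

```sql
-- startup_cost + N * (total_cost - startup_cost) / M,
-- with the seq scan's numbers: startup 0.00, total 936557.70,
-- N = 1 (the LIMIT), M = 4114363 estimated matching rows.
SELECT 0.00 + 1 * (936557.70 - 0.00) / 4114363 AS cost_for_one_row;
-- approximately 0.2276, i.e. the "cost=0.00..0.23" shown on the Limit node
```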
Hmm, interesting, as I have that table clustered starting with rep_component, so 'ps_probe' will definitely appear later in a sequential scan. So why does the <order by> force the use of the index?

Regards!
Ed

On Thu, 29 Apr 2004, Tom Lane wrote:

> Manfred Koizar <mkoi-pg@aon.at> writes:
> > The planner thinks that the seq scan has a startup cost of 0.00, i.e.
> > that it can return the first tuple immediately, which is obviously not
> > true in the presence of a filter condition.
>
> Not really --- the startup cost is really defined as "cost expended
> before we can start scanning for results". The estimated cost to select
> N tuples is actually "startup_cost + N*(total_cost-startup_cost)/M",
> where M is the estimated total rows returned. This is why the LIMIT
> shows a nonzero estimate for the cost to fetch 1 row.
>
> > Unfortunately there's no
> > easy way to fix this, because the statistics information does not have
> > information about the physical position of tuples with certain values.
>
> Yeah, I think the real problem is that the desired rows are not
> uniformly distributed, and in fact there are none near the start of the
> table. We do not keep stats detailed enough to let the planner discover
> this, so it has to estimate on the assumption of uniform distribution.
> On that assumption, it looks like a seqscan will hit a suitable tuple
> quickly enough to be faster than using the index.
>
> 			regards, tom lane
Edmund Dengler <edmundd@eSentire.com> writes:
> Hmm, interesting as I have that table clustered starting with the
> rep_component, so 'ps_probe' will definitely appear later in a sequential
> scan. So why does the <order by> force the use of the index?

It does not "force" anything, it simply alters the cost estimates. The seqscan-based plan requires an extra sort step to meet the ORDER BY, while the indexscan plan does not. In this particular scenario the indexscan plan is estimated to beat seqscan+sort, but in other cases the opposite decision might be made.

			regards, tom lane
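[One way to see that this is a costing decision rather than a hard rule is to compare the planner's estimates for the two plans directly. A diagnostic sketch; enable_seqscan should only be toggled in a session for testing, never left off in production:]

```sql
-- Discourage the seq scan for this session and look at the
-- estimated cost of the plan the planner is then pushed toward.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM replicated WHERE rep_component = 'ps_probe' LIMIT 1;

-- Restore the default before doing anything else.
SET enable_seqscan = on;
```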
Tom Lane <tgl@sss.pgh.pa.us> writes:
> > Unfortunately there's no easy way to fix this, because the statistics
> > information does not have information about the physical position of
> > tuples with certain values.
>
> Yeah, I think the real problem is that the desired rows are not
> uniformly distributed, and in fact there are none near the start of the
> table. We do not keep stats detailed enough to let the planner discover
> this, so it has to estimate on the assumption of uniform distribution.
> On that assumption, it looks like a seqscan will hit a suitable tuple
> quickly enough to be faster than using the index.

It seems like this is another scenario where it would be helpful to have the optimizer keep track of not just the average expected cost but also the worst-case cost, since the index scan in this case might have a higher expected cost but a lower worst-case cost than the sequential scan.

For some applications the best bet may in fact be to go with the plan expected to be fastest. But for others it would be more important to go with the plan that is least likely to perform badly, even if it means paying a performance penalty to avoid the risk.

--
greg