"Fernando Papa" <fpapa@claxson.com> writes:
> ->  Nested Loop  (cost=0.00..1828.46 rows=1 width=367) (actual time=7525.51..436843.27 rows=40 loops=1)
>       Join Filter: (("inner".id_contenido = "outer".id_contenido) AND ("inner".id_instalacion = "outer".id_instalacion))
>       ->  Index Scan using jue_conf_pk on juegos_config  (cost=0.00..12.19 rows=40 width=332) (actual time=0.38..6.63 rows=40 loops=1)
>       ->  Index Scan using idx_generar_vainilla_ci on cont_publicacion  (cost=0.00..45.39 rows=1 width=35) (actual time=48.81..10917.53 rows=97 loops=40)
>             Index Cond: (upper((generar_vainilla)::text) = 'S'::text)
>             Filter: (subplan)
>             SubPlan
>               ->  Aggregate  (cost=15.85..15.85 rows=1 width=8) (actual time=24.30..24.30 rows=0 loops=17880)

As best I can tell, the problem here is coming from a drastic
underestimate of the number of rows selected by
"upper(generar_vainilla) = 'S'". Evidently there are about 450 such
rows (since in 40 repetitions of the inner index scan, the aggregate
subplan gets evaluated 17880 times), but the planner seems to think
there will be only about two such rows. Had it made a more correct
estimate, it would never have picked a plan that required multiple
repetitions of the indexscan.
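
For what it's worth, you can confirm the real selectivity directly
(table and column names taken from the plan above); something like

    SELECT count(*) FROM cont_publicacion
    WHERE upper(generar_vainilla) = 'S';

should come back with roughly 450, versus the couple of rows the
planner is assuming.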
One thing I'm wondering is if you've VACUUM ANALYZEd cont_publicacion
lately --- the cost estimate seems on the small side, and I'm wondering
if the planner thinks the table is much smaller than it really is. But
assuming you didn't make that mistake, the only solution I can see is to
not use a functional index. The planner is not good about making row
count estimates for functional indexes. You could replace the index on
upper(generar_vainilla) with a plain index on generar_vainilla, and
change the query condition from "upper(generar_vainilla) = 'S'" to
"generar_vainilla IN ('S', 's')". I think the planner would have a lot
better chance at understanding the statistics that way.
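
In rough SQL terms, the changes would look something like this (the
new index name is only illustrative):

    VACUUM ANALYZE cont_publicacion;

    DROP INDEX idx_generar_vainilla_ci;
    CREATE INDEX idx_generar_vainilla ON cont_publicacion (generar_vainilla);

    -- and in the query, change
    --    WHERE upper(generar_vainilla) = 'S'
    -- to
    --    WHERE generar_vainilla IN ('S', 's')

With a plain column index and an ordinary IN condition, ANALYZE's
statistics on generar_vainilla apply directly, so the row count
estimate should come out much closer to reality.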
regards, tom lane