Alex <cdalxndr@yahoo.com> writes:
> select
> ...
> from
> product p
> left outer join product_property pp on p.id=pp.product_id
> left outer join product_property_value ppv on pp.value_id=ppv.id
> left outer join description_element de on pp.description_source=de.id
> left outer join description_value dev on de.description_value_id=dev.id
> left outer join description_element dep on de.parent=dep.id
> left outer join description_value depv on dep.description_value_id=depv.id
> left outer join shop_category sc_dup on p.shop_category=sc_dup.id --duplicate join
> left outer join shop_category sc on p.shop_category=sc.id
> left outer join category c on sc.category=c.id
> where
> c.id=8511
I suspect the real issue here has little to do with the "duplicate" nature
of the extra join, and a lot to do with the fact that your query exceeds
join_collapse_limit. That causes the planner to break it up in a
semi-arbitrary way to limit planning time. Adding one more join moves
where the break happens, and evidently is preventing the planner from
discovering a good join order.
You want join_collapse_limit to be at least 10 to handle this query well,
since 10 tables are mentioned. The default value, 8, is kinda tuned for
older and slower machines than most of us use today. I wouldn't make it
enormous, because it's what prevents exponential planning-time growth
when many tables are involved, but you could likely get away with 15 or
so. It's wise to keep from_collapse_limit in sync with it too.
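As a quick check (a sketch; the value 15 is just the suggestion above, adjust to taste), you can raise both limits for the current session only and re-plan the query:

```sql
-- Session-local settings; they do not persist past disconnect.
SET join_collapse_limit = 15;
SET from_collapse_limit = 15;

-- Re-run the problem query under EXPLAIN to confirm the planner
-- now considers all ten tables together and picks a better join order.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...
FROM product p
LEFT JOIN product_property pp ON p.id = pp.product_id
-- ... remaining joins as in the original query ...
WHERE c.id = 8511;
```

If that helps, the setting can be made permanent with `ALTER SYSTEM SET join_collapse_limit = 15;` followed by `SELECT pg_reload_conf();`, or per-database via `ALTER DATABASE ... SET ...`.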
regards, tom lane