Marco Di Cesare <Marco.DiCesare@pointclickcare.com> writes:
> We are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins the Postgres query planner can't possibly use an exhaustive search so it drops into a heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available.
It's difficult to make any detailed comments when you've shown us only an allegedly-bad query plan, and not either the query itself or the table definitions.
However, it appears to me that the query plan is aggregating over a rather large number of join rows, and there are very few constraints that would allow eliminating rows. So I'm not at all sure there is a significantly better plan available. Are you claiming this query was instantaneous on SQL Server?
The only thing that jumps out at me as possibly improvable is that with a further increase in work_mem, you could probably get it to change the last aggregation step from Sort+GroupAggregate into HashAggregate, which'd likely run faster ... assuming you can spare some more memory.
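As a rough sketch (the setting value is just a guess; you'd have to experiment, and the actual query is whatever your BI tool emits), you can test this for a single session without touching the server config:

    -- try a larger per-operation memory budget for this session only
    SET work_mem = '256MB';

    -- re-check the plan to see whether the final step becomes HashAggregate
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ... ;   -- the BI-generated query

    RESET work_mem;

If the plan does flip to HashAggregate and the runtime improves, you can then decide whether to raise work_mem more permanently, keeping in mind it applies per sort/hash node, not per query.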
There's also the option of tuning the query itself (if it's not entirely controlled by your BI tool). You can often speed up a query by joining against filtered sub-selects rather than joining the entire tables and filtering afterwards.
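For instance (purely a sketch with made-up table and column names, and note the planner will often push such filters down on its own), instead of

    SELECT ...
    FROM facts f
    JOIN dim_date d ON d.date_id = f.date_id
    WHERE d.year = 2014;

you'd write

    SELECT ...
    FROM facts f
    JOIN (SELECT date_id FROM dim_date WHERE year = 2014) d
      ON d.date_id = f.date_id;

so the join only ever sees the rows you actually care about. Whether this helps depends on how much the filter reduces the row count; compare EXPLAIN ANALYZE output for both forms before committing to the rewrite.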