To do some complex calculations I have joined several views. Each view may itself join quite a few tables.
The user can filter the results through several multi-select input fields, and these selections end up in the query as where a.id in (:listOfIds).
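Schematically the query looks something like this (the view and column names here are placeholders for illustration only, not the real ones):

select a.id, c.metric_1, c.metric_2
from base_view a                   -- placeholder for the main view
join calc_view c on c.id = a.id    -- placeholder for the view doing the calcs
where a.id in (:listOfIds);        -- list built from the multi-select inputs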
This works fine as long as the user does not filter the results: the calcs for every row in the entire data set complete in 1-2 seconds. Certain combinations of filters, however, make the query take up to 4 minutes and freeze the system until it completes. Queries that use the same filters but omit the calcs entirely run in a reasonable amount of time.
I have considered the following ways to make this faster.
1. Increase geqo_threshold, from_collapse_limit, and join_collapse_limit.
While this does improve performance on some of the more complex
queries, others generally suffer.
2. Filter the results first and then join in the complex calcs (roughly as sketched below).
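To make option 2 concrete, this is roughly what I have in mind; again the view and column names are placeholders, not the real ones:

with filtered as (
    select a.*
    from base_view a              -- placeholder for the main view
    where a.id in (:listOfIds)    -- list built from the multi-select inputs
)
select f.id, c.metric_1, c.metric_2
from filtered f
join calc_view c on c.id = f.id;  -- placeholder for the expensive calc view

As I understand it, the WITH clause acts as an optimization fence in my Postgres version, so the filter would be applied before the calc view is joined.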
The database is small: about 1 GB on disk, and the vast majority of that is taken up by bytea documents that are never accessed. From what I can tell, all of the data fits in shared buffers.
Any advice would be greatly appreciated.
Here are the settings I have changed in postgresql.conf
statement_timeout = 600000 # in milliseconds, 0 is disabled
geqo_effort = 10 # range 1-10
default_statistics_target = 10000
geqo_threshold = 13
from_collapse_limit = 9
join_collapse_limit = 9 # 1 disables collapsing of explicit JOIN clauses
work_mem = 48MB # pgtune wizard 2011-12-12
maintenance_work_mem = 480MB # pgtune wizard 2011-12-12
shared_buffers = 1920MB # pgtune wizard 2011-12-12
effective_cache_size = 5632MB # pgtune wizard 2011-12-12
seq_page_cost = 0.005 # measured on an arbitrary scale
random_page_cost = 0.005 # same scale as above