Re: PostgreSQL Query Speed Issues - Mailing list pgsql-novice
From: Kevin Grittner
Subject: Re: PostgreSQL Query Speed Issues
Msg-id: 1361987066.70054.YahooMailNeo@web162905.mail.bf1.yahoo.com
In response to: Re: PostgreSQL Query Speed Issues (Joseph Pravato <joseph.pravato@nomagic.com>)
Responses:
  Re: PostgreSQL Query Speed Issues
  Re: PostgreSQL Query Speed Issues
List: pgsql-novice
Joseph Pravato <joseph.pravato@nomagic.com> wrote:

> This query that we have been talking about is just a sample that
> we used to get a start on performance improvements. The original
> performance related issue we had was with a large view that we
> use for our customer & sales information that accesses 3
> additional views and joins a total of 23 tables.

Ah, well that gets us into a whole new ballgame. With that many
tables, planning time can become an issue in itself, so there are
various things that PostgreSQL does to try to keep planning time
from ballooning to a point where it takes longer than the time
saved by plan improvement. The first thing I would try is boosting
these settings to more than the number of table references:

from_collapse_limit
join_collapse_limit
geqo_threshold

Try an EXPLAIN ANALYZE and see whether the "actual time" at the
top-level node of the plan looks good, and see how it compares to
"Total runtime" at the bottom. The difference is primarily planning
time, so you can see how good a plan you got versus how expensive
it was to find that plan. If you see that there are good plans, but
it is too expensive to find them, you might want to let the
"genetic query optimizer" have a shot at planning by adjusting the
above values. The concept of this alternative planner is that it
tries to get a plan which is "good enough" with bounded planning
time. See the docs for details.

> Before the suggestions you gave it returned in 7 - 10 minutes and
> now returns in less than 10 seconds. However, we have a copy of
> our data on another database that runs in less than 0.5 seconds.
> We think based on the previous messages in this thread that it is
> still choosing a sub-optimal query plan for the views.
>
> This is the explain analyze for our customer data view.
> http://pastebin.com/kSfb2dqy
> Here is the output for the queries you provided.
> http://pastebin.com/Yp80HCpe

The plan seems to be reluctant to use index scans, which might be
related to the ratio between these values:

random_page_cost | 1
seq_page_cost    | 0.1

Do you get a better plan if these are equal? If random is only
twice the sequential cost?

The other possible issue is that, depending on how the views are
used, they can create an optimization barrier. In general, the
planner will see more options if the views are joined than if they
are used in subqueries or CTEs.

You might also want to try pasting your plan into:

http://explain.depesz.com/

This formats the plan and highlights portions you might want to pay
special attention to.
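To make the first suggestion concrete, a session-level sketch for a
query touching 23 tables (the values are illustrative, not tuned
recommendations):

-- Raise the collapse limits above the number of table references
-- so the planner will consider reordering the whole join tree.
SET from_collapse_limit = 24;
SET join_collapse_limit = 24;
-- Above the table count, the exhaustive planner stays in charge;
-- setting geqo_threshold below the count instead hands planning to
-- the genetic query optimizer for bounded planning time.
SET geqo_threshold = 24;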
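The timing comparison would look something like this, against a
stand-in name (customer_sales_view is hypothetical):

EXPLAIN ANALYZE SELECT * FROM customer_sales_view;
-- Compare the "actual time" range on the top-level plan node with
-- the "Total runtime" line at the bottom of the output; the
-- difference is primarily planning time.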
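For the cost settings, the two experiments I have in mind can be
run per-session, so nothing is changed permanently:

-- Experiment 1: random and sequential page costs equal.
SET seq_page_cost = 1;
SET random_page_cost = 1;

-- Experiment 2: random access only twice as expensive.
SET seq_page_cost = 1;
SET random_page_cost = 2;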
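And to illustrate the optimization barrier (the view and column
names here are made up): a CTE is currently planned separately from
the outer query, so a filter outside it is not pushed down into it,
while a plain view reference can be flattened into the outer query.

-- The view reference can be flattened; the planner may use an
-- index on "region" inside the view's defining query.
SELECT * FROM customer_view WHERE region = 'EU';

-- The CTE acts as a fence: customer_view is read in full and the
-- filter is applied to the materialized result.
WITH c AS (SELECT * FROM customer_view)
SELECT * FROM c WHERE region = 'EU';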
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company