Re: Regarding performance regression on specific query - Mailing list pgsql-hackers

From Jung, Jinho
Subject Re: Regarding performance regression on specific query
Date
Msg-id DM5PR07MB3927989EB4FBF074E236B47EEED60@DM5PR07MB3927.namprd07.prod.outlook.com
In response to Re: Regarding performance regression on specific query  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers


Thanks for the comment. 


We also have several performance regression cases that we found in the TPC-C benchmark. Since those queries were not executed on empty relations, they should be more interesting.


We will report them to the pgsql-performance mailing list next time.


Jinho Jung


From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Saturday, November 24, 2018 3:32:41 PM
To: Amit Langote
Cc: Jung, Jinho; pgsql-hackers@postgresql.org
Subject: Re: Regarding performance regression on specific query
 
Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:
> On 2018/11/20 2:49, Jung, Jinho wrote:
>>  [ assorted queries ]

> I noticed that these two are fixed by running ANALYZE in the database in
> which these queries are run.

That didn't help much for me.  What did help was increasing
join_collapse_limit and from_collapse_limit to not limit the
join search space --- on queries with as many input relations
as these, you're really at the mercy of whether the given query
structure represents a good join order if you don't.
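
For example, one way to lift that limit for a session is to raise both GUCs above the number of input relations before re-running the query under EXPLAIN; 32 below is an arbitrary value chosen only to exceed the relation counts in these queries:

    -- Raise both collapse limits so the planner searches the full join
    -- order space instead of following the query's syntactic structure.
    -- (The default for both is 8; 32 is an arbitrary, sufficiently large value.)
    SET join_collapse_limit = 32;
    SET from_collapse_limit = 32;

    -- Then re-run one of the reported queries under EXPLAIN (ANALYZE)
    -- and compare the chosen join order against 9.4 and v10.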

In general I can't get even a little bit excited about the quality of the
plans selected for these examples, as they all involve made-up restriction
and join clauses that the planner isn't going to have the slightest clue
about.  The observations boil down to "9.4 made one set of arbitrary plan
choices, while v10 made a different set of arbitrary plan choices, and on
these particular examples 9.4 got lucky and 10 didn't".

Possibly also worth noting is that running these in an empty database
is in itself kind of a worst case, because many of the tables are empty
to start with (or the restriction/join clauses pass no rows), and so
the fastest runtime tends to go to plans of the form "nestloop with
empty relation on the outside and all the expensive stuff on the
inside".  (Observe all the "(never executed)" notations in the EXPLAIN
output.)  This kind of plan wins only when the outer rel is actually
empty, otherwise it can easily lose big, and therefore PG's planner is
intentionally designed to discount the case entirely.  We never believe
that a relation is empty, unless we can mathematically prove that, and
our cost estimates are never made with an eye to exploiting such cases.
This contributes a lot to the random-chance nature of which plan is
actually fastest; the planner isn't expecting "(never executed)" to
happen and doesn't prefer plans that will win if it does happen.
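
As a minimal illustration (with made-up tables, not the ones from the report): if the outer side of a nested loop happens to produce zero rows at run time, the inner side is never run and EXPLAIN ANALYZE marks it "(never executed)"; whether the planner actually picks that plan shape depends entirely on its row estimates:

    -- Hypothetical tables for illustration only.
    CREATE TABLE empty_outer (id int);          -- left empty on purpose
    CREATE TABLE big_inner AS
        SELECT g AS id, md5(g::text) AS payload
        FROM generate_series(1, 100000) AS g;

    -- If the plan ends up as a nested loop with empty_outer on the outside,
    -- the scan of big_inner is reported as "(never executed)" because the
    -- outer side yields no rows at execution time.
    EXPLAIN ANALYZE
    SELECT *
    FROM empty_outer e
    JOIN big_inner b ON b.id = e.id;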

                        regards, tom lane
