Thread: Re: [GENERAL] Performance of full outer join in 8.3
I wrote:
> Christian Schröder <cs@deriva.de> writes:
>> This is the query:
>> select isin from ts_frontend.attachment_isins full OUTER JOIN
>> ts_frontend.rec_isins using (attachment,isin) WHERE attachment=2698120
>> GROUP BY isin limit 1000;

> Hmm. It seems 8.3 is failing to push the attachment=2698120 condition
> down to the input relations. Not sure why. All that code got massively
> rewritten in 8.3, but I thought it still understood about pushing
> equalities through a full join ...

On further review, this did work in 8.3 when released. I think it got
broken here:
http://archives.postgresql.org/pgsql-committers/2008-06/msg00336.php
because that change is preventing the "mergedvar = constant" clause from
being seen as an equivalence, when it should be seen as one. Need to
think about a tighter fix for the bug report that prompted that change.

        regards, tom lane
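The behaviour under discussion can be sketched on a pair of hypothetical
tables; the schema below is illustrative only and does not reproduce the
original poster's ts_frontend schema. When the equivalence deduction
works, the planner can apply the merged-column condition to each input of
the full join, letting both sides use their indexes.

    -- Hypothetical minimal schema, purely for illustration.
    CREATE TABLE attachment_isins (attachment integer, isin text);
    CREATE TABLE rec_isins        (attachment integer, isin text);
    CREATE INDEX attachment_isins_attachment_idx ON attachment_isins (attachment);
    CREATE INDEX rec_isins_attachment_idx        ON rec_isins (attachment);

    -- When "attachment = 2698120" is recognized as an equivalence on the
    -- merged USING column, the planner can push it down to each input
    -- relation instead of scanning both tables in full.
    EXPLAIN
    SELECT isin
    FROM attachment_isins
    FULL OUTER JOIN rec_isins USING (attachment, isin)
    WHERE attachment = 2698120
    GROUP BY isin
    LIMIT 1000;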
I wrote:
> On further review, this did work in 8.3 when released. I think it got
> broken here:
> http://archives.postgresql.org/pgsql-committers/2008-06/msg00336.php
> because that change is preventing the "mergedvar = constant" clause from
> being seen as an equivalence, when it should be seen as one. Need to
> think about a tighter fix for the bug report that prompted that change.

The original bug report involved create_or_index_quals() pulling out an
index condition from an OR clause that appeared above an outer join that
could null the relation it wanted to indexscan. (In practice this only
arises if at least one arm of the OR has an IS NULL clause for the target
relation --- if all arms have ordinary strict index clauses then we'd have
determined during reduce_outer_joins that the outer join could be
simplified to a plain join.) I tried to fix this by altering the meaning
of the outerjoin_delayed flag slightly, but what Christian's complaint
shows is that that was a bad idea because it breaks valid equivalence
deductions.

Using outerjoin_delayed in create_or_index_quals() was always pretty much
of a crude hack anyway --- there are other cases in which it prevents us
from extracting index conditions that *would* be legitimate. In
particular, there's no reason why we should not extract an index condition
for the outer relation of the same outer join.

So I'm thinking the right thing to do is to eliminate outerjoin_delayed
from RestrictInfo in favor of storing a bitmapset that shows exactly which
relations referenced by the clause are nullable by outer joins that are
below the clause. Then create_or_index_quals() could ignore an OR, or
not, depending on whether the target relation is nullable below the OR
clause. This might permit finer-grain analysis in the other places that
currently depend on outerjoin_delayed too, though for the moment I'll just
make them check for empty-or-nonempty-set.

outerjoin_delayed should revert to its longstanding meaning within
distribute_qual_to_rels, but right at the moment there seems no
application for preserving it beyond that point. (On the other hand,
eliminating it from RestrictInfo isn't going to save any space because of
alignment considerations, so maybe we should keep it there in case we need
it in future.)

The main objection I can see to this is the expansion of RestrictInfo, but
it's a pretty large struct already and one more pointer isn't going to
make much difference.

Comments?

        regards, tom lane
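The problem case described above (an OR clause sitting above an outer join
that can null the relation to be indexscanned, with an IS NULL arm) looks
roughly like the following sketch. The tables and columns are hypothetical
and serve only to show the shape of the original bug report.

    -- Hypothetical tables, purely to show the shape of the problem case.
    CREATE TABLE a (id int PRIMARY KEY, payload text);
    CREATE TABLE b (a_id int, x int);
    CREATE INDEX b_x_idx ON b (x);

    -- The OR sits above a left join that can null "b", and one arm is an
    -- IS NULL test on the nullable side, so reduce_outer_joins cannot turn
    -- the outer join into a plain join.  Using a condition extracted from
    -- this OR as an index condition on the scan of "b" would be unsafe:
    -- the null-extended rows produced by the join satisfy the OR through
    -- the IS NULL arm without corresponding to any actual row of "b".
    SELECT *
    FROM a
    LEFT JOIN b ON a.id = b.a_id
    WHERE b.x = 1 OR b.x IS NULL;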
On Wed, 2009-04-15 at 12:34 -0400, Tom Lane wrote:
> On further review, this did work in 8.3 when released. I think it got
> broken here:
>
> http://archives.postgresql.org/pgsql-committers/2008-06/msg00336.php
>
> because that change is preventing the "mergedvar = constant" clause from
> being seen as an equivalence, when it should be seen as one. Need to
> think about a tighter fix for the bug report that prompted that change.

I've always been scared to ask this question, in case the answer is No,
but: Do we have a set of regression tests for the optimizer anywhere?

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
Simon Riggs <simon@2ndQuadrant.com> writes:
> I've always been scared to ask this question, in case the answer is No,
> but: Do we have a set of regression tests for the optimizer anywhere?

Nothing beyond what is in the standard tests. While that's okay at
catching wrong answers --- and we have memorialized a number of such
issues in the tests --- the framework is not good for catching things
that run slower than they ought.

        regards, tom lane
On Wed, Apr 15, 2009 at 6:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
>> I've always been scared to ask this question, in case the answer is No,
>> but: Do we have a set of regression tests for the optimizer anywhere?
>
> Nothing beyond what is in the standard tests. While that's okay at
> catching wrong answers --- and we have memorialized a number of such
> issues in the tests --- the framework is not good for catching things
> that run slower than they ought.

We could add some regression tests that create a sample data set, ANALYZE
it, and then EXPLAIN various things. The results should be deterministic,
but creating a reasonably comprehensive set of tests might be a fair
amount of work, and would likely add significantly to the runtime of the
tests. Maybe it would need to be a separate suite just for optimizer
testing.

...Robert
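A test of the kind Robert describes might look something like the sketch
below; the data set, index, and query are hypothetical, and the plan
printed by EXPLAIN would then be compared against an expected-output file.

    -- Hypothetical sketch of an optimizer regression test: build a small,
    -- deterministic data set, ANALYZE it, then EXPLAIN a query whose plan
    -- shape we want to lock down.
    CREATE TEMP TABLE opt_test AS
        SELECT g AS id, g % 100 AS grp
        FROM generate_series(1, 10000) g;
    CREATE INDEX opt_test_grp_idx ON opt_test (grp);
    ANALYZE opt_test;

    EXPLAIN SELECT * FROM opt_test WHERE grp = 42;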
Robert Haas <robertmhaas@gmail.com> writes:
> We could add some regression tests that create a sample data set,
> ANALYZE it, and then EXPLAIN various things. The results should be
> deterministic,

Sorry, you're wrong.

The output of EXPLAIN is nowhere near stable enough to use within the
current exact-match regression test framework. I'm not sure it would be
stable even if we suppressed the rowcount and cost figures. Those figures
vary across platforms (because of alignment effects and probably other
things) and are also sensitive to the timing of autovacuums. It is known
that a nontrivial fraction of the existing regression test cases do suffer
from uninteresting plan changes across platforms or as a result of various
phase-of-the-moon effects; that's why we keep having to add "ORDER BY"
clauses now and then.

The other problem with any large set of such tests is that any time you
intentionally change the optimizer, a great deal of careful analysis would
be needed to determine if the resulting EXPLAIN changes were good, bad, or
indifferent; not to mention whether the change *should* have changed some
plans that did not change.

There might be net value in maintaining such a test suite, but it would be
a lot of work with no certain benefit, and I don't see anyone stepping up
to do it.

        regards, tom lane
On Wed, Apr 15, 2009 at 7:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> We could add some regression tests that create a sample data set,
>> ANALYZE it, and then EXPLAIN various things. The results should be
>> deterministic,
>
> Sorry, you're wrong.
>
> The output of EXPLAIN is nowhere near stable enough to use within the
> current exact-match regression test framework. I'm not sure it would
> be stable even if we suppressed the rowcount and cost figures. Those
> figures vary across platforms (because of alignment effects and probably
> other things) and are also sensitive to the timing of autovacuums. It
> is known that a nontrivial fraction of the existing regression test
> cases do suffer from uninteresting plan changes across platforms or
> as a result of various phase-of-the-moon effects; that's why we keep
> having to add "ORDER BY" clauses now and then.

Interesting. I suppose you could insulate yourself from this somewhat by
populating pg_statistic with a particular set of values rather than
relying on ANALYZE to gather them, but this would have the substantial
downside of being way more work to maintain, especially if anyone ever
changed pg_statistic.

On a more practical level, I do think we need to give real consideration
to some kind of options syntax for EXPLAIN, maybe something as simple as:

EXPLAIN (option_name, ...) query

Or maybe:

EXPLAIN (option_name = value, ...) query

It may or may not be the case that generating a useful regression test
suite for the planner is too much work for anyone to bother, but they
certainly won't if the tools aren't available. It seems we get at least
one request a month for some kind of explain-output option: suppress row
counts, suppress costs, gather I/O statistics, show outputs, show # of
batches for a hash join, and on and on and on. I think we should
implement a very basic version that maybe does nothing more than let you
optionally suppress some of the existing output, but which provides an
extensible syntax for others to build on. Would you support such a
change?

> The other problem with any large set of such tests is that any time you
> intentionally change the optimizer, a great deal of careful analysis
> would be needed to determine if the resulting EXPLAIN changes were good,
> bad, or indifferent; not to mention whether the change *should* have
> changed some plans that did not change.

Arguably it would be a good thing to examine planner changes with this
level of scrutiny, but I agree that the prospect is pretty intimidating.

> There might be net value in maintaining such a test suite, but it would
> be a lot of work with no certain benefit, and I don't see anyone
> stepping up to do it.

...Robert
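To make the proposal concrete, invocations under the two suggested forms
might look like the sketch below; the option names are invented purely to
illustrate the syntax shapes and do not exist at this point in the
discussion.

    -- Hypothetical option names, shown only to illustrate the two
    -- proposed syntax forms.
    EXPLAIN (suppress_costs, suppress_rows) SELECT * FROM pg_class;
    EXPLAIN (costs = off, analyze = on)     SELECT * FROM pg_class;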
On Wed, 2009-04-15 at 20:58 -0400, Robert Haas wrote:
> On Wed, Apr 15, 2009 at 7:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> The output of EXPLAIN is nowhere near stable enough to use within the
>> current exact-match regression test framework. I'm not sure it would
>> be stable even if we suppressed the rowcount and cost figures. Those
>> figures vary across platforms (because of alignment effects and probably
>> other things) and are also sensitive to the timing of autovacuums. It
>> is known that a nontrivial fraction of the existing regression test
>> cases do suffer from uninteresting plan changes across platforms or
>> as a result of various phase-of-the-moon effects; that's why we keep
>> having to add "ORDER BY" clauses now and then.
>
> Interesting. I suppose you could insulate yourself from this somewhat
> by populating pg_statistic with a particular set of values rather than
> relying on ANALYZE to gather them, but this would have the substantial
> downside of being way more work to maintain, especially if anyone ever
> changed pg_statistic.
>
> On a more practical level, I do think we need to give real
> consideration to some kind of options syntax for EXPLAIN, maybe
> something as simple as:
>
> EXPLAIN (option_name, ...) query
>
> Or maybe:
>
> EXPLAIN (option_name = value, ...) query
>
> It may or may not be the case that generating a useful regression test
> suite for the planner is too much work for anyone to bother, but they
> certainly won't if the tools aren't available. It seems we get at
> least one request a month for some kind of explain-output option:
> suppress row counts, suppress costs, gather I/O statistics, show
> outputs, show # of batches for a hash join, and on and on and on. I
> think we should implement a very basic version that maybe does nothing
> more than let you optionally suppress some of the existing output, but
> which provides an extensible syntax for others to build on.

I think the way to do this is to introduce plan output in XML (that
matches the node structure of the plan). We can then filter away any junk
we don't want to see for regression tests, or better still augment the
exact-match framework with a fuzzy-match spec that allows us to specify a
range of values.

The skill would be in constructing a set of tests that was not sensitive
to minor changes. The OP's join for example had a huge cost range
difference that would have clearly shown up in a regression test.

This will only move forward if it adds value directly for Tom, so if it's
worth doing then he needs to specify it and ask for someone to do it.
There will be someone available if the task is well defined.

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
2009/4/16 Simon Riggs <simon@2ndquadrant.com>:
> On Wed, 2009-04-15 at 20:58 -0400, Robert Haas wrote:
>> On Wed, Apr 15, 2009 at 7:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> The output of EXPLAIN is nowhere near stable enough to use within the
>>> current exact-match regression test framework. I'm not sure it would
>>> be stable even if we suppressed the rowcount and cost figures. Those
>>> figures vary across platforms (because of alignment effects and probably
>>> other things) and are also sensitive to the timing of autovacuums. It
>>> is known that a nontrivial fraction of the existing regression test
>>> cases do suffer from uninteresting plan changes across platforms or
>>> as a result of various phase-of-the-moon effects; that's why we keep
>>> having to add "ORDER BY" clauses now and then.
>>
>> Interesting. I suppose you could insulate yourself from this somewhat
>> by populating pg_statistic with a particular set of values rather than
>> relying on ANALYZE to gather them, but this would have the substantial
>> downside of being way more work to maintain, especially if anyone ever
>> changed pg_statistic.
>>
>> On a more practical level, I do think we need to give real
>> consideration to some kind of options syntax for EXPLAIN, maybe
>> something as simple as:
>>
>> EXPLAIN (option_name, ...) query
>>
>> Or maybe:
>>
>> EXPLAIN (option_name = value, ...) query
>>
>> It may or may not be the case that generating a useful regression test
>> suite for the planner is too much work for anyone to bother, but they
>> certainly won't if the tools aren't available. It seems we get at
>> least one request a month for some kind of explain-output option:
>> suppress row counts, suppress costs, gather I/O statistics, show
>> outputs, show # of batches for a hash join, and on and on and on. I
>> think we should implement a very basic version that maybe does nothing
>> more than let you optionally suppress some of the existing output, but
>> which provides an extensible syntax for others to build on.
>
> I think the way to do this is to introduce plan output in XML (that
> matches the node structure of the plan). We can then filter away any
> junk we don't want to see for regression tests, or better still augment
> the exact-match framework with a fuzzy-match spec that allows us to
> specify a range of values.

I think XML explain output is a good idea, but I don't think it's a
substitute for better options to control the human-readable form. But the
nice thing is that with an extensible syntax, this is not an either/or
proposition.

> The skill would be in constructing a set of tests that was not sensitive
> to minor changes. The OP's join for example had a huge cost range
> difference that would have clearly shown up in a regression test.
>
> This will only move forward if it adds value directly for Tom, so if
> it's worth doing then he needs to specify it and ask for someone to do
> it. There will be someone available if the task is well defined.

I'm not sure if by this you mean the EXPLAIN changes or the regression
tests, but either way I think you're half right: it's probably not
necessary for Tom to provide the spec, but it would sure be nice if he
could at least indicate his lack of objection to accepting a well-designed
patch in one of these areas - because no one is going to want to go to the
trouble of doing either of these things and then have Tom say "well, I
never liked that idea anyway".

...Robert
On Thu, Apr 16, 2009 at 06:12:10AM +0100, Simon Riggs wrote:
> >
> > EXPLAIN (option_name, ...) query
> >
> > Or maybe:
> >
> > EXPLAIN (option_name = value, ...) query
> >
> > It may or may not be the case that generating a useful regression
> > test suite for the planner is too much work for anyone to bother,
> > but they certainly won't if the tools aren't available. It seems
> > we get at least one request a month for some kind of
> > explain-output option: suppress row counts, suppress costs, gather
> > I/O statistics, show outputs, show # of batches for a hash join,
> > and on and on and on. I think we should implement a very basic
> > version that maybe does nothing more than let you optionally
> > suppress some of the existing output, but which provides an
> > extensible syntax for others to build on.
>
> I think the way to do this is to introduce plan output in XML

If we're going with a serialization, which I think would be an excellent
idea, how about one that's light-weight and human-readable like JSON?

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On Thu, Apr 16, 2009 at 11:21 AM, David Fetter <david@fetter.org> wrote:
> If we're going with a serialization, which I think would be an
> excellent idea, how about one that's light-weight and human-readable
> like JSON?

Wow, that's a great idea for another option to EXPLAIN. Wouldn't it be
nice if EXPLAIN supported an options syntax?!!! :-)

...Robert
Robert Haas <robertmhaas@gmail.com> writes:
> I think XML explain output is a good idea, but I don't think it's a
> substitute for better options to control the human-readable form.

Yeah. I think a well-designed XML output format for EXPLAIN is a fine
thing to work on, but I don't believe it would make the "create a planner
test suite" problem noticeably easier. I see the purpose of an XML format
as being to allow tools like Red Hat's old Visual Explain (now maintained
by EDB IIRC) to parse EXPLAIN's output with somewhat better odds of not
breaking from one release to the next.

        regards, tom lane
On 16 Apr 2009, at 16:21, David Fetter wrote:
> On Thu, Apr 16, 2009 at 06:12:10AM +0100, Simon Riggs wrote:
>>
>> I think the way to do this is to introduce plan output in XML
>
> If we're going with a serialization, which I think would be an
> excellent idea, how about one that's light-weight and human-readable
> like JSON?

+1

xml/json is machine readable. Personally, I don't think explain (analyze)
output is hard for a human to read; quite the contrary.
On Thu, Apr 16, 2009 at 2:04 PM, Grzegorz Jaskiewicz <gj@pointblue.com.pl> wrote:
>
> On 16 Apr 2009, at 16:21, David Fetter wrote:
>
>> On Thu, Apr 16, 2009 at 06:12:10AM +0100, Simon Riggs wrote:
>>>
>>> I think the way to do this is to introduce plan output in XML
>>
>> If we're going with a serialization, which I think would be an
>> excellent idea, how about one that's light-weight and human-readable
>> like JSON?
>
> +1
>
> xml/json is machine readable. Personally, I don't think explain
> (analyze) output is hard for a human to read; quite the contrary.

Is that because of how the output is formatted though, or because the
concepts are difficult to express? (I agree though, JSON is better,
especially for structures that are possibly highly nested.)

merlin
On 16 Apr 2009, at 19:41, Merlin Moncure wrote:
>
> Is that because of how the output is formatted though, or because the
> concepts are difficult to express? (I agree though, JSON is better,
> especially for structures that are possibly highly nested.)

What I mean is that what PostgreSQL currently displays as explain
(analyze [verbose]) is clear and understandable. Also, it is getting
better and better from version to version. So I don't personally agree
that it is unreadable, and I am up for JSON or XML output (and I am sure
many users like me are).
Btw, there was an "EXPLAIN XML" Summer of Code project, wasn't there?
On Wed, 2009-04-15 at 18:04 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndQuadrant.com> writes:
>> I've always been scared to ask this question, in case the answer is No,
>> but: Do we have a set of regression tests for the optimizer anywhere?
>
> Nothing beyond what is in the standard tests. While that's okay at
> catching wrong answers --- and we have memorialized a number of such
> issues in the tests --- the framework is not good for catching things
> that run slower than they ought.

Can't we make a first cut at it by just running with timings on and then
comparing ratios of running times - maybe with a 2-3X tolerance - to
catch the most obvious regressions?

-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training
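A minimal sketch of what such a ratio check could look like, assuming a
hypothetical baseline table and a deliberately generous 3X tolerance; the
names, the query under test, and the numbers are all invented for
illustration.

    -- Hypothetical timing-ratio check: compare a query's runtime against
    -- a stored baseline and warn if it exceeds a generous tolerance.
    CREATE TABLE IF NOT EXISTS perf_baseline (
        test_name   text PRIMARY KEY,
        baseline_ms numeric
    );

    DO $$
    DECLARE
        t0         timestamptz;
        elapsed_ms numeric;
        base_ms    numeric;
    BEGIN
        t0 := clock_timestamp();
        PERFORM count(*) FROM generate_series(1, 100000);  -- query under test
        elapsed_ms := extract(epoch FROM clock_timestamp() - t0) * 1000;

        SELECT baseline_ms INTO base_ms
        FROM perf_baseline WHERE test_name = 'demo_query';

        IF base_ms IS NOT NULL AND elapsed_ms > 3 * base_ms THEN
            RAISE WARNING 'demo_query regressed: % ms vs baseline % ms',
                          elapsed_ms, base_ms;
        END IF;
    END $$;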
Hannu Krosing wrote:
> On Wed, 2009-04-15 at 18:04 -0400, Tom Lane wrote:
>> Simon Riggs <simon@2ndQuadrant.com> writes:
>>> I've always been scared to ask this question, in case the answer is No,
>>> but: Do we have a set of regression tests for the optimizer anywhere?
>>
>> Nothing beyond what is in the standard tests. While that's okay at
>> catching wrong answers --- and we have memorialized a number of such
>> issues in the tests --- the framework is not good for catching things
>> that run slower than they ought.
>
> Can't we make a first cut at it by just running with timings on and then
> comparing ratios of running times - maybe with a 2-3X tolerance - to
> catch the most obvious regressions?

The current regression tests are a series of yes/no answers to this
question: does the actual output match the expected output? Nothing like
as fuzzy as what you are suggesting is supported at all.

From time to time suggestions are made for a performance farm as a kind
of analog to the buildfarm, which would look at quantitative timing tests
rather than just success/failure tests. It's on my (very long) list of
things to do, but it's not something we can simply tack on to the current
regression suite.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> writes:
> Hannu Krosing wrote:
>> Can't we make a first cut at it by just running with timings on and then
>> comparing ratios of running times - maybe with a 2-3X tolerance - to
>> catch the most obvious regressions?

> The current regression tests are a series of yes/no answers to this
> question: does the actual output match the expected output? Nothing like
> as fuzzy as what you are suggesting is supported at all.

Quite aside from that, I don't think that's really the framework we want.
The issues that I think would be worth having tests for are questions like
"will the planner push comparisons to constants down through a full
join?" (which was the bug that started this thread). With a test
methodology like the above, it wouldn't be enough to write a test case
that exercised the behavior; you'd have to make sure that any alternative
plan was an order of magnitude worse.

I'm inclined to think that some sort of fuzzy examination of EXPLAIN
output (in this example, "are there constant-comparison conditions in the
relation scans?") might do the job, but I'm not sure how we'd go about
that.

        regards, tom lane
On Sat, Apr 18, 2009 at 1:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I'm inclined to think that some sort of fuzzy examination of EXPLAIN
> output (in this example, "are there constant-comparison conditions in
> the relation scans?") might do the job, but I'm not sure how we'd
> go about that.

If we just removed all the costs and other metrics from the explain plan
and verified that the plan structure was the same, would you be happy
with that? It would still be work to maintain every time the planner
changed.

I suppose if we had explain-to-a-table then we could run explain and then
run an SQL query to verify the specific properties we were looking for.

A similar thing could be done with XML if we had powerful enough XML
predicates, but we have a lot more SQL skills in-house than XML.

-- 
greg
Greg Stark <stark@enterprisedb.com> writes:
> On Sat, Apr 18, 2009 at 1:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm inclined to think that some sort of fuzzy examination of EXPLAIN
>> output (in this example, "are there constant-comparison conditions in
>> the relation scans?") might do the job, but I'm not sure how we'd
>> go about that.

> If we just removed all the costs and other metrics from the explain
> plan and verified that the plan structure was the same, would you be
> happy with that? It would still be work to maintain every time the
> planner changed.

> I suppose if we had explain-to-a-table then we could run explain and
> then run an SQL query to verify the specific properties we were
> looking for.

> A similar thing could be done with XML if we had powerful enough XML
> predicates, but we have a lot more SQL skills in-house than XML.

Yeah, I suspect the only really good answers involve the ability to apply
programmable checks to the EXPLAIN output. A SQL-based solution shouldn't
need any external moving parts, whereas analyzing XML output presumably
would.

I guess then one criterion for whether you've built a good output
definition for explain-to-table is whether it's feasible to check this
type of question using SQL predicates.

        regards, tom lane
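Even without a built-in explain-to-table feature, the kind of check being
discussed can be sketched by capturing EXPLAIN's text output from
PL/pgSQL and querying it with ordinary SQL predicates. The example below
reuses the hypothetical attachment_isins / rec_isins tables from the
sketch earlier in the thread, and the pattern-matching conditions are
illustrative only.

    -- Rough stand-in for "explain to a table": capture EXPLAIN output
    -- rows, then ask a question about the plan with a SQL predicate.
    CREATE TEMP TABLE plan_lines (line text);

    DO $$
    DECLARE
        l text;
    BEGIN
        FOR l IN EXECUTE
            'EXPLAIN SELECT isin
             FROM attachment_isins
             FULL OUTER JOIN rec_isins USING (attachment, isin)
             WHERE attachment = 2698120'
        LOOP
            INSERT INTO plan_lines VALUES (l);
        END LOOP;
    END $$;

    -- "Are there constant-comparison conditions in the relation scans?"
    SELECT count(*) >= 2 AS condition_pushed_to_both_inputs
    FROM plan_lines
    WHERE line LIKE '%attachment = 2698120%'
      AND (line LIKE '%Index Cond:%' OR line LIKE '%Filter:%');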
Tom Lane wrote:
> Greg Stark <stark@enterprisedb.com> writes:
...
>> I suppose if we had explain-to-a-table then we could run explain and
>> then run an SQL query to verify the specific properties we were
>> looking for.
>
>> A similar thing could be done with XML if we had powerful enough XML
>> predicates, but we have a lot more SQL skills in-house than XML.
>
> Yeah, I suspect the only really good answers involve the ability to
> apply programmable checks to the EXPLAIN output. A SQL-based solution
> shouldn't need any external moving parts, whereas analyzing XML output
> presumably would.

If explain-to-a-table were just one of the available options and not the
only option, that would be great. The big O only has this option, and it
totally sux if you want to explain a query on a production environment
where you can't just create tables here and there.

Tino
On Sat, 2009-04-18 at 08:32 -0400, Tom Lane wrote:
> The issues that I think would be worth having tests for are
> questions like "will the planner push comparisons to constants down
> through a full join?" (which was the bug that started this thread).

Yes, that sounds good.

> With a test methodology like the above, it wouldn't be enough to
> write a test case that exercised the behavior; you'd have to make
> sure that any alternative plan was an order of magnitude worse.
>
> I'm inclined to think that some sort of fuzzy examination of EXPLAIN
> output (in this example, "are there constant-comparison conditions in
> the relation scans?") might do the job, but I'm not sure how we'd
> go about that.

We can compose unit tests that have plans where the presence/absence of
the optimizer action is critical to a good plan, i.e. if the
constant-comparison is *not* pushed down, the plan will be unable to use
an index created for it and so the run cost will be much greater. We can
then define success in terms of a reduction in plan cost below a
threshold.

So for each test we specify:

* SQL
* a success threshold for cost

e.g. for a piece of SQL we have cost = 60002.2 without optimisation or
12.45 with optimisation, so we make the threshold 20.0. That is enough
slack to allow for changes in plan costs across platforms and over time,
yet tight enough to discriminate between a working and a non-working
optimisation.

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
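A rough sketch of how such a cost-threshold check could be scripted,
again reusing the hypothetical tables from earlier in the thread; the
threshold value and the expression that pulls the estimated total cost
out of EXPLAIN's top line are illustrative only.

    -- Hypothetical cost-threshold test: run EXPLAIN, extract the
    -- estimated total cost of the top plan node, and warn if it exceeds
    -- the threshold chosen for this test case.
    DO $$
    DECLARE
        top_line   text;
        total_cost numeric;
        threshold  numeric := 20.0;   -- illustrative value from the proposal
    BEGIN
        EXECUTE 'EXPLAIN SELECT isin
                 FROM attachment_isins
                 FULL OUTER JOIN rec_isins USING (attachment, isin)
                 WHERE attachment = 2698120
                 GROUP BY isin LIMIT 1000'
            INTO top_line;            -- first row = top plan node

        -- Top line looks like: "Limit  (cost=12.30..12.45 rows=... width=...)"
        total_cost := substring(top_line from 'cost=[0-9.]+[.][.]([0-9.]+)')::numeric;

        IF total_cost > threshold THEN
            RAISE WARNING 'pushdown apparently not applied: estimated cost % exceeds %',
                          total_cost, threshold;
        END IF;
    END $$;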