Thread: Re: allowing extensions to control planner behavior
Robert Haas <robertmhaas@gmail.com> writes: > I'm somewhat expecting to be flamed to a well-done crisp for saying > this, but I think we need better ways for extensions to control the > behavior of PostgreSQL's query planner. Nah, I won't flame you for that, it's a reasonable thing to think about. However, the devil is in the details, and ... > The attached patch, briefly mentioned above, essentially converts the > enable_* GUCs into RelOptInfo properties where the defaults are set by > the corresponding GUCs. ... this doesn't seem like it's moving the football very far at all. The enable_XXX GUCs are certainly blunt instruments, but I'm not sure how much better it is if they're per-rel. For example, I don't see how this gets us any closer to letting an extension fix a poor choice of join order. Or, if your problem is that the planner wants to scan index A but you want it to scan index B, enable_indexscan won't help. > ... On the other hand, the more I look at > what our enable_* GUCs actually do, the less impressed I am. IMHO, > things like enable_hashjoin make a lot of sense, but enable_sort seems > like it just controls an absolutely random smattering of behaviors in > a way that seems to me to have very little to recommend it, and I've > complained elsewhere about how enable_indexscan and > enable_indexonlyscan are really quite odd when you look at how they're > implemented. Yeah, these sorts of questions aren't made better this way either. If anything, having extensions manipulating these variables will make it even harder to rethink what they do. You mentioned that there is prior art out there, but this proposal doesn't seem like it's drawing on any such thing. What ideas should we be stealing? regards, tom lane
On Mon, Aug 26, 2024 at 1:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > I'm somewhat expecting to be flamed to a well-done crisp for saying
> > this, but I think we need better ways for extensions to control the
> > behavior of PostgreSQL's query planner.
>
> Nah, I won't flame you for that, it's a reasonable thing to think
> about. However, the devil is in the details, and ...

Thank you. Not being flamed is one of my favorite things. :-)

> > The attached patch, briefly mentioned above, essentially converts the
> > enable_* GUCs into RelOptInfo properties where the defaults are set by
> > the corresponding GUCs.
>
> ... this doesn't seem like it's moving the football very far at all.
> The enable_XXX GUCs are certainly blunt instruments, but I'm not sure
> how much better it is if they're per-rel. For example, I don't see
> how this gets us any closer to letting an extension fix a poor choice
> of join order. Or, if your problem is that the planner wants to scan
> index A but you want it to scan index B, enable_indexscan won't help.

Well, I agree that this doesn't address everything you might want to do, and I thought I said so, admittedly in the middle of a long wall of text. This would JUST be a step toward letting an extension control the scan and join methods, not the join order or the choice of index or whatever else there is. But the fact that it doesn't do everything is not a strike against it unless there's some competing design that lets you take care of everything with a single mechanism, which I do not see as realistic. If this proposal -- or really any proposal in this area -- gets through, I will very happily propose more things to address the other problems that I know about, but it doesn't make sense to do a huge amount of work to craft a comprehensive solution before we've had any discussion here.

> Yeah, these sorts of questions aren't made better this way either.
> If anything, having extensions manipulating these variables will
> make it even harder to rethink what they do.

Correct, but my proposal to make enable_indexscan behave like enable_indexonlyscan, which I thought was a slam-dunk, just provoked a lot of grumbling. There's a kind of chicken-and-egg problem here. If the existing GUCs were better designed, then using them here would make sense. And if the patch that I attached to my previous email were in master, then cleaning up the design of the GUCs would have more value. But if I can't make any progress with either problem because the other problem also exists, then I'm kind of boxed into a corner. I could also propose something here that diverges from the enable_* behavior, but then people will complain that the two shouldn't be inconsistent, which I agree with, BTW. I thought maybe doing this first would make sense, and then we could refine afterwards.

> You mentioned that there is prior art out there, but this proposal
> doesn't seem like it's drawing on any such thing. What ideas should
> we be stealing?

Depends what you mean. As far as PostgreSQL-related things, the two things that I mentioned in my opening paragraph and for which I provided links seem to me to be the best examples we have. It's pretty easy to see how to make pg_hint_plan require less kludgery, and I think we can just iterate through the various problems there and solve them pretty easily by adding a few hooks here and there and a few extension-settable structure members here and there.
I am engaging in some serious hand-waving here, but this is not rocket science. I am confident that if you made it your top priority to get stuff into PG 18 that would thoroughly un-hackify pg_hint_plan, you could be done in months, possibly weeks. It will take me longer, but if we have an agreement in principle that it is worth doing, I just can't see it as being particularly difficult.

Amazon's query plan management stuff is a much tougher lift. For that, you're asking the planner to try to create a new plan which is like some old plan that you got before. So in a perfect world, you want to control every planner decision. That's hard just because there are a lot of them. If for example you want to get the same index scan that you got before, you need not only to get the same type of index scan (index, index-only, bitmap) and the same index, but also things like the same non-native saop treatment, which seems like it would be asking an awful lot of a hook system. On the other hand, maybe you can cheat. If your regurgitate-the-same-plan system could force the same join order, join methods, scan methods, choice of indexes, and probably some stuff about aggregate and appendrel strategy, it might be close enough to giving you the same plan you had before that nobody would really care if the non-native saop treatment was different. I'm almost positive it's better than not having a feature, which is where we are today. And although allowing control over just the major decisions in query planning doesn't seem like something we can do in one patch, I don't think it takes 100 patches either. Maybe five or ten.

If we step outside of the PostgreSQL ecosystem, I think we should look at Oracle as one example. I have never been a real believer in hints like SeqScan(foo), because if you don't fix the cardinality estimate for table foo, then the rest of your plan is going to suck, too. On the other hand, "hint everything" for some people in some situations is a way to address that. It's stupid in a sense, but if you have an automated way to do it, especially one that allows applying hints out-of-query, it's not THAT stupid. Also, Oracle offers some other pretty useful hints. In particular, using the LEADING hint to set the driving table for the query plan does not seem dumb to me at all. Hinting that things should be parallel or not, and with what degree of parallelism, also seems quite reasonable. They've also got ALL_ROWS and FIRST_ROWS(n) hints, which let you say whether you want fast-start behavior or not, and it hardly needs to be said how often we get that wrong or how badly.

pg_hint_plan, which copies a lot of stuff that Oracle does, innovates by allowing you to hint that a certain join will return X number of rows or that the number of rows that the planner thinks should be returned should be corrected by multiplying, adding, or subtracting some constant. I'm not sure how useful this really is, because I feel like a lot of times the planner would just pick some join order where that particular join is no longer used. E.g., if the query is A JOIN B JOIN C and I hint the AB join, perhaps the planner will just start by joining C to either A or B, and then that join will never occur. However, that can be avoided by also using LEADING, or maybe in some other cleverer way, like making an AB selectivity hint apply at whatever point in the plan you join something that includes A to something that includes B.
There are some details on SQL Server's hinting here: https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql?view=sql-server-ver16

It looks pretty complicated, but some of the basic concepts that you'd expect are also present here: force the join method or rule one out, force the use of an index or of no index, force the join order. Those seem to be the major things that "everyone" supports. I think we'd want to expand a bit on that to allow forcing aggregate strategy and perhaps some PostgreSQL-specific things. E.g., other systems won't have a hint to force or forbid a TIDRangeScan, because that's a PostgreSQL-specific concept, but it would be silly to make a system that lets an extension control sequential scans and index scans but not other, more rarely-used ways of scanning a relation, so probably we want to do something.

I don't know if that helps, in terms of context. If it doesn't, let me know what would help. And just to be clear, I *absolutely* think we need to take a light touch here. If we install a ton of new highly-opinionated infrastructure we will make a lot of people mad and that's definitely not where I want to end up. I just think we need to grow beyond "the planner is a black box and you shall not presume to direct it." If every other system provides a way to control, say, the join order, then it seems reasonable to suppose that a PostgreSQL extension should be able to control the join order too. A lot of details might be different but if multiple other systems have the concept then the concept itself probably isn't ridiculous.

--
Robert Haas
EDB: http://www.enterprisedb.com
On 8/27/24 11:45, Robert Haas wrote:
> On Mon, Aug 26, 2024 at 3:28 PM Robert Haas <robertmhaas@gmail.com> wrote:
>> Well, I agree that this doesn't address everything you might want to
>> do, ... I will very happily propose more things to
>> address the other problems that I know about ...
>
> In that vein, here's a new patch set where I've added a second patch
> that allows extensions to control choice of index. It's 3 lines of new
> code, plus 7 lines of comments and whitespace. Feeling inspired, I
> also included a contrib module, initial_vowels_are_evil, to
> demonstrate how this can be used by an extension that wants to disable
> certain indexes but not others. This is obviously quite silly and we
> might (or might not) want a more serious example in contrib, but it
> demonstrates how easy this can be with just a tiny bit of core
> infrastructure:
>
> robert.haas=# load 'initial_vowels_are_evil';
> LOAD
> robert.haas=# explain select count(*) from pgbench_accounts;
>                                                     QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=2854.29..2854.30 rows=1 width=8)
>    ->  Index Only Scan using pgbench_accounts_pkey on pgbench_accounts  (cost=0.29..2604.29 rows=100000 width=0)
> (2 rows)
>
> robert.haas=# alter index pgbench_accounts_pkey rename to evil_pgbench_accounts_pkey;
> ALTER INDEX
> robert.haas=# explain select count(*) from pgbench_accounts;
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Aggregate  (cost=2890.00..2890.01 rows=1 width=8)
>    ->  Seq Scan on pgbench_accounts  (cost=0.00..2640.00 rows=100000 width=0)
> (2 rows)
>
> robert.haas=#

Nice!

On the one hand, excluding indexes by initial vowels is definitely silly. On the other, I can see how it might be useful for an extension to exclude indexes based on a regex match of the index name or something similar, at least for testing.

--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
On Tue, Aug 27, 2024 at 11:57 AM Joe Conway <mail@joeconway.com> wrote:
> On the one hand, excluding indexes by initial vowels is definitely
> silly. On the other, I can see how it might be useful for an extension
> to exclude indexes based on a regex match of the index name or something
> similar, at least for testing.

Right. I deliberately picked a contrib module that implemented a silly policy, because what I wanted to demonstrate with it is that this little bit of infrastructure provides enough mechanism to implement whatever policy you want. And I think it demonstrates it quite well, because the whole contrib module to implement this has just 6 lines of executable code. If you wanted a policy that would be more realistically useful, you'd need more code, but only however much is needed to implement your policy. All you need do is replace this strchr call with something else:

    if (name != NULL && strchr("aeiouAEIOU", name[0]) != NULL)
        index->disabled = true;

--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes: > In that vein, here's a new patch set where I've added a second patch > that allows extensions to control choice of index. I'm minus-several on this bit, because that is a solved problem and we really don't need to introduce More Than One Way To Do It. The intention has always been that get_relation_info_hook can editorialize on the rel's indexlist by removing entries (or adding fake ones, in the case of hypothetical-index extensions). For that matter, if you really want to do it by modifying the IndexInfo rather than deleting it from the list, that's already possible: just set indexinfo->hypothetical = true. regards, tom lane
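[For reference, a minimal sketch of the existing mechanism described above -- an extension that uses get_relation_info_hook to drop indexes from rel->indexlist by name -- might look like the following. The hook, the IndexOptInfo fields, and foreach_delete_current() are real core facilities; the "evil_" prefix policy and the function names are purely illustrative.]

#include "postgres.h"
#include "fmgr.h"
#include "nodes/pathnodes.h"
#include "optimizer/plancat.h"
#include "utils/lsyscache.h"

PG_MODULE_MAGIC;

static get_relation_info_hook_type prev_get_relation_info_hook = NULL;

/*
 * Remove from consideration any index whose name starts with "evil_",
 * by editing rel->indexlist as described above.
 */
static void
hide_indexes_by_name(PlannerInfo *root, Oid relationObjectId,
					 bool inhparent, RelOptInfo *rel)
{
	ListCell   *lc;

	if (prev_get_relation_info_hook)
		prev_get_relation_info_hook(root, relationObjectId, inhparent, rel);

	foreach(lc, rel->indexlist)
	{
		IndexOptInfo *index = (IndexOptInfo *) lfirst(lc);
		char	   *name = get_rel_name(index->indexoid);

		if (name != NULL && strncmp(name, "evil_", 5) == 0)
			rel->indexlist = foreach_delete_current(rel->indexlist, lc);
	}
}

void
_PG_init(void)
{
	prev_get_relation_info_hook = get_relation_info_hook;
	get_relation_info_hook = hide_indexes_by_name;
}

[Loaded via LOAD or session_preload_libraries, a module like this removes matching indexes before path generation ever sees them, which is why no additional core support is needed for this particular case.]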
On Tue, Aug 27, 2024 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote: > Robert Haas <robertmhaas@gmail.com> writes: > > In that vein, here's a new patch set where I've added a second patch > > that allows extensions to control choice of index. > > I'm minus-several on this bit, because that is a solved problem and > we really don't need to introduce More Than One Way To Do It. The > intention has always been that get_relation_info_hook can editorialize > on the rel's indexlist by removing entries (or adding fake ones, > in the case of hypothetical-index extensions). For that matter, > if you really want to do it by modifying the IndexInfo rather than > deleting it from the list, that's already possible: just set > indexinfo->hypothetical = true. Well, now I'm confused. Just yesterday, in response to the 0001 patch that allows extensions to exert control over the join strategy, you complained that "Or, if your problem is that the planner wants to scan index A but you want it to scan index B, enable_indexscan won't help." So today, I produced a patch demonstrating how we could address that issue, and your response is "well, actually we don't need to do anything about it because that problem is already solved." But if that is true, then the fact that yesterday's patch did nothing about it was a feature, not a bug. Am I missing something here? -- Robert Haas EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes: > Well, now I'm confused. Just yesterday, in response to the 0001 patch > that allows extensions to exert control over the join strategy, you > complained that "Or, if your problem is that the planner wants to scan > index A but you want it to scan index B, enable_indexscan won't help." I was just using that to illustrate that making the enable_XXX GUCs relation-local covers only a small part of the planner-control problem. You had not, at that point, been very clear that you intended that patch as only a small part of a solution. I do think that index selection is pretty well under control already, thanks to stuff that we put in ages ago at the urging of people who wanted to write "index advisor" extensions. (The fact that that area seems a bit moribund is disheartening, though. Is it a lack of documentation?) regards, tom lane
On Tue, Aug 27, 2024 at 2:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote: > I was just using that to illustrate that making the enable_XXX GUCs > relation-local covers only a small part of the planner-control problem. > You had not, at that point, been very clear that you intended that > patch as only a small part of a solution. Ah, OK, apologies for the lack of clarity. I actually think it's a medium part of the solution. I believe the minimum viable product here is probably something like: - control over scan methods - control over index selection - control over join methods - control over join order It gets a lot better if we also have: - control over aggregation methods - something that I'm not quite sure about for appendrels - control over whether parallelism is used and the degree of parallelism If control over index selection is already adequate, then the proposed patch is one way to get about 1/3 of the way to the MVP, which isn't nothing. Maybe I'm underestimating the amount of stuff that people are going to want here, but if you look at pg_hint_plan, it isn't doing a whole lot more than this. > I do think that index selection is pretty well under control already, > thanks to stuff that we put in ages ago at the urging of people who > wanted to write "index advisor" extensions. (The fact that that > area seems a bit moribund is disheartening, though. Is it a lack > of documentation?) So a couple of things about this. First, EDB maintains closed-source index advisor code that uses this machinery. In fact, if I'm not mistaken, we now have two extensions that use it. So it's not dead from that point of view, but of course anything closed-source can't be promoted through community channels. There's open-source code around too; to my knowledge, https://github.com/HypoPG/hypopg is the leading open-source implementation, but my knowledge may very well be incomplete. Second, I do think that the lack of documentation poses somewhat of a challenge, and our exchange about whether an IndexOptInfo needs a disabled flag is perhaps an example of that. To be fair, now that I look at it, the comment where get_relation_info_hook does say that you can remove indexes from the index list, so maybe I should have realized that the problem can be solved that way, but on the other hand, the comment for set_rel_pathlist_hook claims you can delete paths from the pathlist, which AFAICS is completely non-viable, so one can't necessarily rely too much on the comments in this area to learn what actually does and does not work. Having some in-core examples showing how to use this stuff correctly and demonstrating its full power would also be really helpful. Right now, I often find myself looking at out-of-core code which is sometimes poorly written and frequently resorts to nasty hacks. It can be hard to determine whether those nasty hacks are there because they're the only way to implement some bit of functionality or because the author missed an opportunity to do better. Third, I think there's simply a lack of critical mass in terms of our planner hooks. While the ability to add hypothetical indexes has some use, the ability to remove indexes from consideration is probably significantly more useful. But not if it's the only technique for fixing a bad plan that you have available. Nobody gets excited about a toolbox that contains just one tool. 
That's why I'm keen to expand what can be done cleanly via hooks, and I think if we do that and also provide either some very good documentation or some well-written example implementations, we'll get more traction here. -- Robert Haas EDB: http://www.enterprisedb.com
On Mon, Aug 26, 2024 at 1:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> For example, I don't see
> how this gets us any closer to letting an extension fix a poor choice
> of join order.

Thinking more about this particular sub-problem, let's say we're joining four tables A, B, C, and D. An extension wants to compel join order A-B-C-D. Let's suppose, however, that it wants to do this in a way where planning won't fail if that's impossible, so it wants to use disabled_nodes rather than skipping path generation entirely. When we're planning the baserels, we don't need to do anything special. When we plan 2-way joins, we need to mark all paths disabled except those originating from the A-B join. When we plan 3-way joins, we need to mark all paths disabled except those originating from an (A-B)-C join. When we plan the final 4-way join, we don't really need to do anything extra: the only way to end up with a non-disabled path at the top level is to pick a path from the (A-B)-C join and a path from D.

There's a bit of nuance here, though. Suppose that when planning the A-B join, the planner generates HashJoin(SeqScan(B),Hash(A)). Does that path need to be disabled? If you think that join order A-B-C-D means that table A should be the driving table, then the answer is yes, because that path will lead to a join order beginning with B-A, not one beginning with A-B. But you might also have a mental model where it doesn't matter which table is on which side of the join, and as long as you start by joining A and B in some way, that's good enough to qualify as an A-B join order. I believe actual implementations vary in which approach they take.

I think that the beginning of add_paths_to_joinrel() looks like a useful spot to get control. You could, for example, have a hook there which returns a bitmask indicating which of merge-join, nested-loop, and hash join will be allowable for this call; that hook would then allow for control over the join method and the join order, and the join order control is strong enough that you can implement either of the two interpretations above. This idea theorizes that 0001 was wrong to make the path mask a per-RelOptInfo value, because there could be many calls to add_paths_to_joinrel() for a single RelOptInfo and, in this idea, every one of those can enforce a different mask.

Potentially, such a hook could return additional information, either by using more bits of the bitmask or by returning other information via some other data type. For instance, I still believe that distinguishing between parameterized-nestloops and unparameterized-nestloops would be real darn useful, so we could have separate bits for each; or you could have a bit to control whether foreign-join paths get disabled (or are considered at all), or you could have separate bits for merge joins that involve 0, 1, or 2 sorts. Whether we need or want any or all of that is certainly debatable, but the point is that if you did want some of that, or something else, it doesn't look difficult to feed that information through to the places where you would need it to be available.

Thoughts?

--
Robert Haas
EDB: http://www.enterprisedb.com
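[To make that concrete, one possible shape for such a hook is sketched below. Nothing here exists in core today; the hook name, the flag values, and the calling convention are purely illustrative.]

/* Hypothetical flags for the strategies add_paths_to_joinrel() considers. */
#define JOINPATH_ALLOW_NESTLOOP		0x01
#define JOINPATH_ALLOW_MERGEJOIN	0x02
#define JOINPATH_ALLOW_HASHJOIN		0x04
#define JOINPATH_ALLOW_FOREIGNJOIN	0x08

/* Hypothetical hook, called once per add_paths_to_joinrel() invocation. */
typedef uint32 (*join_path_setup_hook_type) (PlannerInfo *root,
											 RelOptInfo *joinrel,
											 RelOptInfo *outerrel,
											 RelOptInfo *innerrel,
											 JoinType jointype,
											 JoinPathExtraData *extra);

extern PGDLLIMPORT join_path_setup_hook_type join_path_setup_hook;

/*
 * At the top of add_paths_to_joinrel(), the planner would then do something
 * like this, and skip (or mark as disabled) any strategy whose bit is clear:
 *
 *		uint32	allowed = JOINPATH_ALLOW_NESTLOOP |
 *						  JOINPATH_ALLOW_MERGEJOIN |
 *						  JOINPATH_ALLOW_HASHJOIN |
 *						  JOINPATH_ALLOW_FOREIGNJOIN;
 *
 *		if (join_path_setup_hook)
 *			allowed = join_path_setup_hook(root, joinrel, outerrel,
 *										   innerrel, jointype, extra);
 */

[Because such a hook would run per call rather than per RelOptInfo, it could distinguish the A-B join from the B-A join, which is what makes the "driving table" interpretation implementable.]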
Hi, On Tue, Aug 27, 2024 at 03:11:15PM -0400, Robert Haas wrote: > Third, I think there's simply a lack of critical mass in terms of our > planner hooks. While the ability to add hypothetical indexes has some > use, the ability to remove indexes from consideration is probably > significantly more useful. JFTR, hypopg can also mask away/hide indexes since version 1.4.0: https://github.com/HypoPG/hypopg/commit/351f14a79daae8ab57339d2367d7f2fc639041f7 I haven't looked closely at the implementation though, and maybe you meant something else in the above entirely. Michael
Robert Haas <robertmhaas@gmail.com> writes: > I believe the minimum viable product here > is probably something like: > - control over scan methods > - control over index selection > - control over join methods > - control over join order Seems reasonable. It might be possible to say that our answer to "control over join order" is to provide a hook that can modify the "joinlist" before it's passed to make_one_rel. If you want to force a particular join order you can rearrange that list-of-lists-of-range-table-indexes to do so. The thing this would not give you is control over which rel is picked as outer in any given join step. Not sure how critical that bit is. regards, tom lane
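[To illustrate, if such a hook existed, forcing a left-deep order like ((A JOIN B) JOIN C) JOIN D would amount to rebuilding the joinlist with nested sub-lists, since make_rel_from_joinlist() plans each sub-list as a separate join problem. The hook and the helper below are hypothetical; a real implementation would locate the RangeTblRef nodes by their range-table indexes rather than taking them as arguments.]

#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Hypothetical helper for a joinlist-rewriting hook: given the RangeTblRef
 * nodes for A, B, C, and D from the original flat joinlist, return a nested
 * joinlist that forces the planner to join A and B first, then C, then D.
 * As noted above, this does not control which input ends up on the outer side.
 */
static List *
force_left_deep_order(Node *a, Node *b, Node *c, Node *d)
{
	List	   *ab = list_make2(a, b);				/* plan {A,B} as its own join problem */
	List	   *abc = list_make2((Node *) ab, c);	/* then join that result to C */

	return list_make2((Node *) abc, d);				/* and finally to D */
}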
On Tue, Aug 27, 2024 at 4:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote: > Seems reasonable. It might be possible to say that our answer > to "control over join order" is to provide a hook that can modify > the "joinlist" before it's passed to make_one_rel. If you want > to force a particular join order you can rearrange that > list-of-lists-of-range-table-indexes to do so. The thing this > would not give you is control over which rel is picked as outer > in any given join step. Not sure how critical that bit is. This has a big advantage over what I proposed yesterday in that it's basically declarative. With one call to the hook, you get all the information about the join order that you could ever want. That's really nice. However, I don't really think it quite works, partly because of what you mention here about not being able to control which rel ends up on which side of the join, which I do think is important, and also because if the join order isn't possible, planning will fail, rather than falling back to some other plan shape. If you have an idea how we could address those things within this same general framework, I'd be keen to hear it. It has occurred to me more than once that it might be really useful if we could attempt to plan under a set of constraints and then, if we don't end up finding a plan, retry without the constraints. But I don't quite see how to make it work. When I tried to do that as a solution to the disable_cost problem, it ended up meaning that once you couldn't satisfy every constraint perfectly, you gave up on even trying. I wasn't immediately certain that such behavior was unacceptable, but I didn't have to look any further than our own regression test suites to see that it was going to cause a lot of unhappiness. In this case, if we could attempt join planning with the user-prescribed join order and then try it again if we fail to find a path, that would be really cool. Or if we could do all of planning without generating disabled paths *at all* and then go back and restart if it becomes clear that's not working out, that would be slick. But, unless you have a clever idea, that all seems like advanced magic that should wait until we have basic things working. Right now, I think we should focus on getting something in place where we still try all the paths but an extension can arrange for some of them to be disabled. Then all the right things will happen naturally; we'll only be leaving some CPU cycles on the table. Which isn't amazing, but I don't think it's a critical defect either, and we can try to improve things later if we want to. -- Robert Haas EDB: http://www.enterprisedb.com
On Wed, Aug 28, 2024 at 8:37 AM Robert Haas <robertmhaas@gmail.com> wrote:
> This has a big advantage over what I proposed yesterday in that it's
> basically declarative. With one call to the hook, you get all the
> information about the join order that you could ever want. That's
> really nice.

Hmm. On further thought, I suppose that another disadvantage of this kind of declarative approach is that there are some kinds of constraints that you could conceivably want that you just can't declare, especially negative constraints. For example, imagine we're joining tables A1 ... A10 and we don't want A1 to be joined directly to A2. Or suppose you want to insist on a non-bushy plan. I don't think there's a way to express those types of requirements by frobbing the joinlist.

I'm not quite sure whether those kinds of gaps are sufficiently serious that we should worry about them. After all, there's a lot of things that you can express perfectly clearly with this kind of scheme. I don't think I know of something that you can do to control the join order in an existing hinting system that cannot be expressed as a manipulation of the joinlist. That's not to say that I am 100% confident that everything everyone could reasonably want to do can be expressed this way; in fact, I don't think that's true at all. But I _think_ that all of the things that I know about that people are actually doing _could_ be expressed this way, but for the join-direction and hard-failure problems I mentioned in my earlier reply.

--
Robert Haas
EDB: http://www.enterprisedb.com
On Tue, 2024-08-27 at 15:11 -0400, Robert Haas wrote: > - control over scan methods > - control over index selection > - control over join methods > - control over join order I suggest we split join order into "commutative" and "associative". The former is both useful and seems relatively easy -- A JOIN B or B JOIN A (though there's some nuance about when you try to make that decision). The latter requires controlling an explosion of possibilities, and would be an entirely different kind of hook. Regards, Jeff Davis
On Wed, Aug 28, 2024 at 3:23 PM Jeff Davis <pgsql@j-davis.com> wrote: > On Tue, 2024-08-27 at 15:11 -0400, Robert Haas wrote: > > - control over scan methods > > - control over index selection > > - control over join methods > > - control over join order > > I suggest we split join order into "commutative" and "associative". > > The former is both useful and seems relatively easy -- A JOIN B or B > JOIN A (though there's some nuance about when you try to make that > decision). > > The latter requires controlling an explosion of possibilities, and > would be an entirely different kind of hook. My proposal in http://postgr.es/m/CA+TgmoZQyVxnRU--4g2bJonJ8RyJqNi2CHpy-=nwwBTNpAj71A@mail.gmail.com seems like it can cover both cases. -- Robert Haas EDB: http://www.enterprisedb.com
On Mon, Sep 30, 2024 at 5:50 AM Andrei Lepikhov <lepihov@gmail.com> wrote: > Being flexible, this approach is less invasive. Now, I use it to > implement heuristics demanded by clients for cases when the estimator > predicts only one row - usually, it means that the optimiser > underestimates cardinality. For example, in-place switch-off of NestLoop > if it uses too many clauses, or rules to pick up index scan if we have > alternative scans, each of them predicts only one tuple. > > Positive outcomes includes: we don't alter path costs; extension may be > sure that core doesn't remove path from the list if the extension > forbids it. > > In attachment - hooks for add_path and add_partial_path. As you can see, > because of differences in these routines hooks also implemented > differently. Also the compare_path_costs_fuzzily is exported, but it is > really good stuff for an extension. I agree that this is more flexible, but it also seems like it would be a lot more expensive. For every add_path() or add_partial_path() call, you'll have to examine the input path and decide what you want to do with it. If you want to do something like avoid nested loops with materialization, you'll need to first check the top-level node, and then if it's a nested loop, you have to check the inner subpath to see if it's a Materialize node. I'm not completely against having something like this; I think there are cases where something along these lines is the only way to achieve some desired objective. But I don't think this kind of hook should be the primary way for extensions to control the planner; it seems too low-level to me. -- Robert Haas EDB: http://www.enterprisedb.com
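[As a concrete illustration of the check described above, a hook body along these lines could veto nested loops whose inner side is a Materialize node. The hook name and the return-false-to-reject convention are assumptions based on the proposed, uncommitted patch; the path node types are real.]

#include "postgres.h"
#include "nodes/pathnodes.h"

/*
 * Sketch of an add_path-time filter: reject any nested loop whose inner
 * input is a Materialize path, and let everything else through.
 */
static bool
reject_materialized_nestloops(RelOptInfo *parent_rel, Path *new_path)
{
	(void) parent_rel;			/* unused in this simple policy */

	if (IsA(new_path, NestPath))
	{
		JoinPath   *jp = (JoinPath *) new_path;

		if (IsA(jp->innerjoinpath, MaterialPath))
			return false;		/* tell add_path() to discard this path */
	}

	return true;				/* consider the path normally */
}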
On 10/4/24 01:35, Robert Haas wrote:
> On Mon, Sep 30, 2024 at 5:50 AM Andrei Lepikhov <lepihov@gmail.com> wrote:
>> In attachment - hooks for add_path and add_partial_path. As you can see,
>> because of differences in these routines hooks also implemented
>> differently. Also the compare_path_costs_fuzzily is exported, but it is
>> really good stuff for an extension.
>
> I agree that this is more flexible, but it also seems like it would be
> a lot more expensive. For every add_path() or add_partial_path() call,
> you'll have to examine the input path and decide what you want to do
> with it. If you want to do something like avoid nested loops with
> materialization, you'll need to first check the top-level node, and
> then if it's a nested loop, you have to check the inner subpath to see
> if it's a Materialize node.

I agree, but as you can see, the initial cheap test for a NestLoop already avoids any further checks for the many calls involving baserels, upper rels, and Hash and Merge joins, which relieves the issue.

> I'm not completely against having something like this; I think there
> are cases where something along these lines is the only way to achieve
> some desired objective. But I don't think this kind of hook should be
> the primary way for extensions to control the planner; it seems too
> low-level to me.

That is true. Maybe I was unclear, but I usually use it in addition to planner_hook or the pathlist hooks, to have some guarantee, or at least a chance to do something, if another path turns out to be much better and is about to displace my path. Sometimes it is just a way to remove a path from the pathlist without playing games with the costs of my path. I spent some time studying the pg_hint_plan extension to understand why extensions end up copying so much core code, and IMO Michael has the most to say here.

--
regards, Andrei Lepikhov
On 10/10/24 23:51, Robert Haas wrote:
> On Wed, Sep 18, 2024 at 11:48 AM Robert Haas <robertmhaas@gmail.com> wrote:
> 1. If you want to specify in-query hints using comments, how does your
> extension get access to the comments?

Having designed two features [1,2] that do roughly what pg_hint_plan does, but based on real cardinalities gathered from previous executions, I can say the most annoying problem is hinting subqueries and CTEs. Sometimes you want to hint the top-level query and not touch the subquery, and vice versa. Sometimes users are surprised when a flattened subquery ends up affected by a hint. So the key property to design in advance is an identification system. I have chosen hash-based identification, where each RelOptInfo gets a hash computed over its relids, its restrictions, and the hashes of the underlying RelOptInfos. For core that may seem inappropriate, but either way, some additional information must be pushed into the subquery to let it identify itself and decide which hint applies to it.

[1] https://github.com/postgrespro/aqo
[2] https://postgrespro.com/docs/enterprise/16/realtime-query-replanning

--
regards, Andrei Lepikhov
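[For concreteness, a signature along those lines might be computed roughly as sketched below. The function name, the use of nodeToString() to fingerprint the quals, and the way child hashes are folded in are assumptions for illustration, not anything taken from AQO.]

#include "postgres.h"
#include "common/hashfn.h"
#include "nodes/bitmapset.h"
#include "nodes/pathnodes.h"

/*
 * Sketch: combine the rel's relids, a fingerprint of its quals, and the
 * signatures of its input rels into one hash.  Collisions are possible,
 * so this identifies a RelOptInfo only probabilistically, as noted above.
 */
static uint32
reloptinfo_signature(RelOptInfo *rel, uint32 outer_sig, uint32 inner_sig)
{
	uint32		h = bms_hash_value(rel->relids);
	ListCell   *lc;

	foreach(lc, rel->baserestrictinfo)
	{
		RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);
		char	   *s = nodeToString(rinfo->clause);

		/* order-dependent, which is fine as long as the list order is stable */
		h = hash_combine(h, hash_bytes((const unsigned char *) s, strlen(s)));
		pfree(s);
	}

	/* for a join rel, fold in the signatures of the two input rels */
	h = hash_combine(h, outer_sig);
	return hash_combine(h, inner_sig);
}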
Hi Robert,
On Thu, Oct 10, 2024 at 6:52 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Sep 18, 2024 at 11:48 AM Robert Haas <robertmhaas@gmail.com> wrote:
> Still, I think it's a pretty useful starting point. It is mostly
> enough to give you control over join planning, and if combined with
> similar work for scan planning, I think it would be enough for
> pg_hint_plan. If we also got control over appendrel and agg planning,
> then you could do a bunch more cool things.
> Here's a new set of patches where I added a similar mechanism for scan
> type control. See the commit message for some notes on limitations of
> this approach. In the absence of better ideas, I'd like to proceed
> with something along the lines of 0001 and 0002.
>
> I upgraded the hint_via_alias contrib module (which is not intended
> for commit, it's just a demo) so that it can hint either scan type or
> join type. I think this is sufficient to demonstrate that it's quite
> easy to use hooks to leverage this new infrastructure.
Thank you! I've played a tiny bit with 'hint_via_alias', and IMHO this is a step in a good direction.
> In fact, the
> biggest thing I'm unhappy about right now is the difficulty of
> providing the hooks with any sort of useful information. I don't think
> it should be the job of this patch set to solve that problem, but I do
> think we should do something about it. The problem exists on two
> levels:
>
> 1. If you want to specify in-query hints using comments, how does your
> extension get access to the comments? [..] Still, it's not clear what other approach you could
> adopt.
No, I don't think the ability to influence the optimizer should be tied to SQL comments as the "vehicle" for transferring that information, so -1 for spending any of your time discussing it. Rationale: as you note, such an "interface" is quirky and troublesome even for users. IMHO comments are mostly just used as a way to run experiments; nobody in touch with reality would embed hinted queries in comments inside a production application if they had another choice, for multiple reasons (it's better to keep that control inside the database, since performance is a function of time [and of PostgreSQL version]; the ORM might not even be able to do it in the first place; people may not have access to the source code). You could just assume we have "SET extension.influence_optimizer='SeqScan(t)'" and be done with it as far as production troubleshooting goes. That's better because it's easier to use during experiments -- one does not even need to launch an editor to modify the query. For example, I would find it much faster to iterate in psql with a loop of SET + `\i query.sql` rather than re-editing a query that is often dozens of KB long. And there's even a logon trigger today, so it could be (ab)used to SET that GUC only under specific conditions (e.g. only for a specific application_name, which could even be forced by the pgjdbc driver -- jdbcurl?ApplicationName=enable_this_workaround_just_here).
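[As a rough sketch of that idea, an extension could define such a GUC with the standard custom-GUC machinery. The GUC name below just follows the example above; parsing the hint string and applying it from a planner hook is left out.]

#include "postgres.h"
#include "fmgr.h"
#include "utils/guc.h"

PG_MODULE_MAGIC;

/* e.g. SET extension.influence_optimizer = 'SeqScan(t)'; */
static char *influence_optimizer = NULL;

void
_PG_init(void)
{
	DefineCustomStringVariable("extension.influence_optimizer",
							   "Planner hints to apply in this session.",
							   NULL,
							   &influence_optimizer,
							   "",
							   PGC_USERSET,
							   0,
							   NULL, NULL, NULL);
}

[A planner hook, or the hooks proposed earlier in this thread, could then read that value, decide whether it applies to the current query or queryId, and adjust path generation accordingly.]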
The issue, however, is how you bind such influence to just one specific query without changing the application in the long run. My take is that we should use compute_query_id (a hash), and the extension should then allow something along the lines of a mapping (queryId <-> influence_optimizer), or it could even provide `ALTER SQL <queryId> SET influence_optimizer='SeqScan(t)'`. Users could take that hash from %Q in log_line_prefix.
> 2. If you want a certain base relation or join relation to be treated
> in a certain way, how do you identify it? You might think that this is
> easy because, even when a query contains multiple references to a
> relation with the same name, or identical aliases in different parts
> of the query, EXPLAIN renames them so they have disjoint names. What
> would be nice is if you could feed those names back into your
> extension and use them as a way of specifying what behavior you want
> where. But that doesn't work, because what actually happens is that
> the plan can contain duplicated aliases, and then when EXPLAIN
> deparses it using ruleutils.c, that's when we rename things so that
> they're distinct. This means that, at the time we're planning, we
> don't yet know what name EXPLAIN will end up giving to any relation,
> which means we can't use the names that EXPLAIN produced for an
> earlier plan for the same query to associate behaviors with relations.
>
> I wonder if we could think about reversing the order of operations
> here and making it so that we do the distinct-ification during parse
> analysis or maybe early in planning, so that the name you see EXPLAIN
> print out is the real name of that thing and not just a value invented
> for display purposes.
This, if it's possible? Or simply a counter, numbering the plan operations? Or Andrei's idea of using hashing?
> So again, I am definitely not saying that these patches get us all the
> way to where we should be -- not in terms of the ability to control
> the plan, and definitely not in terms of giving extensions all the
> information they need to be effective. But if we insist on a perfect
> solution before doing anything, we will never get anywhere, and I
> personally believe these are going in a useful direction.
Even as it stands today, v4-0002 would be better to have than nothing (well, other than pg_hint_plan), as it looks to me like the most frequent workaround for optimizer issues is to throw 'enable_nestloop = no' into the mix. Having the ability to just drop a fixproblem.so into session_preload_libraries that uses strstr()/regex() to match a specific query and disable nested loops just there seems completely achievable, and it has much better granularity than targeting whole sessions with SET.
-Jakub Wartak.
On Mon, Oct 14, 2024 at 6:02 AM Jakub Wartak <jakub.wartak@enterprisedb.com> wrote:
>> I wonder if we could think about reversing the order of operations
>> here and making it so that we do the distinct-ification during parse
>> analysis or maybe early in planning, so that the name you see EXPLAIN
>> print out is the real name of that thing and not just a value invented
>> for display purposes.
>
> This, if it's possible? Or simply a counter, numbering the plan operations? Or Andrei's idea of using hashing?

I spent some time looking at this. I think everything that we might ever want to do here is possible, but what I unfortunately couldn't find is a place to do it where it would be more-or-less free.

In parse analysis, we do detect certain kinds of namespace collisions. For example, "SELECT * FROM x, x" or "SELECT * FROM x, y x" will fail with ERROR: table name "x" specified more than once. Similarly, "SELECT * FROM generate_series(1,2), x generate_series" will fail with ERROR: table name "generate_series" specified more than once, even though generate_series is not, strictly speaking, a table name. From this you might infer that each alias can be used only once, but it turns out that's not entirely correct. If you say "SELECT * FROM x, q.x" that is totally fine even though both relations end up with the alias "x". There's a specific exception in the code to allow this case, when the aliases are the same but they refer to different relation OIDs, and this is justified by reference to the SQL standard. Furthermore, "SELECT * FROM (x JOIN y ON true) x, y" is totally fine even though there are two x's and two y's. The fact that the parenthesized part has its own alias makes everything inside the parentheses a separate scope from the names outside the parentheses, so there's no name conflict. If you delete the "x" just after the closing parenthesis then it's all a single scope and you get a complaint about "y" being used twice. In this example, we're allowed to have collisions even though there are no subqueries, but it is also true that each subquery level gets its own scope e.g. "SELECT * FROM (SELECT * FROM x) x" is permitted.

What I think all this means is that piggy-backing on the existing logic here doesn't look especially promising. If we're trying to identify a specific usage of a specific relation inside of a query, we want global uniqueness across the whole query, but this isn't even local uniqueness within a certain query level -- it's uniqueness within a part of a query level with a special carve-out for same-named tables in different schemas. Just to drive the point home, the duplicate checks are done with two nested for loops, a good old-fashioned O(n^2) algorithm, instead of something like entering everything into a hash table and complaining if the entry already exists. We're basically relying on the fact that there won't be very many items at a single level of a FROM-list for this to have reasonable performance; and scaling it even as far as the whole query sounds like it might be dubious. Maybe it's possible to rewrite this somehow to use a hash table and distinguish the cases where it shouldn't complain, but it doesn't look like a ton of fun.

Once we get past parse analysis, there's really nothing that cares about duplicate names, as far as I can see.
The planner is perfectly happy to work on multiple instances of the same table or of different tables with the same alias, and it really has no reason to care about what anything would look like if it were deparsed and printed out. I suppose this is why the code works the way it does: by postponing renaming until somebody actually does EXPLAIN or some other deparsing operation, we avoid doing it when it isn't "really needed". Aside from this nudge-the-planner use case there's no real reason to do anything else.

In terms of solving the problem, nothing prevents us from installing a hash table in PlannerGlobal and using that to label every RangeTblEntry (except unnamed joins, probably) with a unique alias. My tentative idea is to have the hash table key be a string. We'd check whether the table alias is of the form string+'_'+positive_integer_without_a_leading_zero. If so, we'd look up the string in the hash table, else the entire alias. The hash table entry would tell us which integers were already used for that string, so then we could arrange to use one that isn't used yet, essentially re-aliasing tables wherever collisions occur. However, in the "normal" case where the query plan is not being printed and no query-plan-frobbing extensions are in use, this effort is all wasted. Maybe there's a way to do this kind of work only on demand, but it seems to be somewhat complicated by the way we handle flattening the range table: initially, every subquery level gets its own range table, and at the end of planning, those range tables are all collapsed into one. This means that depending on WHEN we get asked for information about the re-aliased table names, the information is in a different place. Uggh.

One could also argue that this sort of functionality doesn't belong in core at all - that it's the problem of an extension that wants to frob the planner behavior to figure out how to identify the rels. But I think that is short-sighted. I think what people will want to do is have the output of EXPLAIN tell them how they can identify a rel for frobbing purposes. If not that, then what?

> Even as it stands today, v4-0002 would be better to have than nothing (well, other than pg_hint_plan), as it looks to me
> like the most frequent workaround for optimizer issues is to throw 'enable_nestloop = no' into the mix. Having the ability
> to just drop a fixproblem.so into session_preload_libraries that uses strstr()/regex() to match a specific query and disable
> nested loops just there seems completely achievable, and it has much better granularity than targeting whole sessions with SET.

Thanks for the +1.

--
Robert Haas
EDB: http://www.enterprisedb.com
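[For what it's worth, the alias-splitting part of that tentative idea could be as simple as the sketch below; the entry struct, the function name, and the exact rule for recognizing a '_N' suffix are all hypothetical.]

#include "postgres.h"

/* Hypothetical entry for a PlannerGlobal hash table keyed by alias base name. */
typedef struct AliasUseEntry
{
	char		base[NAMEDATALEN];	/* alias with any '_N' suffix stripped */
	int			max_seq;			/* highest integer suffix seen so far */
} AliasUseEntry;

/*
 * Split "foo_3" into base "foo" and sequence 3; a plain "foo", or a suffix
 * with a leading zero, counts as sequence 0 with the whole alias as the base.
 */
static int
split_alias(const char *alias, char *base)
{
	const char *us = strrchr(alias, '_');

	if (us != NULL && us[1] >= '1' && us[1] <= '9' &&
		strspn(us + 1, "0123456789") == strlen(us + 1))
	{
		int			len = Min(us - alias, NAMEDATALEN - 1);

		memcpy(base, alias, len);
		base[len] = '\0';
		return atoi(us + 1);
	}

	strlcpy(base, alias, NAMEDATALEN);
	return 0;
}

[Looking the base up in that hash table and bumping max_seq whenever a collision is found would then yield a stable, unique alias for each RangeTblEntry.]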
Hi Andrei,
On Fri, Oct 11, 2024 at 8:21 AM Andrei Lepikhov <lepihov@gmail.com> wrote:
> On 10/10/24 23:51, Robert Haas wrote:
>> On Wed, Sep 18, 2024 at 11:48 AM Robert Haas <robertmhaas@gmail.com> wrote:
>> 1. If you want to specify in-query hints using comments, how does your
>> extension get access to the comments?
>
> Having designed two features [1,2] that do roughly what pg_hint_plan does,
> but based on real cardinalities gathered from previous executions, I can
> say the most annoying problem is hinting subqueries and CTEs. Sometimes you
> want to hint the top-level query and not touch the subquery, and vice
> versa. Sometimes users are surprised when a flattened subquery ends up
> affected by a hint. So the key property to design in advance is an
> identification system.
>
> I have chosen hash-based identification, where each RelOptInfo gets a hash
> computed over its relids, its restrictions, and the hashes of the
> underlying RelOptInfos. For core that may seem inappropriate.
Out of curiosity, why do you think it would be inappropriate to do so in core? Maybe it is something similar to compute_query_id, which can also be computed externally. (I remember Tom arguing that query_id should be computed externally by an extension, because each extension may want to fingerprint queries differently, but I cannot find a link to that discussion now or recall more details.)
-J.
On 18/10/2024 14:56, Jakub Wartak wrote:
> Hi Andrei,
>
> On Fri, Oct 11, 2024 at 8:21 AM Andrei Lepikhov <lepihov@gmail.com> wrote:
> Out of curiosity, why do you think it would be inappropriate to do so in
> core? Maybe it is something similar to compute_query_id, which can also be
> computed externally.

Generally, a hash value doesn't 100% guarantee unique identification of a node. Also, a RelOptInfo corresponds to a subtree of the final plan, and sometimes it takes work to find which node in the partially executed plan corresponds to a specific row-count estimate made during selectivity estimation. Remember parameterised paths - you would need to attach a signature to each path. So it is not a fully strict method. If you are interested, I can perhaps explain the method a little more at some meetup.

> (I remember Tom arguing that query_id should be computed externally by an
> extension, because each extension may want to fingerprint queries
> differently, but I cannot find a link to that discussion now or recall
> more details.)

I argued a lot for allowing extensions to provide separate query_ids and came up with a patch a year ago. But it needs more debate on use cases - my use cases come primarily from enterprise-grade closed extensions, and that doesn't convince anyone. Also, I think any query_id should be generated the same way the core one is. Does it seem like an extensible Perl script?

--
regards, Andrei Lepikhov
On Sat, Oct 19, 2024 at 6:00 AM Andrei Lepikhov <lepihov@gmail.com> wrote: > Generally, a hash value doesn't 100% guarantee the uniqueness of a node > identification. Also, RelOptInfo corresponds to a subtree in the final > plan, and sometimes, it takes work to find which node in the partially > executed plan corresponds to this specific estimation on row number > during selectivity estimation. Remember parameterised paths - you should > attach some signature for each path. So, it is not fully strict method. > If you are interested, I can perhaps explain the method a little bit > more at some meetup. Yeah, I agree that this is not the best method. While it's true that you could get a false match in case of a hash value collision, IMHO the bigger problem is that it seems like an expensive way of determining something that we really should know already. If the user types the same query, mentioning the same relations, in the same order, with the same constructs around them, it's hard to believe that hashing is the cheapest way of matching up the old and new ones. I'm not sure exactly what we should do instead, but it feels like we more or less have this information during parsing and then we lose track of it as the query goes through the rewrite and planning phases. -- Robert Haas EDB: http://www.enterprisedb.com
On 10/23/24 15:05, Robert Haas wrote: > On Sat, Oct 19, 2024 at 6:00 AM Andrei Lepikhov <lepihov@gmail.com> wrote: >> Generally, a hash value doesn't 100% guarantee the uniqueness of a node >> identification. Also, RelOptInfo corresponds to a subtree in the final >> plan, and sometimes, it takes work to find which node in the partially >> executed plan corresponds to this specific estimation on row number >> during selectivity estimation. Remember parameterised paths - you should >> attach some signature for each path. So, it is not fully strict method. >> If you are interested, I can perhaps explain the method a little bit >> more at some meetup. > > Yeah, I agree that this is not the best method. While it's true that > you could get a false match in case of a hash value collision, IMHO > the bigger problem is that it seems like an expensive way of > determining something that we really should know already. If the user > types the same query, mentioning the same relations, in the same > order, with the same constructs around them, it's hard to believe that > hashing is the cheapest way of matching up the old and new ones. I'm > not sure exactly what we should do instead, but it feels like we more > or less have this information during parsing and then we lose track of > it as the query goes through the rewrite and planning phases. Parse tree may be implemented with multiple execution plans. Even clauses can be transformed during optimisation (Remember OR -> ANY). Also, the cardinality of a middle-tree join depends on the inner and outer subtrees. Because of that, having a hash on RelOptInfo's relids and restrictions + hashes of child RelOptInfos and carrying it through all other stages up to the end of execution is the most stable approach I know. -- regards, Andrei Lepikhov
On Wed, Oct 23, 2024 at 11:51 AM Andrei Lepikhov <lepihov@gmail.com> wrote: > Parse tree may be implemented with multiple execution plans. Even > clauses can be transformed during optimisation (Remember OR -> ANY). > Also, the cardinality of a middle-tree join depends on the inner and > outer subtrees. Because of that, having a hash on RelOptInfo's relids > and restrictions + hashes of child RelOptInfos and carrying it through > all other stages up to the end of execution is the most stable approach > I know. I'm not saying there's a better way to do it today. I'm saying I think there SHOULD be a better way to do it. -- Robert Haas EDB: http://www.enterprisedb.com