Re: Event Triggers: adding information - Mailing list pgsql-hackers
From:           Dimitri Fontaine
Subject:        Re: Event Triggers: adding information
Msg-id:         m238xptd28.fsf@2ndQuadrant.fr
In response to: Re: Event Triggers: adding information (Robert Haas <robertmhaas@gmail.com>)
Responses:      Re: Event Triggers: adding information
                Re: Event Triggers: adding information
List:           pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> OK, but can we lay the issue of a *normalized* command string to the
> side for just one minute, and talk about exposing the *raw* command
> string?  It seems to me that this would be (1) very easy to do, (2)
> reasonable to slip into 9.3, and (3) useful to some people.  Arguably,
> the normalized command string would be useful to MORE people, but I
> think the raw command string is not without its uses, and it's at
> least one order of magnitude less work.

My understanding is that if the command string we give to event triggers
is ambiguous (sub-object names, schema qualifications, etc), it becomes
useless for logical replication. I'll leave it to the consumers of that
feature to speak up now.

[… no access to tuples in per-statement triggers …]

> I agree that was a reasonable trade-off.  But it would be quite
> something else again if someone were to propose the idea of having NEW
> and OLD available for each-statement triggers, but only give the
> information about the first row affected by the statement, rather than
> all of them.  I think that proposal would quite rightly be rejected,
> and I think it's morally equivalent to what you're proposing, or what
> I understood you to be proposing, at any rate.

No it's not.

+	/*
+	 * we only support filling-in information for DROP command if we only
+	 * drop a single object at a time, in all other cases the ObjectID,
+	 * Name and Schema will remain NULL.
+	 */
+	if (list_length(stmt->objects) != 1)

What the current patch implements is filling in the object id, name and
schema with NULL when the target is anything other than a single object.
I did that when I realized we have a precedent with statement triggers,
and that we might share the implementation of the "record set variable"
facility for PLs here.

> I'm not sure, either, but I think that exposing things as tables is a
> neat idea.  I had imagined passing either JSON or arrays, but tables
> would be easier to work with, at least for PL/pgsql triggers.

Any idea about a practical implementation that we can do in the 9.3
timeframe? Barring that, my proposal is to allow ourselves not to provide
the information at all in that case in 9.3, and to complete the feature in
the 9.4 timeframe, possibly with OLD and NEW relations for per-statement
triggers, if it turns out, as I think it will, that we can re-use the same
piece of PL/pgSQL-side framework in both cases.

>> The only commands that will target more than one object are:
>>
>>  - DROP, either in the command or by means of CASCADE;
>>  - CREATE SCHEMA with its PROCESS_UTILITY_SUBCOMMAND usage;
>
> CREATE TABLE can also create subsidiary objects (indexes,
> constraints); and there could be more things that do this in the
> future.

Subsidiary objects are not the same thing at all as a command that
targets more than one object, and the difference is user visible.

>> The CASCADE case is something else entirely, and we need to be very
>> careful about its locking behavior.  Also, in the CASCADE case I can
>> perfectly agree that we're not talking about a ddl_something event any
>> more, and that in 9.4 we will be able to provide a sql_drop generic
>> event that will know about that.
>
> I've felt from the beginning that we really need that to be able to do
> anything worthwhile with this feature, and I said so last year around
> this time.  Meanwhile, absent that, if we put something in here that
> only handles a subset of cases, the result will be DDL replication
> that randomly breaks.
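To make concrete what a trigger (and hence a DDL replication consumer)
would see with the patch as it stands, here is a minimal PL/pgSQL sketch.
TG_OBJECTID, TG_OBJECTNAME and TG_SCHEMANAME are the variables this patch
proposes, not existing PostgreSQL API, and per the snippet above they stay
NULL as soon as a DROP targets more than one object:

    -- Illustration only: TG_OBJECTID, TG_OBJECTNAME and TG_SCHEMANAME are
    -- the per-object variables proposed by this patch; they remain NULL
    -- whenever the DROP targets more than one object (or cascades).
    CREATE OR REPLACE FUNCTION log_drop_info()
     RETURNS event_trigger
     LANGUAGE plpgsql
    AS $$
    BEGIN
      RAISE NOTICE 'tag=%, object=%.% (oid %)',
        TG_TAG, TG_SCHEMANAME, TG_OBJECTNAME, TG_OBJECTID;
    END;
    $$;

    CREATE EVENT TRIGGER log_drop_info
        ON ddl_command_start
      WHEN TAG IN ('DROP TABLE', 'DROP VIEW')
      EXECUTE PROCEDURE log_drop_info();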
So we have two proposals here:

 - Have the cascading drop call back into process utility with a new
   context value of PROCESS_UTILITY_CASCADE and its parse node, wherein
   you only stuff the ObjectAddress, and provide event triggers support
   for this new cascade command;

 - Implement a new event called "sql_drop" where you provide the same
   amount of information as in a ddl_command event (same lookups, etc),
   but without any parse tree nor, I suppose, the command string that the
   user would have had to type to drop just that object.

You objected to the first on modularity-violation grounds, and to the
second on development-cycle timing grounds. And now you're saying that
because we don't have a practical solution, something is apparently dead,
but I'm not sure what. Please help me decipher your train of thought and
your conclusions.

> That doesn't bother me.  If the facility is useful enough that people
> want it in other PLs, they can submit patches to add it.  I don't
> believe there's any requirement that we must support this feature in
> every PL before we can think about releasing.

Ok, I won't bother either, then.

> I'm not sure I can, but I think that the design I'm proposing gives a
> lot of flexibility.  If we create a pg_parse_tree type with no input
> function and an output function that merely returns a dummy value, and
> we pass a pg_parse_tree object to PL/pgsql event triggers, then it's a
> small win even if it offers no accessors at all - because somebody
> could put whatever accessor functions they want in a contrib module
> without waiting for 9.4.

That's not solving the problem we want to solve, though. You can already
do what you describe with the code we have today, you just have to write
the extension in C.

> The point is - we wouldn't expose the whole parse tree.  Rather, we'd
> expose a parse-tree as an opaque object, and give the user functions
> that would expose specific bits of it.  Those bits could grow over
> time without needing any changes to PLs or existing trigger functions,
> and they wouldn't change if the parse tree internals changed, and
> users could extend the amount of information exposed via contrib
> functions (at their own risk, of course).

We already have that, barring the extension that grovels through the
parse tree. The benefit of maintaining that code as an extension depends
on a stable enough API from the backend, so that you can provide the same
behaviour to users across major versions and, with pg_upgrade, the same
storage format.

Here I don't see how to reach those goals without anything new in core,
given that the main requirement from core about the parse tree is to be
able to whack it around in any release (even minor) without concerns
about API or ABI stability.

>> I'll hand you another one then: have INSERT INTO create the destination
>> TABLE when it does not exist, for the RAD users out there. You know,
>> those schema-less NoSQL guys…
>
> Wow, you don't think small.  I'm not sure how hard that would be, but
> I agree it would be cool.  I think the trigger might have to be
> written in C to get the behavior people would likely want without an
> unreasonable amount of screwing around.

What I'm chasing here is being able to extend DDL in user land, without
even having to code in C. We'll get there.
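Even with only the command tag that the already-committed event trigger
support exposes, a first level of that user-land extension is possible
today; a minimal, illustrative sketch (the names are mine), using only
ddl_command_start and the TG_TAG variable:

    -- Refuse a given command outright, entirely from user land, with
    -- nothing more than the command tag.
    CREATE OR REPLACE FUNCTION no_drop_table()
     RETURNS event_trigger
     LANGUAGE plpgsql
    AS $$
    BEGIN
      RAISE EXCEPTION 'command % is disabled here', TG_TAG;
    END;
    $$;

    CREATE EVENT TRIGGER disallow_drop_table
        ON ddl_command_start
      WHEN TAG IN ('DROP TABLE')
      EXECUTE PROCEDURE no_drop_table();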
A practical example that we saw on this list very recently is addressing
the trusted bits for PLs: install an event trigger on CREATE FUNCTION and
have it run a SECURITY DEFINER function implemented in PL/pgSQL, one that
does some checks and then runs the CREATE FUNCTION for you (taking care of
disabling its own event trigger first, to avoid infinite recursion).

Of course, to get there you need INSTEAD OF event triggers, and we had to
remove them from my patch series last year because we were not in a
position to assess that the call points in each command were safe enough.
Once the current framework is in place, we will be able to get back to
INSTEAD OF event triggers. We'll get there.

And, well, yes, when you embark on a side project spanning 2 or 3
development cycles, you need to have something of a vision, I guess…
Event Triggers are but a building block toward achieving mine. That
feature appeared to me to be the Right Way™ to add a generic facility to
PostgreSQL that you can use for a number of different use cases, killing
many birds with a single stone. Well, a menhir, maybe.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support