Re: sql_drop Event Trigger - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: sql_drop Event Trigger
Date:
Msg-id: CA+TgmobgVzxqAgfksEuxTN9NN-vqws8jbo5uJVqtffDKZWgRrw@mail.gmail.com
In response to: Re: sql_drop Event Trigger (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses: Re: sql_drop Event Trigger; Re: sql_drop Event Trigger
List: pgsql-hackers
On Wed, Feb 20, 2013 at 10:48 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> I have added some protections so that these do not fire on undesirable
> events (such as dropping an event trigger); also event triggers and
> other object types are filtered out of pg_dropped_objects, in case
> something like DROP OWNED happens. (So for instance you could get an
> empty pg_dropped_objects if you did DROP OWNED and the mentioned user
> only owns an event trigger). Maybe this needs to be reconsidered.

This is tricky. The whole point of skipping the ddl_command_start/ddl_command_end triggers when the command is one that operates on event triggers is that we don't want a user to install an event trigger that prevents said user from dropping that event trigger afterwards. We're currently safe from that, but with the behavior you describe, we won't be: an event trigger that unconditionally throws an error will block all drops, even of event triggers.

I am inclined to suggest that we go back to an earlier suggestion I made, which Dimitri didn't like, but which I still think might be the best way forward: add a SUSET GUC that disables event triggers. If we do that, then our answer to that problem, for the current event triggers and all we might add in the future, can be: well, if that happens, set the disable_event_triggers GUC and retry the drop. OTOH, if we DON'T do that, then this problem is going to come back up every time we add a new firing point, and it may be really tough to make sure we're secure against this in all cases. If you don't like that suggestion, I'm open to other options, but I think we should try hard to preserve the invariant that a superuser can always drop an event trigger without interference from the event trigger system itself.

> There's also code to re-obtain the list of objects to drop after the
> event trigger functions have run; the second list is compared to the
> first one, and if they differ, an error is raised.

This is definitely an improvement. I am not 100% clear on whether this is sufficient, but it sure seems a lot better than punting. I think there might be a possible failure mode if somebody creates a new object that depends on one of the objects to be dropped while the event trigger is running. Since system catalogs are normally read with SnapshotNow, I think it might be possible to create a situation where the first and second searches return different results even though the trigger hasn't done anything naughty. I believe that I added some locking in 9.2 that will prevent this in the case where you create a table that depends on a concurrently-dropped schema, but there are other cases that have this problem, such as a function being added to a concurrently-dropped schema.

Now, IMHO, that isn't necessarily a show-stopper for this patch, because that same phenomenon can cause catalog corruption as things stand. So in the scenario where an event trigger causes a transaction to fail because of this issue, it will actually be *preventing* catalog corruption. However, if this goes in, it might bump up the priority of fixing some of those locking issues, since it may make them more visible.

> It's rather annoying that performMultipleDeletions and performDeletion
> contain almost exactly the same code. I think maybe it's time to morph
> the latter into a thin layer on top of the former.

Yeah, or at least factor out the code you need for this into its own function.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
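
A minimal sketch of the failure mode described above (the object names are hypothetical, and disable_event_triggers is only the GUC name proposed in this message, not an existing setting):

    CREATE FUNCTION refuse_all_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- Unconditionally error out, so every drop fails while this
        -- trigger is installed -- including drops of event triggers
        -- themselves, if sql_drop is allowed to fire for them.
        RAISE EXCEPTION 'no drops allowed';
    END;
    $$;

    CREATE EVENT TRIGGER block_drops ON sql_drop
        EXECUTE PROCEDURE refuse_all_drops();

    -- With the proposed SUSET GUC, a superuser could still recover:
    --   SET disable_event_triggers = on;
    --   DROP EVENT TRIGGER block_drops;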
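
And a sketch of a sql_drop trigger consuming the dropped-object list, assuming the pg_dropped_objects() set-returning function from the patch under discussion; its exact result columns are not spelled out in this message, so the code just reports whole rows:

    CREATE FUNCTION report_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        -- With event triggers filtered out of pg_dropped_objects, this
        -- loop can legitimately run zero times (e.g. DROP OWNED of a
        -- role that owns only an event trigger).
        FOR obj IN SELECT * FROM pg_dropped_objects()
        LOOP
            RAISE NOTICE 'dropped: %', obj;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER log_drops ON sql_drop
        EXECUTE PROCEDURE report_drops();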