Re: Avoid orphaned objects dependencies, take 3 - Mailing list pgsql-hackers
From | Bertrand Drouvot
---|---
Subject | Re: Avoid orphaned objects dependencies, take 3
Date |
Msg-id | Zk7D2M2pGk1JDkS8@ip-10-97-1-34.eu-west-3.compute.internal
In response to | Re: Avoid orphaned objects dependencies, take 3 (Robert Haas <robertmhaas@gmail.com>)
Responses | Re: Avoid orphaned objects dependencies, take 3
List | pgsql-hackers
Hi,

On Wed, May 22, 2024 at 10:48:12AM -0400, Robert Haas wrote:
> On Wed, May 22, 2024 at 6:21 AM Bertrand Drouvot
> <bertranddrouvot.pg@gmail.com> wrote:
> > I started initially with [1] but it was focusing on function-schema only.
>
> Yeah, that's what I thought we would want to do. And then just extend
> that to the other cases.
>
> > Then I proposed [2] making use of a dirty snapshot when recording the dependency.
> > But this approach appeared to be "scary" and it was still failing to close
> > some race conditions.
>
> The current patch still seems to be using dirty snapshots for some
> reason, which struck me as a bit odd. My intuition is that if we're
> relying on dirty snapshots to solve problems, we likely haven't solved
> the problems correctly, which seems consistent with your statement
> about "failing to close some race conditions". But I don't think I
> understand the situation well enough to be sure just yet.

The reason we are using a dirty snapshot here is to handle the cases where we
are recording a dependency on a referenced object that we are creating at the
same time behind the scenes (for example, creating a composite type while
creating a relation). Without the dirty snapshot, the object we are creating
behind the scenes (the composite type) would not be visible, and we would
wrongly assume that it has been dropped (see the sketch at the end of this
message).

Note that the dirty snapshot is used only when the object is first reported as
"non existing" by the new ObjectByIdExist() function.

> > I think there is 2 cases here:
> >
> > First case: the "conflicting" lock mode is for one of our own lock then LockCheckConflicts()
> > would report this as a NON conflict.
> >
> > Second case: the "conflicting" lock mode is NOT for one of our own lock then LockCheckConflicts()
> > would report a conflict. But I've the feeling that the existing code would
> > already lock those sessions.
> >
> > Was your concern about "First case" or "Second case" or both?
>
> The second case, I guess. It's bad to take a weaker lock first and
> then a stronger lock on the same object later, because it can lead to
> deadlocks that would have been avoided if the stronger lock had been
> taken at the outset.

In the example I shared up-thread it would be the opposite: Session 1 would
take an AccessExclusiveLock on the object before taking an AccessShareLock
during changeDependencyFor().

> Here, it seems like things would happen in the
> other order: if we took two locks, we'd probably take the stronger
> lock in the higher-level code and then the weaker lock in the
> dependency code.

Yeah, I agree.

> That shouldn't break anything; it's just a bit
> inefficient.

Yeah, the second lock is useless in that case (like in the example up-thread).

> My concern was really more about the maintainability of
> the code. I fear that if we add code that takes heavyweight locks in
> surprising places, we might later find the behavior difficult to
> reason about.

I think I understand your concern about code maintainability, but I'm not sure
that adding locks while recording a dependency is that surprising.

> Tom, what is your thought about that concern?

+1

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
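To make the dirty snapshot point more concrete, below is a minimal sketch (not
the actual patch) of what such an existence check could look like.
ObjectByIdExist() is the name used in the thread, but its real signature and
body may differ; the catalog-probe details here are assumptions based on the
usual backend helpers (InitDirtySnapshot(), systable_beginscan(),
get_object_attnum_oid()).

    /*
     * Sketch only: probe a catalog by OID using a dirty snapshot, so that
     * an object created earlier in the same (not yet committed) command is
     * still considered visible.  Illustrative, not the patch's code.
     */
    #include "postgres.h"

    #include "access/genam.h"
    #include "access/htup.h"
    #include "access/skey.h"
    #include "access/stratnum.h"
    #include "access/table.h"
    #include "catalog/objectaddress.h"
    #include "storage/lockdefs.h"
    #include "utils/fmgroids.h"
    #include "utils/snapmgr.h"

    static bool
    ObjectByIdExist(Oid classId, Oid objectId)
    {
        Relation     rel;
        ScanKeyData  key[1];
        SysScanDesc  scan;
        SnapshotData SnapshotDirty;
        bool         found;

        InitDirtySnapshot(SnapshotDirty);

        /* classId is the OID of the catalog the object lives in */
        rel = table_open(classId, AccessShareLock);

        ScanKeyInit(&key[0],
                    get_object_attnum_oid(classId),
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(objectId));

        /* dirty snapshot: also sees rows inserted by in-progress transactions */
        scan = systable_beginscan(rel, InvalidOid, false,
                                  &SnapshotDirty, 1, key);

        found = HeapTupleIsValid(systable_getnext(scan));

        systable_endscan(scan);
        table_close(rel, AccessShareLock);

        return found;
    }

The key point is passing the dirty snapshot rather than a regular MVCC
snapshot, so that a catalog row inserted earlier by the same still-uncommitted
command (the composite type created while creating its relation) is still
treated as existing.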