Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> Tom Lane wrote:
>> You'd need the object locks in any case, to be sure things hold still long
>> enough for their dependencies to be examined. It's possible a weaker lock
>> type would suffice, but I'm not sure; we generally don't require exclusive
>> lock on an object to add or remove dependencies on it.
> Of course, using a weak lock could run afoul of somebody changing the
> dependencies underneath. But even using a stronger lock is unlikely to
> give any actual protection: in UI programs (be it GUI admin programs or
> psql), more likely than not many users are going to run a check in one
> transaction, then run the actual drop in a different transaction.

Well, obviously the output would not be authoritative about what might get
dropped in a later drop attempt.  But I'm thinking just about not getting
"cache lookup failed" or similar failures in the function itself.

Also, assuming that we did acquire full-strength locks, that would mean
that if you do

	begin;
	select pg_drop_cascades_to(foo);
	drop foo cascade;
	commit;

then in fact the function output WOULD be authoritative about what would
get dropped in the second step. That seems like a useful property to
have, even if there are lots of production scenarios where you'd not want
to hold the locks long enough for a human to eyeball the list. You
could imagine for instance an application quickly verifying that the
SELECT result matches a previously vetted list before barging ahead
with the DROP. (Right now, you can approximate that sort of "safe
drop" by looking at the NOTICE output before committing --- but as you
noted, that only works up to 100 dependencies.)
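
As a sketch of that application-side "safe drop" check: the piece that runs between the SELECT and the DROP is just a comparison of the server-reported cascade targets against a previously vetted list. The database round-trips are elided here (pg_drop_cascades_to is still hypothetical), so this only illustrates the decision logic, in Python for concreteness:

```python
# Sketch of the "safe drop" pattern described above: inside one
# transaction, fetch the list of objects the (hypothetical)
# pg_drop_cascades_to() says would be dropped, compare it against a
# previously vetted list, and only issue the DROP CASCADE if nothing
# unexpected shows up.  The SELECT/DROP round-trips are omitted; only
# the comparison step is shown.

def safe_to_drop(reported, vetted):
    """Return (ok, unexpected).  ok is True only when every object the
    server reports as a cascade target is on the vetted list; unexpected
    lists any targets that were not vetted (sorted for stable output)."""
    unexpected = sorted(set(reported) - set(vetted))
    return (not unexpected, unexpected)

# Example: the server reports one object we never vetted, so the
# application should roll back instead of issuing the DROP.
ok, extra = safe_to_drop(
    reported=["table bar", "view baz", "trigger t on quux"],
    vetted=["table bar", "view baz"],
)
```

Because the locks acquired by the SELECT are held until commit, the comparison really is authoritative for the DROP in the same transaction, which is what makes this pattern safe.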

			regards, tom lane