On Wed, Sep 5, 2018 at 1:05 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Would expanding this a bit further really be that noticeable?
>
> Frankly, I think it would be not so much "noticeable" as "disastrous".
>
> Making the overhead tolerable would require very large compromises
> in coverage, perhaps like "we'll only lock during DDL not DML".
> At which point I'd question why bother. We've seen no field reports
> (that I can recall offhand, anyway) that trace to not locking these
> objects.
I think that would actually be a quite principled separation. If your
DML fails with a strange error, that sucks, but you can retry it and
the problem will go away -- it will fail in some more sane way, or it
will work. On the other hand, if you manage to create an object in a
no-longer-existing schema, you now have a database that can't be
backed up by pg_dump, and the only way to fix it is to run manual
DELETE commands against the PostgreSQL catalogs. It's not even easy
to figure out what you need to DELETE, because there can be references
to pg_namespace.oid from zillions of other catalogs -- pg_catcheck
will tell you, but that's a third-party tool which many users won't
have and won't know that they should use.
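
To illustrate the scale of that manual cleanup, here's a sketch of the
sort of catalog probe involved -- this covers only pg_class, and
analogous queries would be needed for pg_proc, pg_type, pg_operator,
and every other catalog with a namespace column:

    -- Find relations whose schema row has vanished from pg_namespace.
    SELECT c.oid, c.relname, c.relnamespace
    FROM pg_class c
    WHERE NOT EXISTS
        (SELECT 1 FROM pg_namespace n WHERE n.oid = c.relnamespace);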
I do agree with you that locking every schema referenced by any object
in a query would suck big time. At a minimum, we'd need to extend
fast-path locking to cover AccessShareLock on schemas.
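
For reference, you can already see which locks went through the
fast-path machinery by checking the fastpath column of pg_locks:

    -- Show which of this backend's locks took the fast path;
    -- currently only weak relation locks are eligible (fastpath = true).
    SELECT locktype, relation::regclass, mode, fastpath
    FROM pg_locks
    WHERE pid = pg_backend_pid();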
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company