Re: 'tuple concurrently updated' error for alter role ... set - Mailing list pgsql-hackers

From Bruce Momjian
Subject Re: 'tuple concurrently updated' error for alter role ... set
Date 2011-05-13 13:22
Msg-id 201105131322.p4DDMRZ16948@momjian.us
In response to Re: 'tuple concurrently updated' error for alter role ... set  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Is this a TODO?  I don't see it on the TODO list.

---------------------------------------------------------------------------

Robert Haas wrote:
> On Fri, May 13, 2011 at 12:56 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > BTW, I thought a bit more about why I didn't like the initial proposal
> > in this thread, and the basic objection is this: the AccessShareLock or
> > RowExclusiveLock we take on the catalog is not meant to provide any
> > serialization of operations on individual objects within the catalog.
> > What it's there for is to interlock against operations that are
> > operating on the catalog as a table, such as VACUUM FULL (which has to
> > lock out all accesses to the catalog) or REINDEX (which has to lock out
> updates).  So the catalog-level lock is the right thing and shouldn't be
> changed.  If we want to interlock updates of individual objects then we
> > need a different locking concept for that.
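
For context, the path that hits the error looks roughly like the fragment
below, simplified from the 9.x-era AlterSetting()/pg_db_role_setting code
with the tuple lookup elided.  The RowExclusiveLock is exactly the
catalog-as-a-table lock Tom describes; nothing in it serializes two
sessions reaching simple_heap_update() on the same tuple:

    Relation    rel;
    HeapTuple   oldtup, newtup;

    /* Locks the catalog as a table; two ALTER ROLE ... SET commands
     * on the same role both get through here without conflict. */
    rel = heap_open(DbRoleSettingRelationId, RowExclusiveLock);

    oldtup = ...;               /* look up the role's settings tuple */
    newtup = ...;               /* build the replacement tuple */

    /* The slower of two concurrent updaters of the same tuple fails
     * here with "tuple concurrently updated". */
    simple_heap_update(rel, &oldtup->t_self, newtup);
    CatalogUpdateIndexes(rel, newtup);

    heap_close(rel, RowExclusiveLock);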
> 
> Right, I agree.  Fortunately, we don't have to invent a new one.
> There is already locking being done exactly along these lines for
> DROP, COMMENT, and SECURITY LABEL (which is important, because
> otherwise we could leave behind orphaned security labels that would be
> inherited by a later object with the same OID, leading to a security
> problem).  I think it would be sensible, and quite simple, to extend
> that to other DDL operations.
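
The helper involved is, if I recall correctly, the lmgr per-object lock
that the DROP/COMMENT/SECURITY LABEL paths already take.  A sketch of
applying it to ALTER ROLE (the lock mode here is a guess; pg_authid is a
shared catalog, hence LockSharedObject rather than LockDatabaseObject):

    /* Serialize DDL on this one role before touching its catalog rows. */
    LockSharedObject(AuthIdRelationId, roleid, 0, AccessExclusiveLock);

    /* A concurrent ALTER ROLE on the same role now waits here instead
     * of racing ahead and dying in simple_heap_update(). */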
> 
> I think that we probably *don't* want to lock non-table objects when
> they are just being *used*.  We do that for tables (to lock against
> concurrent drop operations) and in some workloads it becomes a severe
> bottleneck.  Doing it for functions and operators would make the
> problem far worse, for no particular benefit.  Unlike tables, there is
> no underlying relation file to worry about, so the worst thing that
> happens is someone continues to use a dropped object slightly after
> it's gone, or the old definition of an object that's been modified.
> 
> Actually, it's occurred to me from time to time that it would be nice
> to eliminate ACCESS SHARE (and while I'm dreaming, maybe ROW SHARE and
> ROW EXCLUSIVE) locks for tables as well.  Under normal operating
> conditions (i.e. no DDL running), these locks generate a huge amount
> of lock manager traffic even though none of the locks conflict with
> each other.  Unfortunately, I don't really see a way to make this
> work.  But maybe it would at least be possible to create some sort of
> fast path.  For example, suppose every backend opens a file and uses
> that file to record lock tags for the objects on which it is taking
> "weak" (ACCESS SHARE/ROW SHARE/ROW EXCLUSIVE) locks on.  Before taking
> a "strong" lock (anything that conflicts with one of those lock
> types), the exclusive locker is required to open all of those files
> and transfer the locks into the lock manager proper.  Of course, it's
> also necessary to nail down the other direction: you have to have some
> way of making sure that the backend can't record in its local file a
> lock that would have conflicted had it been taken in the actual lock
> manager.  But maybe there's some lightweight way we could detect that,
> as well.  For example, we could keep, say, a 1K array in shared
> memory, representing a 1024-way partitioning of the locktag space.
> Each byte is 1 if there are any "strong" locks on objects with that
> locktag in the lock manager, and 0 if there are none (or maybe you
> need a 4K array with exact counts, for bookkeeping).  When a backend
> wants to take a "weak" lock, it checks the array: if it finds a 0 then
> it just records the lock in its file; otherwise, it goes through the
> lock manager.  When a backend wants a "strong" lock, it first sets the
> byte (or bumps the count) in the array, then transfers any existing
> weak locks from individual backends to the lock manager, then tries to
> get its own lock.  Possibly the array operations could be done with
> memory synchronization primitives rather than spinlocks, especially on
> architectures that support an atomic fetch-and-add.  Of course I don't
> know quite how we recover if we try to do one of these "lock
> transfers" and run out of shared memory... and overall I'm hand-waving
> here quite a bit, but in theory it seems like we ought to be able to
> rejigger this locking so that we reduce the cost of obtaining a "weak"
> lock, perhaps at the expense of making it more expensive to obtain a
> "strong" lock, which are relatively rare by comparison.
> 
> <end of rambling digression>
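
The array half of this scheme is easy to model in isolation.  Below is a
toy, self-contained C sketch (C11 atomics; every name in it is invented
for illustration, none of it is backend code) of the 1024-way partition
check.  It deliberately omits the hard parts flagged above: the
per-backend recording of weak locks and the transfer step, including
what happens if the transfer runs out of shared memory.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FP_PARTITIONS 1024      /* the proposed 1K array */

    /* One counter per partition of the locktag space; nonzero means
     * "some strong lock exists here, weak lockers must go through the
     * real lock manager".  (Lives in shared memory in the real design.) */
    static atomic_uint strong_lock_counts[FP_PARTITIONS];

    static unsigned fp_slot(uint64_t locktag_hash)
    {
        return (unsigned) (locktag_hash % FP_PARTITIONS);
    }

    /* Weak lock (ACCESS SHARE / ROW SHARE / ROW EXCLUSIVE): returns 1 if
     * the lock may be recorded backend-locally, 0 if the backend must go
     * through the lock manager proper. */
    static int weak_lock_fast_path_ok(uint64_t locktag_hash)
    {
        return atomic_load(&strong_lock_counts[fp_slot(locktag_hash)]) == 0;
    }

    /* Strong lock: bump the counter first, so any weak locker checking
     * afterwards diverts to the lock manager; only then transfer the
     * previously recorded weak locks and acquire the lock normally. */
    static void strong_lock_acquire(uint64_t locktag_hash)
    {
        atomic_fetch_add(&strong_lock_counts[fp_slot(locktag_hash)], 1);
        /* ... transfer each backend's recorded weak locks, then lock ... */
    }

    static void strong_lock_release(uint64_t locktag_hash)
    {
        atomic_fetch_sub(&strong_lock_counts[fp_slot(locktag_hash)], 1);
    }

    int main(void)
    {
        uint64_t tag = 42;          /* stand-in for a hashed locktag */

        printf("before strong lock: fast path = %d\n",
               weak_lock_fast_path_ok(tag));    /* 1 */
        strong_lock_acquire(tag);
        printf("during strong lock: fast path = %d\n",
               weak_lock_fast_path_ok(tag));    /* 0 */
        strong_lock_release(tag);
        printf("after strong lock:  fast path = %d\n",
               weak_lock_fast_path_ok(tag));    /* 1 */
        return 0;
    }

The ordering in strong_lock_acquire() is the load-bearing part: the
counter bump must be visible before the transfer begins, or a weak
locker could slip a lock into its local file unseen.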
> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> 

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +

