Re: reducing the overhead of frequent table locks - now, with WIP patch - Mailing list pgsql-hackers

From Robert Haas
Subject Re: reducing the overhead of frequent table locks - now, with WIP patch
Msg-id BANLkTin21kDTbm=3FKvDHZmzS2ouk3rmAg@mail.gmail.com
In response to Re: reducing the overhead of frequent table locks - now, with WIP patch  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
List pgsql-hackers
On Mon, Jun 6, 2011 at 8:02 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> On 06.06.2011 07:12, Robert Haas wrote:
>>
>> I did some further investigation of this.  It appears that more than
>> 99% of the lock manager lwlock traffic that remains with this patch
>> applied has locktag_type == LOCKTAG_VIRTUALTRANSACTION.  Every SELECT
>> statement runs in a separate transaction, and for each new transaction
>> we run VirtualXactLockTableInsert(), which takes a lock on the vxid of
>> that transaction, so that other processes can wait for it.  That
>> requires acquiring and releasing a lock manager partition lock, and we
>> have to do the same thing a moment later at transaction end to dump
>> the lock.
>>
>> A quick grep seems to indicate that the only places where we actually
>> make use of those VXID locks are in DefineIndex(), when CREATE INDEX
>> CONCURRENTLY is in use, and during Hot Standby, when max_standby_delay
>> expires.  Considering that these are not commonplace events, it seems
>> tremendously wasteful to incur the overhead for every transaction.  It
>> might be possible to make the lock entry spring into existence "on
>> demand" - i.e. if a backend wants to wait on a vxid entry, it creates
>> the LOCK and PROCLOCK objects for that vxid.  That presents a few
>> synchronization challenges, plus we have to make sure that the
>> backend that's just been "given" a lock knows that it needs to release
>> it, but those seem like they might be manageable problems, especially
>> given the new infrastructure introduced by the current patch, which
>> already has to deal with some of those issues.  I'll look into this
>> further.
>
> At the moment, the transaction with a given vxid acquires an ExclusiveLock on
> the vxid, and anyone who wants to wait for it to finish acquires a
> ShareLock. If we simply reverse that, so that the transaction itself takes
> ShareLock, and anyone wanting to wait on it takes an ExclusiveLock, will this
> fastlock patch bust this bottleneck too?
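
For context, the arrangement Heikki describes is implemented in
src/backend/storage/lmgr/lock.c roughly as follows (a from-memory
sketch, not verbatim source). The transaction takes ExclusiveLock on
its own vxid at startup; a would-be waiter takes, and immediately
releases, ShareLock, which blocks until the holder finishes and drops
its lock:

    void
    VirtualXactLockTableInsert(VirtualTransactionId vxid)
    {
        LOCKTAG     tag;

        /* Build a locktag identifying this transaction's vxid. */
        SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

        /*
         * Held until transaction end; this acquire (and the matching
         * release during lock cleanup at commit/abort) is what hits a
         * lock-manager partition lock for every transaction.
         */
        (void) LockAcquire(&tag, ExclusiveLock, false, false);
    }

    void
    VirtualXactLockTableWait(VirtualTransactionId vxid)
    {
        LOCKTAG     tag;

        SET_LOCKTAG_VIRTUALTRANSACTION(tag, vxid);

        /*
         * ShareLock conflicts with the holder's ExclusiveLock, so this
         * call blocks until that transaction commits or aborts.
         */
        (void) LockAcquire(&tag, ShareLock, false, false);
        LockRelease(&tag, ShareLock, false);
    }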

Not without some further twaddling.  Right now, the fast path only
applies when you are taking a lock < ShareUpdateExclusiveLock on an
unshared relation.  See also the email I just sent on why using the
exact same mechanism might not be such a hot idea.
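
For what it's worth, the eligibility test amounts to something like
this (a sketch of the condition, not necessarily the WIP patch's exact
code); "unshared relation" is expressed by requiring the locktag's
database field to match MyDatabaseId, since locktags for shared
catalogs carry InvalidOid there:

    /*
     * Only weak locks (AccessShare, RowShare, RowExclusive) on ordinary
     * relations in the current database may take the fast path;
     * everything else goes through the main lock table as before.
     */
    #define EligibleForRelationFastPath(locktag, mode) \
        ((locktag)->locktag_type == LOCKTAG_RELATION && \
         (locktag)->locktag_field1 == MyDatabaseId && \
         MyDatabaseId != InvalidOid && \
         (mode) < ShareUpdateExclusiveLock)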

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

