Re: Reducing relation locking overhead - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: Reducing relation locking overhead
Date
Msg-id 1134266138.3567.25.camel@localhost.localdomain
Whole thread Raw
In response to Re: Reducing relation locking overhead  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Reducing relation locking overhead
List pgsql-hackers
On one fine day, Thu, 2005-12-08 at 14:53, Tom Lane wrote:
> Given the discussion so far, it seems likely to me that completely
> concurrent REINDEX is indeed out of reach, and that what we ought to
> be thinking about is what sort of compromise design (ie, partially
> concurrent REINDEX) is reasonable.
> 
> Something that might work is:
> 
> 1. Take ShareUpdateExclusiveLock (this blocks VACUUM and DDL changes),
> then run existing CREATE INDEX code.  The finished index may be missing
> some tuples inserted during the run.
> 
> 2. Commit transaction so that index becomes visible (we assume it's
> marked so that the planner will know not to rely on it).  Continue to
> hold ShareUpdateExclusiveLock so VACUUM doesn't run.
> 
> 3. Attempt to acquire ShareLock (possibly a ConditionalLockAcquire/sleep
> loop instead of just flat-out LockAcquire).  Once we have this we know
> there are no active writer transactions.  Release the lock immediately.
> 
> 4. Make a new scan of the table and insert any rows not already present
> in the index.  (This need not process anything inserted later than step
> 3, because any new transactions will know to insert rows in the index
> anyway.)
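For clarity, steps 1-4 above could be sketched as a toy simulation (illustrative Python, not PostgreSQL code; all names are made up for the example). The key point is that the second scan only has to pick up rows the first pass missed, because later writers see the index and maintain it themselves:

```python
# Hypothetical simulation of the two-pass concurrent index build.

def initial_build(table):
    # Step 1: build from a snapshot of the heap under
    # ShareUpdateExclusiveLock; concurrent writers may add rows
    # that this pass never sees.
    return {row_id: key for row_id, key in table.items()}

def catch_up_scan(table, index):
    # Step 4: a second full scan inserts any rows missing from the
    # index.  Rows inserted after the ShareLock barrier (step 3) are
    # handled by the writers themselves, since the index is visible.
    missing = {rid: key for rid, key in table.items() if rid not in index}
    index.update(missing)
    return len(missing)

# A writer slips a row in between the two passes:
table = {1: 'a', 2: 'b'}
index = initial_build(table)
table[3] = 'c'                 # concurrent insert missed by pass 1
added = catch_up_scan(table, index)
```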

How do you plan to determine "any rows not already present in the index"
without explicitly remembering the start and end snapshots of the
existing CREATE INDEX (SNAP1 and SNAP2 in my proposal)? The end point
seems to be covered, but what about the start condition?

In the last round of discussion you pointed out that the index itself
can't be used effectively for this check when there are lots of equal
index keys. (As I pointed out, this could be fixed if we started using
ctid to determine the placement/order of equal keys, but I don't think
we build indexes that way now.)
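The ctid tie-break idea could look like this in toy form (illustrative Python, not how btree entries are built today): if equal keys are ordered by heap location, an exact tuple can be located by binary search even inside a long run of duplicates.

```python
from bisect import bisect_left

# Toy index entry list kept sorted by (key, ctid).  With ctid as the
# tie-breaker, a specific heap tuple has a deterministic position even
# among many equal keys.
entries = sorted([('k', (0, 3)), ('k', (0, 1)), ('a', (0, 2)), ('k', (0, 2))])

def find(entries, key, ctid):
    # Binary search for the exact (key, ctid) pair.
    i = bisect_left(entries, (key, ctid))
    return i < len(entries) and entries[i] == (key, ctid)
```

Without the ctid ordering, checking whether one particular heap tuple is present would mean scanning the whole run of equal keys.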

I still think that wedging the start of step 1 and the end of step 2
into points where no concurrent transaction is running would be the
easiest and most robust way to do it.

And if the attempts (locking periods) to find/force such a spot are
short enough, they can be tolerated in practice.
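"Short enough" attempts would amount to a bounded try-lock loop, much like the ConditionalLockAcquire/sleep loop mentioned in step 3 (illustrative Python using threading; the helper name and timing parameters are made up):

```python
import threading
import time

def acquire_quiet_point(lock, attempts=50, pause=0.01):
    # Try briefly to grab the lock; back off and retry instead of
    # blocking, so any stall imposed on concurrent writers stays short.
    for _ in range(attempts):
        if lock.acquire(blocking=False):
            lock.release()  # we only needed proof of a quiet point
            return True
        time.sleep(pause)
    return False  # give up rather than force a long locking period

lock = threading.Lock()
```

If the quiet point can't be found within the bounded window, the operation can simply retry later rather than stall the system.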

> 5. Mark index good and commit, releasing all locks.
> 
> I don't think that it's worth the effort and complexity to try to avoid
> a full table scan in step 4.  At best you would save much less than 50%
> of the total work, and the impact on normal operations is not free.

Agreed. The use case needing a concurrent index build, being already
slow, can probably be made to tolerate another 2-3x slowdown.

> If what you want is a REINDEX rather than creating an independent new
> index, then at step 5 you need to do a swap operation which'll require
> obtaining exclusive lock on the index.  This creates another opportunity
> for deadlock failures, but again a conditional loop might help.
> 
> There are still some issues about the behavior when the index is UNIQUE.
> Ideally you would like the CREATE INDEX to fail on a duplicate, not any
> concurrent writer transaction, but I don't think it's possible to
> guarantee that.

Ideally, but probably not too important in practice. One can always
argue that a unique index already exists at the point where the
concurrent transaction fails. If that point is before the end of step
2, the concurrent transaction will probably wait until the first commit
before failing, no?

> Also, I'm not sure how we get rid of the broken index if there is a
> failure later than step 2.

What about an explicit DROP INDEX? Even for REINDEX the index has to be
visible as a separate index after step 2, so that inserts and updates
will be aware of it.

--------------
Hannu
