Re: Tricky bugs in concurrent index build - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Tricky bugs in concurrent index build
Date
Msg-id 17859.1156336503@sss.pgh.pa.us
In response to Re: Tricky bugs in concurrent index build  (Greg Stark <gsstark@mit.edu>)
List pgsql-hackers
Greg Stark <gsstark@mit.edu> writes:
> But then wouldn't we have deadlock risks? If we come across these records in a
> different order from someone else (possibly even the deleter) who also wants
> to lock them? Or would it be safe to lock and release them one by one so we
> only ever hold one lock at a time?

AFAICS we could release the lock as soon as we've inserted the index
entry.  (Whether there is any infrastructure to do that is another
question...)
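
A minimal sketch of that one-lock-at-a-time idea, for concreteness; the
helpers lock_heap_tuple(), insert_index_entry() and unlock_heap_tuple()
are illustrative stand-ins, not the actual backend API.  Since the lock
is dropped before the next tuple is touched, a backend never waits while
holding another tuple lock, so there is no lock-ordering deadlock:

    /*
     * For each heap tuple still missing an index entry: lock it, insert
     * the entry, release the lock.  At most one tuple lock is held at
     * any instant.  Helper names are hypothetical.
     */
    static void
    insert_missing_entries(Relation indexRel, List *missing)
    {
        ListCell   *lc;

        foreach(lc, missing)
        {
            HeapTuple   tup = (HeapTuple) lfirst(lc);

            lock_heap_tuple(tup);        /* waits out a concurrent deleter */
            insert_index_entry(indexRel, tup);
            unlock_heap_tuple(tup);      /* released before the next tuple */
        }
    }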

> I'm also pondering whether it might be worth saving up all the
> DELETE_IN_PROGRESS tuples in a second tuplesort and processing them all in a
> third phase. That seems like it would reduce the amount of waiting that might
> be involved. The fear I have though is that this third phase could become
> quite large.

Actually --- a tuple that is live when we do the "second pass" scan
could well be DELETE_IN_PROGRESS (or even RECENTLY_DEAD) by the time we
do the merge and discover that it hasn't got an index entry.  So offhand
I'm thinking that we *must* take a tuple lock on *every* tuple we insert
in stage two.  Ugh.
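
Sketched with the same hypothetical helpers (next_unindexed_tuple() and
entry_still_required() are illustrative, not real backend functions),
the stage-two merge would then take the tuple lock unconditionally,
since the "live" verdict from the second-pass scan may already be stale
by the time the missing entry is noticed:

    /*
     * Stage-two merge: lock every candidate tuple before inserting,
     * because its state may have changed since the scan.  Only under
     * the lock, after any in-progress deleter has finished, can we
     * decide whether the entry is still required.
     */
    while ((tup = next_unindexed_tuple(mergeState)) != NULL)
    {
        lock_heap_tuple(tup);            /* serialize against a deleter */
        if (entry_still_required(tup))   /* re-test visibility under lock */
            insert_index_entry(indexRel, tup);
        unlock_heap_tuple(tup);
    }
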
        regards, tom lane

