Re: Tricky bugs in concurrent index build - Mailing list pgsql-hackers

From stark
Subject Re: Tricky bugs in concurrent index build
Date
Msg-id 87u043fr6o.fsf@enterprisedb.com
In response to Re: Tricky bugs in concurrent index build  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Tom Lane <tgl@sss.pgh.pa.us> writes:

>> Or do you mean we use SatisfiesVacuum to determine what to insert but
>> SatisfiesSnapshot to determine whether to check uniqueness?
>
> Right.  The problems seem to all stem from the risk of trying to
> unique-check more than one version of a tuple, and using a snap would
> stop that.  We need to think through all the cases though and be sure
> they all work.

What happens if someone inserts a record that we miss, but it gets deleted by
the time phase 2 starts? It's not visible to phase 2, but it conflicts with
some other record we do find. I suppose that's OK, since the delete would have
to have committed for that to happen. It just means that having a unique
constraint doesn't guarantee uniqueness if your transaction started before the
index was finished being built.
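To make the split concrete, here's a minimal stand-alone C sketch of the rule
being discussed; the types and names (VacState, TupleStub, build_phase2_entry)
are stand-ins of mine, not the real SatisfiesVacuum/SatisfiesSnapshot API.
Whatever the vacuum-style test says is not dead gets an index entry, but only
tuples visible to the reference snapshot get unique-checked, which is why a
version deleted before phase 2 started can't raise a spurious conflict:

/*
 * Hypothetical illustration only: these types and functions are stand-ins,
 * not PostgreSQL's internal API.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum
{
    VAC_DEAD,               /* dead to everyone: skip entirely */
    VAC_LIVE,               /* live: index it */
    VAC_RECENTLY_DEAD       /* deleted, but possibly still visible to old xacts */
} VacState;

typedef struct
{
    int      id;
    VacState vac_state;        /* result of a SatisfiesVacuum-style test */
    bool     visible_to_snap;  /* result of a SatisfiesSnapshot-style test */
} TupleStub;

/*
 * Phase-2 rule under discussion: insert based on the vacuum-style test,
 * but only enforce uniqueness for tuples the reference snapshot can see,
 * so at most one version of any row gets unique-checked.
 */
static void
build_phase2_entry(const TupleStub *tup)
{
    if (tup->vac_state == VAC_DEAD)
        return;                                 /* never indexed */

    bool check_unique = tup->visible_to_snap;

    printf("tuple %d: insert, unique check = %s\n",
           tup->id, check_unique ? "yes" : "no");
}

int
main(void)
{
    /* Version deleted before phase 2: indexed, but not unique-checked. */
    TupleStub old_version = { 1, VAC_RECENTLY_DEAD, false };
    /* Current version: indexed and unique-checked. */
    TupleStub new_version = { 2, VAC_LIVE, true };

    build_phase2_entry(&old_version);
    build_phase2_entry(&new_version);
    return 0;
}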

Or what if there's an insert that occurs before phase 2 starts and hasn't
committed yet? Then there's a conflicting record in the heap that's missing
from the index. I guess the build would have to block when it finds the
missing record until the new insert either commits or aborts, just like a
normal insert does when a user inserts a potential conflict. Would I have to
handle that myself, or does index_insert handle it automatically?
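If the build does have to handle it itself, I'd expect the pattern to look
roughly like the sketch below; wait_for_transaction and the types are
hypothetical stand-ins, not the real index AM interface. On finding a possible
duplicate whose inserter is still in progress, block until that transaction
ends, then treat the key as a real conflict only if it committed:

/*
 * Hypothetical sketch of the blocking behaviour described above; all names
 * are stand-ins, not PostgreSQL's internal API.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int XidStub;

/*
 * Stand-in for "block until transaction `xid` commits or aborts".
 * Returns true if it committed.  A real implementation would sleep on the
 * other transaction's lock rather than guess.
 */
static bool
wait_for_transaction(XidStub xid)
{
    printf("blocking until xid %u finishes...\n", xid);
    return true;    /* placeholder outcome */
}

/*
 * Found a heap tuple with the same key whose inserting transaction is
 * still open: wait it out, then decide whether it's a genuine duplicate.
 */
static bool
duplicate_after_wait(XidStub inserter_xid)
{
    bool committed = wait_for_transaction(inserter_xid);

    /* Committed insert => real conflict; aborted insert => no conflict. */
    return committed;
}

int
main(void)
{
    if (duplicate_after_wait(1234))
        printf("unique violation: key already committed by another xact\n");
    else
        printf("other insert aborted; no conflict\n");
    return 0;
}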

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com


