Hannu Krosing <hannu@skype.net> writes:
> On one fine day, Wed, 2006-08-23 at 11:05, Hannu Krosing wrote:
> >
> > Maybe we could find a way to build a non-unique index first and then
> > convert it to a unique one later, in yet another pass ?
>
> Or even add ALTER INDEX myindex ADD/DROP UNIQUE; command
That would be great. But note that it suffers from precisely the same problem:
if you come across two tuples with the same key and one of them is
DELETE_IN_PROGRESS, you have to wait until you can acquire a share lock on the
deleting transaction before you can determine whether there's a constraint
violation.
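For concreteness, here's roughly what that wait looks like. Treat it as a
sketch only: XactLockTableWait() and HeapTupleHeaderGetXmax() are the real
backend facilities, but the helper and its control flow are hypothetical.

    #include "postgres.h"
    #include "access/htup.h"
    #include "storage/lmgr.h"

    /*
     * Sketch only -- not real ALTER INDEX code.  Block until the
     * transaction that is deleting "tuple" commits or aborts.
     */
    static void
    wait_for_deleter(HeapTuple tuple)
    {
        TransactionId xwait = HeapTupleHeaderGetXmax(tuple->t_data);

        /* Internally this takes a ShareLock on the xid, which is the
         * share lock mentioned above. */
        XactLockTableWait(xwait);

        /* The caller must now re-fetch the tuple and redo the
         * visibility check: the duplicate may be gone (delete
         * committed) or permanent (delete aborted). */
    }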
Hmm. Or is that true? This problem may be somewhat easier, since at least you
can be sure every tuple in the heap is in the index. So if you see a
DELETE_IN_PROGRESS tuple, either it *was* a constraint violation prior to the
delete, in which case failing is reasonable, or it's an update, in which case
maybe it's possible to detect that the two tuples are part of the same update
chain?
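Something like this is what I have in mind for the chain test. The macros are
real (access/htup.h, access/transam.h, storage/itemptr.h), but the helper
itself is strictly hypothetical:

    #include "postgres.h"
    #include "access/htup.h"
    #include "access/transam.h"
    #include "storage/itemptr.h"

    /*
     * Hypothetical helper: are "older" and "newer" two versions of the
     * same row, i.e. an UPDATE in flight rather than a real duplicate?
     */
    static bool
    tuples_in_same_chain(HeapTuple older, HeapTuple newer)
    {
        /* The updater of the old version must be the inserter of the
         * new version... */
        if (!TransactionIdEquals(HeapTupleHeaderGetXmax(older->t_data),
                                 HeapTupleHeaderGetXmin(newer->t_data)))
            return false;

        /* ...and the old version's forward t_ctid link must point at
         * the new version. */
        return ItemPointerEquals(&older->t_data->t_ctid, &newer->t_self);
    }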
(Actually there is another corner case: a transaction that inserts a value,
deletes it within the same transaction, then inserts that same value again.
Now you have an INSERT_IN_PROGRESS and a DELETE_IN_PROGRESS tuple that
conflict but should be allowed, since they come from the same transaction.
Hopefully the ALTER INDEX command would be able to determine that they come
from the same transaction.)
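That test seems easy enough, assuming we have both tuples in hand when we
notice the conflict. Again a hypothetical helper, using the same real macros
as the sketch above:

    /*
     * Hypothetical: a conflicting INSERT_IN_PROGRESS/DELETE_IN_PROGRESS
     * pair is harmless if the transaction that deleted one copy is the
     * one that inserted the other.
     */
    static bool
    conflict_is_self_inflicted(HeapTuple deleted, HeapTuple inserted)
    {
        return TransactionIdEquals(HeapTupleHeaderGetXmax(deleted->t_data),
                                   HeapTupleHeaderGetXmin(inserted->t_data));
    }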
In the case of concurrent index builds that's not really safe, since you don't
have all the tuples you're conflicting with in hand at the same time, and even
if you did, you may or may not have a complete set of them.
Tom's right. This stuff is tricky.
--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com