The current Unique constraint relies directly upon a Unique index to
test for uniqueness.
This has two disadvantages:
* only B-trees currently support uniqueness
* we need to create an index on *all* of the columns of the primary key,
which may end up being a very large index
The uniqueness check code searches for the value being inserted; if a
visible matching value is found, the insert is rejected. We currently
use the same index scan key for the uniqueness check as we do for the
index search.
If the uniqueness check used a scan key consisting of all of the
Primary Key columns, rather than just the index columns, then it would
be able to scan through non-unique index entries to check uniqueness.
Interestingly, the current uniqueness check code already scans through
multiple tuples because of the possible existence of multiple row
versions. So we just need to supply a different scan key.
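As a sketch of the idea (table, column, and index names here are
hypothetical, and the commented-out probe only describes the intended
internal behaviour, not something you would run):

```sql
-- Hypothetical schema: the full primary key is (a, b, c, d, e),
-- but the supporting index covers only the leading column.
CREATE TABLE foo (a int, b int, c int, d int, e int);
CREATE INDEX foo_a_idx ON foo (a);   -- ordinary, non-unique index

-- Conceptually, the insert-time uniqueness check would scan all index
-- entries matching the leading column (the index search part), then
-- compare the remaining Primary Key columns from the widened scan key,
-- rejecting the insert on any visible match:
--
--   SELECT 1 FROM foo
--    WHERE a = $1                        -- index search columns
--      AND b = $2 AND c = $3
--      AND d = $4 AND e = $5;            -- extra scan-key columns
```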
If we extended the definition of a PRIMARY KEY to include an existing
index, like this:
ALTER TABLE foo ADD PRIMARY KEY (...) USING INDEX index_name;
then we would be able to specify what we want.
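A worked example of how that might look (hedged: this syntax is part of
the proposal, not existing behaviour, and the names are illustrative):

```sql
-- Build an ordinary, non-unique hash index on the key column...
CREATE INDEX foo_id_hash ON foo USING hash (id);

-- ...then declare the primary key over it. Uniqueness would be
-- enforced by the widened scan-key check described above, not by the
-- index access method's own uniqueness support.
ALTER TABLE foo ADD PRIMARY KEY (id) USING INDEX foo_id_hash;
```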
This would then allow us to use a Hash index, or another index type, as
the basis for a Unique constraint, and/or to considerably reduce the
size of indexes.
Frequently the full unique key could be 5 or 6 columns, even though the
leading columns might be sufficiently close to unique to make this
technique worthwhile. It's also common to want to store the hash() of a
value rather than the value itself, but the hash typically can't be
guaranteed unique, even though the probability of collisions may be
very low.
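For instance (a sketch under assumptions: hashtext() is an internal
PostgreSQL hash function, and the comment describes the proposed
behaviour, not what the server does today):

```sql
CREATE TABLE docs (
    url  text,
    body text
);

-- Index the 32-bit hash of the potentially long url rather than the
-- url itself, keeping the index entries small.
CREATE INDEX docs_url_hash_idx ON docs (hashtext(url));

-- Under the proposal, an insert's uniqueness check would locate all
-- entries with the same hashtext(url) value -- collisions included --
-- and then compare the full url from the scan key before deciding
-- whether to reject.
```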
Thoughts?
-- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com
PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk