Hm, table constraints aren't so unique as all that

Over in the thread about enhanced error fields, I claimed that
"constraints are uniquely named among those associated with a table,
or with a domain".  But it turns out that that ain't necessarily so,
because the code path for index constraints doesn't pay any attention
to pre-existing check constraints:

d1=# create table t1 (f1 int);
CREATE TABLE
d1=# alter table t1 add constraint c1 check (f1 > 0);
ALTER TABLE
d1=# alter table t1 add constraint c1 unique (f1);
ALTER TABLE
d1=# \d t1
     Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 f1     | integer | 
Indexes:
    "c1" UNIQUE CONSTRAINT, btree (f1)
Check constraints:
    "c1" CHECK (f1 > 0)
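
(For the record, both entries are plainly visible in pg_constraint; a query
along the lines of

d1=# select conname, contype, conrelid::regclass
d1-# from pg_constraint where conrelid = 't1'::regclass;

shows two rows named "c1", one with contype 'c' and one with 'u'.  The query
is only illustrative, but those are the real catalog columns.)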

If you do this in the other order it does get rejected:

d1=# create table t2 (f1 int);
CREATE TABLE
d1=# alter table t2 add constraint c2 unique (f1);
ALTER TABLE
d1=# alter table t2 add constraint c2 check (f1 > 0);
ERROR:  constraint "c2" for relation "t2" already exists

Aside from being plain inconsistent, this seems to me to create a
dump/reload hazard: pg_dump has no idea that it would have to dump
these two constraints in a particular order to make them reloadable.

In practice there's not such a big risk because pg_dump prefers to stick
CHECK constraints directly into the CREATE TABLE rather than add them
after-the-fact.  But if it had to split off the CHECK constraint to
avoid a circularity problem, I don't believe there's anything preventing
a reload failure.
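
To be concrete about the hazard: if pg_dump did split the CHECK constraint
out into a separate ALTER, and happened to emit the index constraint first,
the restore script would look something like

CREATE TABLE t1 (f1 integer);
ALTER TABLE ONLY t1 ADD CONSTRAINT c1 UNIQUE (f1);
-- the next command fails: constraint "c1" for relation "t1" already exists
ALTER TABLE ONLY t1 ADD CONSTRAINT c1 CHECK (f1 > 0);

(a hypothetical script, not actual pg_dump output, but it's the same
ordering that gets rejected in the t2 example above).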

I think we need to tighten this down by having index-constraint creation
check for conflicts with other constraint types.  It also seems like it
might be a good idea to put in a unique index to enforce the intended
lack of conflicts --- note that the existing index on (conname,
connamespace) isn't unique.  It's a bit problematic that pg_constraint
contains both table-related constraints and domain-related constraints,
but it strikes me that we could get close enough by changing
pg_constraint_conname_nsp_index to be a unique index on
(conname, connamespace, conrelid, contypid).  That would fix the problem
as long as no pg_constraint entry ever has both conrelid and contypid
nonzero; the unique index couldn't catch such an error.  But it doesn't
seem to me that such a coding error would escape detection anyway.
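
In SQL terms, the proposed index would amount to roughly

CREATE UNIQUE INDEX pg_constraint_conname_nsp_index
    ON pg_constraint (conname, connamespace, conrelid, contypid);

(shown only to make the intended key clear; the real change would of course
be to the DECLARE_INDEX entry for that index in
src/include/catalog/indexing.h, turning it into a DECLARE_UNIQUE_INDEX).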

Of course this wouldn't be material for back-patching, but it seems to
me there's still time to fix this for 9.3, and we should do so if we
want to claim that the enhanced-errors patch uniquely identifies
constraints.

Thoughts?
        regards, tom lane


