Let me start by saying I understand that PostgreSQL does not support the
following: composite data types with individual components acting as
foreign keys, arrays of composite data types, and arrays whose elements
act as foreign keys. I will lay out my example using them anyway for
clarity, even though they don't exist.
OK, let me lay out a rough example of my problem:
CREATE TABLE target_node1 (
    id    int4 PRIMARY KEY,
    value text
);
CREATE TABLE target_node2 (
    id    int4 PRIMARY KEY,
    value text
);
CREATE TYPE composite1 AS (
    idx       int4,
    reference int4
);
CREATE TYPE composite2 AS (
    idx        int4,
    reference1 int4,
    reference2 int4
);
CREATE TABLE example_table (
    id    int4,
    value text,
    type1 composite1[],
    type2 composite2[]
);
ALTER TABLE example_table ADD FOREIGN KEY ((type1).reference)
    REFERENCES target_node1 (id);
ALTER TABLE example_table ADD FOREIGN KEY ((type2).reference1)
    REFERENCES target_node1 (id);
ALTER TABLE example_table ADD FOREIGN KEY ((type2).reference2)
    REFERENCES target_node2 (id);
In addition, I want a uniqueness check enforced on INSERT and UPDATE
that first checks that the text in the value column is unique. If that
fails, I then need to check that the type1 and type2 arrays are unique
and act accordingly.
Yes, I understand all of that is completely wrong, but hopefully it
clarifies my intentions.
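Roughly, the check I have in mind would behave like the trigger sketch
below. Again, it leans on the hypothetical type1/type2 columns, and the
function and trigger names are just placeholders:

CREATE OR REPLACE FUNCTION example_table_unique_check() RETURNS trigger AS $$
BEGIN
    -- First test: the text in the value column alone is unique.
    IF NOT EXISTS (SELECT 1 FROM example_table
                   WHERE value = NEW.value AND id <> NEW.id) THEN
        RETURN NEW;
    END IF;
    -- The value already exists, so the type1/type2 arrays must still differ.
    IF EXISTS (SELECT 1 FROM example_table
               WHERE value = NEW.value
                 AND type1 = NEW.type1
                 AND type2 = NEW.type2
                 AND id <> NEW.id) THEN
        RAISE EXCEPTION 'duplicate row in example_table';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER example_table_unique_check
    BEFORE INSERT OR UPDATE ON example_table
    FOR EACH ROW EXECUTE PROCEDURE example_table_unique_check();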
Essentially, I want lists of a custom type (hopefully a composite type)
where individual data members of the type are foreign keys to other
tables. I also need to use those custom types in uniqueness checks, in
combination with other information in the table.
The solutions as I see them:
1. Parallel arrays for the data members of the types; in other words, an
array for composite1.idx, an array for composite1.reference, and so on
(a rough sketch follows this item). I could then take the array of
interest and create triggers that would allow the array to act as a set
of foreign keys. However, this is slow when updating or deleting from a
table like target_node1. It also seems accident prone, given that the
values are all separated from each other, and it is hardly
human-readable.
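For illustration, option 1 would turn example_table into something like
this, with the element-by-element foreign-key enforcement living in
triggers that I haven't written out (the column names are just how I
would lay it out):

CREATE TABLE example_table (
    id    int4 PRIMARY KEY,
    value text,
    -- parallel arrays standing in for composite1[]
    type1_idx        int4[],
    type1_reference  int4[],  -- each element should match a target_node1.id
    -- parallel arrays standing in for composite2[]
    type2_idx        int4[],
    type2_reference1 int4[],  -- each element should match a target_node1.id
    type2_reference2 int4[]   -- each element should match a target_node2.id
);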
2. Normalize this and move the composite types into their own tables
that reference example_table (sketched below). It is a many-to-one
relationship after all, and I am sure this is the route you guys will
probably suggest. However... if I do that, how the hell do I maintain my
uniqueness check on insert into example_table, given that half of the
data I am checking is in another table and hasn't been inserted yet for
the new row? I am totally stuck on this.
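For what it's worth, I picture option 2 looking roughly like this (the
table and column names are placeholders, and it assumes target_node1.id
and target_node2.id are primary keys):

CREATE TABLE example_table (
    id    int4 PRIMARY KEY,
    value text
);

CREATE TABLE composite1 (
    example_id int4 NOT NULL REFERENCES example_table (id),
    idx        int4,
    reference  int4 NOT NULL REFERENCES target_node1 (id)
);

CREATE TABLE composite2 (
    example_id int4 NOT NULL REFERENCES example_table (id),
    idx        int4,
    reference1 int4 NOT NULL REFERENCES target_node1 (id),
    reference2 int4 NOT NULL REFERENCES target_node2 (id)
);

With this layout my uniqueness check spans rows in three tables, which
is exactly the part I don't know how to handle.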
3. Something with truly custom data types (i.e., C functions loaded into
the backend). To be honest, I don't even know how feasible that is or
how much work it would take. I have defined my own custom types before
(a shell of what I mean is below), but never any indexing functions for
them. What would I have to define beyond the type itself? How would the
foreign keys work in this situation, if at all?
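For reference, by "defined my own custom types before" I mean a shell
like this, where mytype_in and mytype_out are C functions compiled into
a shared library (the library and function names here are placeholders);
what I don't know is what the uniqueness and foreign-key machinery would
need on top of this:

CREATE FUNCTION mytype_in(cstring) RETURNS mytype
    AS 'my_extension', 'mytype_in' LANGUAGE C STRICT;
CREATE FUNCTION mytype_out(mytype) RETURNS cstring
    AS 'my_extension', 'mytype_out' LANGUAGE C STRICT;

CREATE TYPE mytype (
    INPUT          = mytype_in,
    OUTPUT         = mytype_out,
    INTERNALLENGTH = 8
);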
Help! This is seriously important to my project... but naturally, the
less complicated the solution, the better.
Thanks in advance,
Morgan