From: "Alvar Freude" <alvar.freude@gate.agi.de>
> Hi,
>
> what is the best method to make concurrent inserts to a table with
> unique/primary key?
>
> Scenario:
> I am writing a DBI logger for Apache, and it uses a table for all referers:
>
>
> CREATE TABLE referer (
> id SERIAL,
> referer varchar(2048) NOT NULL PRIMARY KEY
> );
>
>
> So you can imagine that two accesses with the same referer arrive at
> the same time; at logging time, each process checks whether there is
> already an entry for this referer and fetches its id, and if not, it
> inserts the new referer.
Why have you got id as a serial if referer is your primary key? Oh - I
suppose it's easier to reference a serial of course, less data to carry
around.
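If the serial is what other tables carry around, the more usual
arrangement would be to make id the primary key and put a unique
constraint on referer - just a sketch of the same schema rearranged:

CREATE TABLE referer (
    id SERIAL PRIMARY KEY,
    referer varchar(2048) NOT NULL UNIQUE
);

Either way, the unique constraint on referer is what the concurrent
inserts will trip over.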
> So it is possible that two processes try to insert the same primary
> key into the table.
Well, they'll try.
> My solution is: if the transaction is aborted, I restart the whole
> transaction (there is more than one insert like this per request) a
> second time. But I cannot be sure that the transaction was aborted
> because of a duplicate unique key, and it does not seem the most
> elegant solution to me.
I don't see an alternative if you wrap several inserts into a transaction.
You're saying you want all to succeed or none of them. I presume you're
doing this for performance reasons.
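You can at least tell why a transaction aborted: a duplicate-key
failure comes back with SQLSTATE 23505 (unique_violation), which your
client code can check before deciding whether a retry is worthwhile.
And if your PostgreSQL is new enough to have exception handling in
PL/pgSQL (8.0 or later), you can absorb the conflict in a
subtransaction so the outer transaction survives. A minimal sketch,
assuming READ COMMITTED isolation - get_referer_id is a made-up name:

CREATE FUNCTION get_referer_id(in_ref varchar) RETURNS integer AS $$
DECLARE
    ref_id integer;
BEGIN
    LOOP
        -- try to find an existing entry first
        SELECT id INTO ref_id FROM referer WHERE referer = in_ref;
        IF FOUND THEN
            RETURN ref_id;
        END IF;
        BEGIN
            -- not there yet: try to insert it ourselves
            INSERT INTO referer (referer) VALUES (in_ref);
            RETURN currval('referer_id_seq');
        EXCEPTION WHEN unique_violation THEN
            -- another process beat us to it; loop round and the
            -- re-SELECT will find the row once it has committed
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

The EXCEPTION block runs the INSERT in a subtransaction, so a
duplicate key rolls back only that one statement instead of your
whole batch of inserts.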
There's really no easy way around this AFAIK - no matter what you do there
is always the possibility that another process is inserting the same
referrer as you in parallel.
The only way I can think of to avoid the conflict entirely is to
insert into a staging table where referer isn't unique, and to copy
into the real table from a snapshot of that staging table - a rough
sketch below. Not sure that's a cleaner solution than yours, though.
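Here referer_staging is a made-up name; the loggers insert into it
without ever conflicting, and a single housekeeping process folds new
rows into the real table:

CREATE TABLE referer_staging (
    referer varchar(2048) NOT NULL
);

-- run from one process only, inside a single transaction
BEGIN;
INSERT INTO referer (referer)
    SELECT DISTINCT s.referer
      FROM referer_staging s
     WHERE NOT EXISTS
           (SELECT 1 FROM referer r WHERE r.referer = s.referer);
-- delete only rows that made it into the real table, so anything
-- logged after the INSERT took its snapshot is kept for the next run
DELETE FROM referer_staging
 WHERE EXISTS
       (SELECT 1 FROM referer r
         WHERE r.referer = referer_staging.referer);
COMMIT;

The obvious cost is that a new referer has no id until the
housekeeping pass has run, so the logger can't record the id at
request time.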
- Richard Huxton