Sounds like you'll need either an explicit "LOCK TABLE"
command, the serializable transaction isolation level,
or advisory locking.
http://www.postgresql.org/docs/8.2/interactive/explicit-locking.html#LOCKING-TABLES
http://www.postgresql.org/docs/8.2/interactive/transaction-iso.html#XACT-SERIALIZABLE
http://www.postgresql.org/docs/8.2/interactive/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS
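
For example, a rough sketch of the advisory-lock route, reusing Brian's
INSERT and keying the lock on the product_id (that key choice is just an
assumption for illustration; any key the competing sessions agree on will do):

-- Sketch only: serialize concurrent insert-if-not-exists runs for a given
-- product_id with a session-level advisory lock (8.2 has no transaction-level
-- advisory locks, so the unlock has to be explicit).
SELECT pg_advisory_lock($1);   -- blocks until any other holder releases it

BEGIN;

INSERT INTO category_product_visible (category_id, product_id)
SELECT cp.category_id, cp.product_id
FROM category_product cp
WHERE cp.product_id = $1
  AND NOT EXISTS (
        SELECT 'x'
        FROM category_product_visible cpv
        WHERE cpv.product_id = cp.product_id
          AND cpv.category_id = cp.category_id
      );

COMMIT;

-- Unlock only after COMMIT so the next holder's NOT EXISTS sees the new rows.
SELECT pg_advisory_unlock($1);

The LOCK TABLE route is the same idea: BEGIN; LOCK TABLE
category_product_visible IN SHARE ROW EXCLUSIVE MODE; then the same INSERT;
the table lock is released automatically at COMMIT.
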
-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Tom Lane
Sent: Thursday, January 25, 2007 6:21 PM
To: Brian Wipf
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Duplicate key violation
Brian Wipf <brian@clickspace.com> writes:
> I got a duplicate key violation when the following query was performed:
> INSERT INTO category_product_visible (category_id, product_id)
> SELECT cp.category_id, cp.product_id
> FROM category_product cp
> WHERE cp.product_id = $1 AND
>     not exists (
>         select 'x'
>         from category_product_visible cpv
>         where cpv.product_id = cp.product_id and
>               cpv.category_id = cp.category_id
>     );
> This is despite the fact the insert is written to only insert rows
> that do not already exist. The second time the same query was run it
> went through okay. This makes me think there is some kind of race
> condition, which I didn't think was possible with PostgreSQL's MVCC
> implementation.
If you're doing more than one of these concurrently, then of course
there's a race condition: the NOT EXISTS is testing for nonexistence
as of the query snapshot. If two sessions do this concurrently then
they'll try to insert the same rows and one of them is gonna fail.
regards, tom lane
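
To make the race Tom describes concrete, one possible interleaving under the
default READ COMMITTED isolation looks like this (a sketch; the session labels
are hypothetical):

-- A: BEGIN;
-- A: INSERT ... WHERE NOT EXISTS (...);  -- A's snapshot sees no row, so A inserts it
-- B: BEGIN;
-- B: INSERT ... WHERE NOT EXISTS (...);  -- B's snapshot doesn't see A's uncommitted
--                                        -- row, so B tries the same insert and blocks
--                                        -- on the unique index entry A just made
-- A: COMMIT;                             -- B's pending insert now conflicts with a
--                                        -- committed row: duplicate key violation
-- B: (fails, transaction aborted)
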