Re: Hard problem with concurrency - Mailing list pgsql-hackers

From: Greg Stark
Subject: Re: Hard problem with concurrency
Msg-id: 87smunfr16.fsf@stark.dyndns.tv
In response to: Re: Hard problem with concurrency ("Christopher Kings-Lynne" <chriskl@familyhealth.com.au>)
List: pgsql-hackers
Hm, odd, nobody mentioned this solution:

If you don't have a primary key already, create a unique index on the
combination you want to be unique. Then:

. Try to insert the record.
. If you get a duplicate key error, do an update instead.

There's no possibility of duplicate records due to race conditions. If two
people try to insert/update at the same time you'll only get one of the two
results, but that's the inherent downside of the general approach you've taken.
It's a tad inefficient if the usual case is updates, but certainly no less
efficient than doing table locks.
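The two steps above can be sketched in application code. This is a minimal illustration using SQLite from Python's standard library (table and column names here are made up); against PostgreSQL the failed insert surfaces as a unique-violation error rather than sqlite3.IntegrityError, but the pattern is the same:

```python
import sqlite3

def upsert(conn, name, hits):
    """Insert-then-update pattern: rely on the unique index to detect races."""
    try:
        # Step 1: optimistically try the insert.
        conn.execute("INSERT INTO counters (name, hits) VALUES (?, ?)",
                     (name, hits))
    except sqlite3.IntegrityError:
        # Step 2: the unique index rejected a duplicate key, so update instead.
        conn.execute("UPDATE counters SET hits = hits + ? WHERE name = ?",
                     (hits, name))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT, hits INTEGER)")
# The unique index is what makes this safe against concurrent inserters.
conn.execute("CREATE UNIQUE INDEX counters_name_idx ON counters (name)")

upsert(conn, "page1", 1)   # first call takes the insert path
upsert(conn, "page1", 2)   # second call hits the duplicate key and updates
total = conn.execute(
    "SELECT hits FROM counters WHERE name = 'page1'").fetchone()[0]
print(total)  # prints 3
```

Note the whole correctness argument rests on the unique index: without it, two concurrent inserters could both succeed and leave duplicates behind.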

I'm not sure what you're implementing here. Depending on what it is you might
consider having a table of raw data that you _only_ insert into. Then you
process those results into a table with the consolidated data you're trying to
gather. I've usually found that's more flexible later because then you have
all the raw data in the database even if you only present a limited view.
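The raw-data alternative can be sketched the same way: every event is a plain insert into an append-only table, and the consolidated figures are computed from it afterwards. Again a stdlib SQLite sketch with hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Append-only raw table: inserts never conflict, so there is no contention.
conn.execute("CREATE TABLE raw_hits (page TEXT, at TEXT)")
for page in ["home", "home", "about"]:
    conn.execute(
        "INSERT INTO raw_hits (page, at) VALUES (?, datetime('now'))",
        (page,))
conn.commit()

# Consolidation step: derive the summary view from the raw rows on demand
# (or periodically into a summary table).
rows = conn.execute(
    "SELECT page, COUNT(*) FROM raw_hits GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # prints [('about', 1), ('home', 2)]
```

Because the raw rows are kept, you can later recompute different aggregates without having lost any information to an in-place update.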

-- 
greg


