Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE
Date
Msg-id 52D45382.3080604@vmware.com
In response to Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE  (Peter Geoghegan <pg@heroku.com>)
Responses Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE  (Peter Geoghegan <pg@heroku.com>)
List pgsql-hackers
On 01/13/2014 10:53 PM, Peter Geoghegan wrote:
> On Mon, Jan 13, 2014 at 12:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> For what it's worth, I agree with Heikki.  There's probably nothing
>> sensible an upsert can do if it conflicts with more than one tuple,
>> but if it conflicts with just exactly one, it oughta be OK.
>
> If there is exactly one, *and* the existing value is exactly the same
> as the value proposed for insertion (or, I suppose, a subset of the
> existing value, but that's so narrow that it might as well not apply).
> In short, when you're using an exclusion constraint as a unique
> constraint. Which is very narrow indeed. Weighing the costs and the
> benefits, that seems like far more cost than benefit, before we even
> consider anything beyond simply explaining the applicability and
> limitations of upserting with exclusion constraints. It's generally
> far cleaner to define speculative insertion as something that happens
> with unique indexes only.
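
To make that narrow case concrete, an exclusion constraint doing duty as a 
unique constraint looks roughly like this (table and column names are made 
up purely for illustration):

    -- Illustrative sketch only: an exclusion constraint over btree equality
    -- behaves like a unique constraint, so at most one existing row can
    -- conflict with any proposed value.
    CREATE TABLE exclusion_as_unique (
        id  int,
        EXCLUDE USING btree (id WITH =)
    );

Anything beyond plain equality (range overlap with &&, and so on) can 
conflict with several existing rows at once, which is the case being argued 
about here.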

Well, even if you don't agree that locking all the conflicting rows for 
update is sensible, it's still perfectly sensible to return the rejected 
rows to the user. For example, if you're inserting N rows and some of them 
violate a constraint, you still want to insert the non-conflicting rows 
instead of rolling back the whole transaction.
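
A minimal sketch of that behaviour, written with the INSERT ... ON CONFLICT 
DO NOTHING syntax that PostgreSQL eventually shipped rather than the syntax 
proposed in this thread (table and values are made up for illustration):

    -- Proposed rows that would violate the primary key are skipped instead
    -- of aborting the whole statement; RETURNING reports only the rows that
    -- were actually inserted, so the "rejected" rows are the input rows
    -- missing from the result.
    CREATE TABLE upsert_demo (key text PRIMARY KEY, val int NOT NULL);
    INSERT INTO upsert_demo VALUES ('a', 1), ('b', 2);

    INSERT INTO upsert_demo (key, val)
    VALUES ('a', 10), ('c', 3)
    ON CONFLICT (key) DO NOTHING
    RETURNING key, val;
    -- Returns only ('c', 3); the conflicting ('a', 10) row is skipped.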

- Heikki


