Re: INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0 - Mailing list pgsql-hackers

From Peter Geoghegan
Subject Re: INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0
Date
Msg-id CAM3SWZSqo9=KwCnTOZ5pva6AyU6h4HpojXfO9MjEBz8kjo27WQ@mail.gmail.com
In response to Re: INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0  (Heikki Linnakangas <hlinnakangas@vmware.com>)
Responses Re: INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0  (Peter Geoghegan <pg@heroku.com>)
Re: INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0  (Heikki Linnakangas <hlinnaka@iki.fi>)
List pgsql-hackers
On Mon, Feb 16, 2015 at 12:00 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
> So INSERT ON CONFLICT IGNORE on a table with an exclusion constraint might
> fail. I don't like that. The point of having the command in the first place
> is to deal with concurrency issues. If it sometimes doesn't work, it's
> broken.

I don't like it either, although I think it wouldn't come up very
often with exclusion constraints - given how differently they tend to
be used, the failure would rarely be noticed in practice. To be
honest, in suggesting
this idea I was hedging against us not figuring out a solution to that
problem in time. You didn't like my suggestion of dropping IGNORE
entirely, either. I'll do my best to come up with something, but I'm
uncomfortable that at this late stage, ON CONFLICT IGNORE support for
exclusion constraints seems like a risk to the entire project.

I ask that if push comes to shove you show some flexibility here. I'll
try my best to ensure that you don't have to even consider committing
something with a notable omission. You don't have to give me an answer
to this now.

> The idea of comparing the TIDs of the tuples as a tie-breaker seems most
> promising to me. If the conflicting tuple's TID is smaller than the inserted
> tuple's, super-delete and wait. Otherwise, wait without super-deletion.
> That's really simple. Do you see a problem with that?

No, I don't see a problem. I'll work on that, and see how it stands
up to stress testing. Come to think of it, that does seem like the
most promising approach.
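
To spell out the rule I'm planning to implement, here is a rough,
self-contained sketch with stand-in types (SimpleTid,
simple_tid_compare() and should_super_delete() are made up for
illustration - the real thing would compare ItemPointerData inside
the speculative insertion path):

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for ItemPointerData */
typedef struct SimpleTid
{
    uint32_t block;     /* block number */
    uint16_t offset;    /* offset within the block */
} SimpleTid;

int
simple_tid_compare(SimpleTid a, SimpleTid b)
{
    if (a.block != b.block)
        return (a.block < b.block) ? -1 : 1;
    if (a.offset != b.offset)
        return (a.offset < b.offset) ? -1 : 1;
    return 0;
}

/*
 * Returns true iff we should super-delete our own speculatively
 * inserted tuple before waiting on the conflicting inserter; false
 * means we keep our tuple and just wait.  Exactly one of the two
 * sessions gets "true", which is what breaks the deadlock cycle.
 */
bool
should_super_delete(SimpleTid conflicting, SimpleTid ours)
{
    return simple_tid_compare(conflicting, ours) < 0;
}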

> BTW, it would good to explain somewhere in comments or a README the term
> "unprincipled deadlock". It's been a useful concept, and hard to grasp. If
> you defined it once, with examples and everything, then you could just have
> "See .../README" in other places that need to refer it.

Agreed. I have described those in the revised executor README, in case
you missed that. Do you think they ought to have their own section?
Note that hash indexes have "unprincipled deadlock" issues, but no one
has bothered to fix them.

>> Whatever works, really. I can't say that the performance implications
>> of acquiring that hwlock are at the forefront of my mind. I never
>> found that to be a big problem on an 8 core box, relative to vanilla
>> INSERTs, FWIW - lock contention is not normal, and may be where any
>> heavyweight lock costs would really be encountered.
>
> Oh, cool. I guess the fast-path in lmgr.c kicks ass, then :-).

Seems that way. But even if that weren't true, it wouldn't matter,
since I don't see that we have a choice.

>> I don't see how storing the promise token in heap tuples buys us not
>> having to involve heavyweight locking of tokens. (I think you may have
>> made a thinko in suggesting otherwise)
>
> It wouldn't get rid of heavyweight locks, but it would allow getting rid of
> the procarray changes. The inserter could generate a token, then acquire the
> hw-lock on the token, and lastly insert the heap tuple with the correct
> token.

Do you really think that's worth the disk overhead? I generally agree
with the "zero overhead" principle, which is that anyone not using the
feature should pay no price for it (or vanishingly close to no
price). I can't say that I have a good sense of the added distributed
cost (if any) to be paid by adding new fields to the PGPROC struct
(since the PGXACT struct was added in 9.2). Is your only concern that
the PGPROC array will now have more fields, making it more
complicated? Surely that's a price worth paying to avoid making these
heap tuples artificially somewhat larger. You're probably left with
tuples that are at least 8 bytes larger, once alignment is taken into
consideration. That doesn't seem great.
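
Just to show where the 8 bytes figure comes from, a back-of-the-envelope
sketch (assuming a 4-byte token field and 8-byte maximum alignment; the
MAXALIGN macro here is a simplified stand-in for the real one):

#include <stdio.h>

#define MAXALIGN(len) (((len) + 7) & ~((size_t) 7))

int
main(void)
{
    size_t header_now = 23;            /* 23-byte fixed part of the heap tuple header */
    size_t header_with_token = 23 + 4; /* plus a 4-byte promise token */

    /* prints 24 and 32: an extra 8 bytes per tuple after alignment */
    printf("padded header today:      %zu bytes\n", MAXALIGN(header_now));
    printf("padded header with token: %zu bytes\n", MAXALIGN(header_with_token));
    return 0;
}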

>> That couldn't work without further HeapTupleSatisfiesDirty() logic.
>
> Why not?

I just meant that checking xmin == xmax isn't sufficient on its own,
while allowing that something like that could work with additional
effort (e.g. the use of an infomask bit)...

> Ok, so we can't unconditionally always ignore tuples with xmin==xmax. We
> would need an infomask bit to indicate super-deletion, and only ignore it if
> the bit is set.

...which is what you say here.
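
To make that concrete, a toy version of the check I have in mind
(FakeTuple and FAKE_SUPER_DELETED are made-up stand-ins; in the real
thing this would be a new infomask bit tested in
HeapTupleSatisfiesDirty()):

#include <stdbool.h>
#include <stdint.h>

#define FAKE_SUPER_DELETED  0x0001   /* stand-in for a new infomask bit */

typedef struct FakeTuple
{
    uint32_t xmin;
    uint32_t xmax;
    uint16_t infomask;
} FakeTuple;

/*
 * xmin == xmax is necessary but not sufficient: a tuple inserted and
 * then deleted by the same (ordinary) transaction also has xmin ==
 * xmax.  The explicit flag is what disambiguates super-deletion.
 */
bool
tuple_was_super_deleted(const FakeTuple *tup)
{
    return tup->xmin == tup->xmax &&
           (tup->infomask & FAKE_SUPER_DELETED) != 0;
}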

> I'm starting to think that we should bite the bullet and consume an infomask
> bit for this. The infomask bits are a scarce resource, but we should use
> them when it makes sense. It would be good for forensic purposes too, to
> leave a trace that a super-deletion happened.
>
>> I too think the tqual.c changes are ugly. I can't see a way around
>> using a token lock, though - I would only consider
>> refining the interface by which a waiter becomes aware that it must
>> wait on the outcome of the inserting xact's speculative
>> insertion/value lock (and in particular, that it should not wait on
>> xact outcome). We clearly need something to wait on that isn't an
>> XID...heavyweight locking of a token value is the obvious way of doing
>> that.
>
>
> Yeah.
>
>> Jim Nasby said something about setting the HEAP_XMIN_INVALID hint bit.
>> Maybe he is right...if that can be made to be reliable (always
>> WAL-logged), it could be marginally better than setting xmin to
>> InvalidTransactionId.
>
>
> I'm not a big fan of that. The xmin-invalid bit is currently always just a
> hint.

Well, Andres makes the point that that isn't quite so. TBH, I have a
hard time justifying the use of MOVED_IN|MOVED_OFF...it's not as if
there is a correctness problem with either setting xmin to
InvalidTransactionId, or setting the xmin-invalid hint bit (with
appropriate precautions so that it's not really just a hint). And as
was pointed out, there is something to be said for preserving tuple
header xmin where possible, for forensic reasons.

>> But only marginally; it seems like your
>> discomfort is basically inherent to playing these new games with
>> visibility, which is inherent to this design.
>
>
> No, I get that we're playing games with visibility. I want to find the best
> way to implement those games.

That's useful information. Thanks.

>> are you just worried about extra cycles in the
>> HeapTupleSatisfies* routines?
>
> Extra cycles yes, but even more importantly, code readability and
> maintainability.

Sure.

-- 
Peter Geoghegan


