Thread: new patch of MERGE (merge_204) & a question about duplicated ctid

new patch of MERGE (merge_204) & a question about duplicated ctid

From: Boxuan Zhai

Dear Greg, 

I have updated the MERGE patch for two main problems.

1. Support of system attributes in MERGE action expressions.
    In the old version of MERGE, I used the top join as the only RTE in the var namespace during the transformation process in the parser. It contains all the common attributes of the source and target tables, but not the system attributes, such as oid.
    Thus, if the user specified system attributes in MERGE actions, an error popped up. In the current version, the var namespace is forced to be the target table RTE and the source table RTE, so system attributes are now correctly transformed.
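
    For illustration (hypothetical tables, not taken from the patch or its tests), this is the kind of statement that used to raise an error, because an action expression references the system attribute ctid:

MERGE INTO target t
 USING source s
 ON s.id = t.id
 WHEN MATCHED THEN UPDATE SET note = 'at ' || t.ctid::text
 WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.note);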

2. Fix the problem of dropped columns.
    In the planner, we need to map the Var nodes in MERGE actions to their corresponding TEs in the main plan's TL. I used to do this under the assumption that all the TEs are ordered by their attribute number (the value of the field "Var.varattno"). However, this is not the case when the table contains dropped attributes. A dropped attribute still takes an attribute number but does not appear in the TL.
     A new mapping algorithm is now in place, which avoids this bug.
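
    For example (hypothetical tables again), this is the shape of case the old assumption got wrong: after the DROP COLUMN, val still has attribute number 3 but is only the second entry in the TL:

CREATE TABLE target (id int, junk int, val int);
ALTER TABLE target DROP COLUMN junk;  -- attno 2 becomes a hole in the TL
CREATE TABLE source (id int, val int);

MERGE INTO target t
 USING source s
 ON s.id = t.id
 WHEN MATCHED THEN UPDATE SET val = s.val
 WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.val);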

Please help me review this new patch. Thank you!

PS:

In the new pgsql 9.1, I find that the junk attribute ctid is added twice for UPDATE and DELETE commands.

In the planner, the function preprocess_targetlist() invokes a subfunction expand_targetlist(), which adds the missing TEs for INSERT and UPDATE commands. For UPDATE and DELETE, it also creates a TE for ctid (a junk attr), which is used in the executor. This is the process in older pgsql versions.

However, in pgsql 9.1, while the above is kept, a new function in the rewriter, rewriteTargetListUD(), does the same thing. So a plain UPDATE/DELETE command ends up with two ctid junk attrs in its final TL.

Is there any particular reason for this duplicated ctid?

Normally, this does not cause much of a problem. However, in the MERGE command, the TL of MERGE actions should contain no junk attributes, so I added blocking code in both places to keep the TL of MERGE actions from being changed. I don't know whether this will cause any problems.
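
(For reference, the planner-added junk ctid can be seen with EXPLAIN VERBOSE on a plain UPDATE; a rough sketch on a hypothetical table, with costs omitted:

CREATE TABLE t (a int, b int);
EXPLAIN VERBOSE UPDATE t SET b = b + 1;
--  Update on public.t
--    ->  Seq Scan on public.t
--          Output: a, (b + 1), ctid

The question is why both the rewriter and the planner now add that entry in 9.1.)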

Regards,
Yours, Boxuan
Attachment

Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: David Fetter

On Sat, Dec 04, 2010 at 09:27:52PM +0800, Boxuan Zhai wrote:
> Dear Greg,
> 
> I have updated the MERGE patch for two main problems.

Please attach the actual patch :)

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Boxuan Zhai wrote:
> I have updated the MERGE patch for two main problems.

The patch inside the .tar.gz file you attached isn't right; that 
extracts to a tiny file of junk characters.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us




Fwd: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Boxuan Zhai


---------- Forwarded message ----------
From: Boxuan Zhai <bxzhai2010@gmail.com>
Date: Mon, Dec 6, 2010 at 9:17 PM
Subject: Re: new patch of MERGE (merge_204) & a question about duplicated ctid
To: Greg Smith <greg@2ndquadrant.com>




On Mon, Dec 6, 2010 at 2:12 AM, Greg Smith <greg@2ndquadrant.com> wrote:
Boxuan Zhai wrote:
I have updated the MERGE patch for two main problems.

The patch inside the .tar.gz file you attached isn't right; that extracts to a tiny file of junk characters.


Sorry for that. The original file is correct on my machine, but the gz file is broken.

I'm sending the original file directly this time.

Sorry again. 

Regards, 
 
--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us




Attachment

Re: Fwd: new patch of MERGE (merge_204) & a question about duplicated ctid

From: "Erik Rijkers"

On Mon, December 6, 2010 14:17, Boxuan Zhai wrote:

> I'm sending the original file directly this time.



I get some whitespace warnings, followed by an error:


$ git apply /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch
/home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:481: trailing whitespace.

/home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:482: trailing whitespace.
        if (IsA(plan, ModifyTable) &&
/home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:550: trailing whitespace.
                /*print the action qual*/
/home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:556: trailing whitespace.
                        (act_plan->operation == CMD_INSERT ||
/home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:560: trailing whitespace.

error: patch failed: src/backend/optimizer/plan/planner.c:739
error: src/backend/optimizer/plan/planner.c: patch does not apply




Erik Rijkers




Re: Fwd: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Erik Rijkers wrote:
> I get some whitespace warnings, followed by an error:
>
> $  git apply /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch
> /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:481: trailing
> whitespace.
>
> /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:482: trailing
> whitespace.
>         if (IsA(plan, ModifyTable) &&
> /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:550: trailing
> whitespace.
>                 /*print the action qual*/
> /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:556: trailing
> whitespace.
>                         (act_plan->operation == CMD_INSERT ||
> /home/rijkers/download/pgpatches/0091/merge/20101206/merge_204_2010DEC06.patch:560: trailing
> whitespace.
>
> error: patch failed: src/backend/optimizer/plan/planner.c:739
> error: src/backend/optimizer/plan/planner.c: patch does not apply

Maybe I'm doing something wrong, but I've never had good luck with git 
apply.  I took this patch and applied it to the 12/15 copy of HEAD I had 
checked out (trying to minimize drift in there since the patch was 
created) using:

patch -p 1 < merge_204_2010DEC06.patch

There was one trivial conflict, for which it produced 
src/backend/optimizer/plan/planner.c.rej, and the fix was 
straightforward to apply by hand.

The result is now sitting as the merge204 branch in my github repo: 
https://github.com/greg2ndQuadrant/postgres/tree/merge204 if you did 
want to try this out.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us




Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

I did some basic testing of the latest update here, but quickly hit a
problem that wasn't in the previous version.  Attached is the standalone
test script that used to work, but now fails like this:

psql:simple.sql:12: ERROR:  the vars in merge action tlist of qual
should only belongs to the source table or target table

This test case is intended to implement the common UPSERT situation that
is one of the main requests that MERGE is intended to satisfy, using
this syntax:

MERGE INTO Stock t
 USING (VALUES(10,100)) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

If you can suggest an alternate way to express this that works with the
new patch, I might switch to that and retry.  I was never 100% sure this
was the right way to write this, and I don't have another database with
MERGE support here to try against.  (Aside:  if someone else does, I'd
be really curious to see if the attached test case works or not on
another database system.  I think we need to include compatibility
testing with other MERGE implementations into the test mix here soon.)

Regardless, this failure suggests that you need to add this sort of test
to the regression test set.  We need to have an example of an UPSERT
using constant data in there to make sure this continues to work in the
future.

This is a good week for me in terms of having time for PostgreSQL
hacking, so if you can suggest something here or update the patch I'll
try it soon afterwards.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us


DROP TABLE Stock;
CREATE TABLE Stock(item_id int UNIQUE, balance int);
INSERT INTO Stock VALUES (10, 2200);
INSERT INTO Stock VALUES (20, 1900);
SELECT * FROM Stock ORDER BY item_id;

MERGE INTO Stock t
 USING (VALUES(10,100)) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

SELECT * FROM Stock ORDER BY item_id;

MERGE INTO Stock t
 USING (VALUES(30,2000)) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

SELECT * FROM Stock ORDER BY item_id;

Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Marko Tiikkaja

On 2010-12-29 2:14 PM, Greg Smith wrote:
> MERGE INTO Stock t
>   USING (VALUES(10,100)) AS s(item_id,balance)
>   ON s.item_id=t.item_id
>   WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
>   WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
>   ;
>
> If you can suggest an alternate way to express this that works with the
> new patch, I might switch to that and retry.  I was never 100% sure this
> was the right way to write this, and I don't have another database with
> MERGE support here to try against.

As far as I can tell, this should work.  I played around with the patch 
and the problem seems to be the VALUES:

MERGE INTO Stock t
 USING (SELECT 30, 2000) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

MERGE 1



Regards,
Marko Tiikkaja


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Marko Tiikkaja wrote:
> As far as I can tell, this should work.  I played around with the
> patch and the problem seems to be the VALUES:
>
> MERGE INTO Stock t
>  USING (SELECT 30, 2000) AS s(item_id,balance)
>  ON s.item_id=t.item_id
>  WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
>  WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
>  ;
> MERGE 1

Good catch...while I think the VALUES syntax should work, that is a
useful workaround so I could keep testing.  I rewrote like this
(original syntax commented out):

MERGE INTO Stock t
-- USING (VALUES(10,100)) AS s(item_id,balance)
 USING (SELECT 10,100) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

And that got me back again to concurrent testing.

Moving on to the next two problems...the basic MERGE feature seems to have
stepped backwards a bit too.  I'm now seeing these quite often:

ERROR:  duplicate key value violates unique constraint
"pgbench_accounts_pkey"
DETAIL:  Key (aid)=(176641) already exists.
STATEMENT:  MERGE INTO pgbench_accounts t USING (SELECT 176641,1+(176641
/ 1000000)::integer,168,'') AS s(aid,bid,balance,filler) ON s.aid=t.aid
WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT
MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);

On my concurrent pgbench test, which had been working before.  Possibly
causing that, the following assertion is tripping:

TRAP: FailedAssertion("!(epqstate->origslot != ((void *)0))", File:
"execMain.c", Line: 1762)

That's coming from the following code:

void
EvalPlanQualFetchRowMarks(EPQState *epqstate)
{
    ListCell   *l;

    Assert(epqstate->origslot != NULL);

    foreach(l, epqstate->rowMarks)


Stepping back to summarize...here's a list of issues I know about with
the current v204 code:

1) VALUES syntax doesn't work anymore
2) Assertion failure in EvalPlanQualFetchRowMarks
3) Duplicate key bug (possibly a direct result of #2)
4) Attempts to use MERGE in a function spit back "ERROR:  <table> is not
a known function" (see the sketch below)
5) The ctid junk attr handling needs to be reviewed more carefully,
based on author request.
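
Here's a minimal sketch of the kind of function that trips #4
(upsert_stock is just an illustrative name, not something from the
patch or my test scripts):

CREATE FUNCTION upsert_stock(k int, amt int) RETURNS void AS $$
BEGIN
    MERGE INTO Stock t
     USING (SELECT k, amt) AS s(item_id, balance)
     ON s.item_id = t.item_id
     WHEN MATCHED THEN UPDATE SET balance = t.balance + s.balance
     WHEN NOT MATCHED THEN INSERT VALUES (s.item_id, s.balance);
END;
$$ LANGUAGE plpgsql;

SELECT upsert_stock(10, 100);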

I've attached the current revisions of all my testing code in hopes that
Boxuan might try and replicate these (this makes it simple to replicate
#1 through #3), and therefore confirm whether changes made do better.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

DROP TABLE Stock;
CREATE TABLE Stock(item_id int UNIQUE, balance int);
INSERT INTO Stock VALUES (10, 2200);
INSERT INTO Stock VALUES (20, 1900);
SELECT * FROM Stock ORDER BY item_id;

MERGE INTO Stock t
-- USING (VALUES(10,100)) AS s(item_id,balance)
 USING (SELECT 10,100) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

SELECT * FROM Stock ORDER BY item_id;

MERGE INTO Stock t
-- USING (VALUES(30,2000)) AS s(item_id,balance)
 USING (SELECT 30,2000) AS s(item_id,balance)
 ON s.item_id=t.item_id
 WHEN MATCHED THEN UPDATE SET balance=t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES(s.item_id,s.balance)
 ;

SELECT * FROM Stock ORDER BY item_id;
\set nbranches :scale
\set ntellers 10 * :scale
\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts
\setrandom bid 1 :nbranches
\setrandom tid 1 :ntellers
\setrandom delta -5000 5000
MERGE INTO pgbench_accounts t USING (SELECT :aid,1+(:aid / 1000000)::integer,:delta,'') AS s(aid,bid,balance,filler) ON s.aid=t.aid WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);

-- This syntax worked with MERGE v203 patch, but isn't compatible with v204
--MERGE INTO pgbench_accounts t USING (VALUES (:aid,1+(:aid / 1000000)::integer,:delta,'')) AS s(aid,bid,balance,filler) ON s.aid=t.aid WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);

Attachment

Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Marko Tiikkaja

On 2010-12-30 4:39 AM +0200, Greg Smith wrote:
> And that got me back again to concurrent testing.
>
> Moving on to the next two problems...the basic MERGE feature seems to have
> stepped backwards a bit too.  I'm now seeing these quite often:
>
> ERROR:  duplicate key value violates unique constraint
> "pgbench_accounts_pkey"
> DETAIL:  Key (aid)=(176641) already exists.
> STATEMENT:  MERGE INTO pgbench_accounts t USING (SELECT 176641,1+(176641
> / 1000000)::integer,168,'') AS s(aid,bid,balance,filler) ON s.aid=t.aid
> WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT
> MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);
>
> On my concurrent pgbench test, which had been working before.

I have no idea why it worked in the past, but the patch was never 
designed to work for UPSERT.  This has been discussed in the past and 
some people thought that that's not a huge deal.


Regards,
Marko Tiikkaja


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Wed, Dec 29, 2010 at 9:45 PM, Marko Tiikkaja
<marko.tiikkaja@cs.helsinki.fi> wrote:
> I have no idea why it worked in the past, but the patch was never designed
> to work for UPSERT.  This has been discussed in the past and some people
> thought that that's not a huge deal.

I think it's expected to fail in some *concurrent* UPSERT cases.  It
should work if it's the only game in town.
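
To sketch the concurrent case I mean, reusing the Stock example from
upthread (assume item_id 30 isn't there yet):

-- Session A:
BEGIN;
MERGE INTO Stock t
 USING (SELECT 30, 2000) AS s(item_id, balance)
 ON s.item_id = t.item_id
 WHEN MATCHED THEN UPDATE SET balance = t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES (s.item_id, s.balance);

-- Session B runs the same MERGE before A commits: it also sees no
-- match and also takes the INSERT arm.  Once A commits, B fails with
-- a duplicate key violation on item_id.  Single-session, it's fine.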

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Marko Tiikkaja wrote:
> I have no idea why it worked in the past, but the patch was never 
> designed to work for UPSERT.  This has been discussed in the past and 
> some people thought that that's not a huge deal.

It takes an excessively large lock when doing UPSERT, which means its 
performance under a heavy concurrent load can't be good.  The idea is 
that if the syntax and general implementation issues can get sorted out, 
fixing the locking can be a separate performance improvement to be 
implemented later.  Using MERGE for UPSERT is the #1 use case for this 
feature by a gigantic margin.  If that doesn't do what's expected, the 
whole implementation doesn't provide the community anything really worth 
talking about.  That's why I keep hammering on this particular area in 
all my testing.

One of the reflexive "I can't switch to PostgreSQL easily" stopping 
points for MySQL users is "I can't convert my ON DUPLICATE KEY UPDATE 
code".  Every other use for MERGE is a helpful side-effect of adding the 
implementation in my mind, but not the primary driver of why this is 
important.  My hints in this direction before didn't get adopted, so I'm 
saying it outright now:  this patch must have an UPSERT implementation 
in its regression tests.  And the first thing I'm going to do every time 
a new rev comes in is try and break it with the pgbench test I 
attached.  If Boxuan can start doing that as part of his own testing, I 
think development here might start moving forward faster.  I don't care 
so much about the rate at which concurrent UPSERT-style MERGE happens, 
so long as it doesn't crash.  But that's where this has been stuck at 
for a while now.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Marko Tiikkaja

On 2010-12-30 9:02 AM +0200, Greg Smith wrote:
> Marko Tiikkaja wrote:
>> I have no idea why it worked in the past, but the patch was never
>> designed to work for UPSERT.  This has been discussed in the past and
>> some people thought that that's not a huge deal.
>
> It takes an excessively large lock when doing UPSERT, which means its
> performance under a heavy concurrent load can't be good.  The idea is
> that if the syntax and general implementation issues can get sorted out,
> fixing the locking can be a separate performance improvement to be
> implemented later.  Using MERGE for UPSERT is the #1 use case for this
> feature by a gigantic margin.  If that doesn't do what's expected, the
> whole implementation doesn't provide the community anything really worth
> talking about.  That's why I keep hammering on this particular area in
> all my testing.

I'm confused.  Are you saying that the patch is supposed to lock the 
table against concurrent INSERT/UPDATE/DELETE/MERGE?  Because I don't 
see it in the patch, and the symptoms you're having are a clear 
indication of the fact that it's not happening.  I also seem to recall 
that people thought locking the table would be excessive.


Regards,
Marko Tiikkaja


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Andrew Dunstan

On 12/30/2010 02:02 AM, Greg Smith wrote:
> Marko Tiikkaja wrote:
>> I have no idea why it worked in the past, but the patch was never 
>> designed to work for UPSERT.  This has been discussed in the past and 
>> some people thought that that's not a huge deal.
>
> It takes an excessively large lock when doing UPSERT, which means its 
> performance under a heavy concurrent load can't be good.  The idea is 
> that if the syntax and general implementation issues can get sorted 
> out, fixing the locking can be a separate performance improvement to 
> be implemented later.  Using MERGE for UPSERT is the #1 use case for 
> this feature by a gigantic margin.  If that doesn't do what's 
> expected, the whole implementation doesn't provide the community 
> anything really worth talking about.  That's why I keep hammering on 
> this particular area in all my testing.
>
> One of the reflexive "I can't switch to PostgreSQL easily" stopping 
> points for MySQL users is "I can't convert my ON DUPLICATE KEY UPDATE 
> code".  Every other use for MERGE is a helpful side-effect of adding 
> the implementation in my mind, but not the primary driver of why this 
> is important.  My hints in this direction before didn't get adopted, 
> so I'm saying it outright now:  this patch must have an UPSERT 
> implementation in its regression tests.  And the first thing I'm going 
> to do every time a new rev comes in is try and break it with the 
> pgbench test I attached.  If Boxuan can start doing that as part of 
> his own testing, I think development here might start moving forward 
> faster.  I don't care so much about the rate at which concurrent 
> UPSERT-style MERGE happens, so long as it doesn't crash.  But that's 
> where this has been stuck at for a while now.

I strongly agree. It *is* a huge deal.

cheers

andrew


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Marko Tiikkaja wrote:
> I'm confused.  Are you saying that the patch is supposed to lock the 
> table against concurrent INSERT/UPDATE/DELETE/MERGE?  Because I don't 
> see it in the patch, and the symptoms you're having are a clear 
> indication of the fact that it's not happening.  I also seem to recall 
> that people thought locking the table would be excessive.

That's exactly what it should be doing.  I thought I'd seen just that in 
one of the versions of this patch, but maybe that's a mistaken memory on 
my part.  In advance of the planned but not yet available ability to 
lock individual index key values, locking the whole table is the only 
possible implementation I'm aware of that can work correctly here.  In 
earlier versions, I think this code was running into issues before it 
even got to there.  If you're right that things like the duplicate key 
error in the current version are caused exclusively by not locking 
enough, that may be the next necessary step here.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Simon Riggs

On Mon, 2011-01-03 at 01:53 -0500, Greg Smith wrote:

> In advance of the planned but not yet available ability to 
> lock individual index key values, locking the whole table is the only 
> possible implementation I'm aware of that can work correctly here. 

This was discussed here
http://archives.postgresql.org/pgsql-hackers/2010-10/msg01903.php
with suggested resolutions for this release here
http://archives.postgresql.org/pgsql-hackers/2010-10/msg01907.php

In summary, that means we can either

1. Lock the table for ShareRowExclusiveLock

2. Throw a SERIALIZABLE error if we come up against a row that can be
neither MATCHED nor NOT MATCHED.

3. Bounce the patch to 9.2, commit early and then work on a full
concurrency solution before commit. The solution strawman is something
like EvalPlanQual with a new snapshot for each re-checked row, emulating
the pattern of snapshots/rechecks that would happen in a PL/pgSQL
version of an UPSERT.

Either way, we're saying that MERGE will not support concurrent
operations safely, in this release.

Given the continued lack of test cases for this patch, and the possible
embarrassment over not doing concurrent actions, do we think (3) is the
right road? 

-- 
 Simon Riggs           http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Heikki Linnakangas

On 03.01.2011 11:37, Simon Riggs wrote:
> On Mon, 2011-01-03 at 01:53 -0500, Greg Smith wrote:
>
>> In advance of the planned but not yet available ability to
>> lock individual index key values, locking the whole table is the only
>> possible implementation I'm aware of that can work correctly here.
>
> This was discussed here
> http://archives.postgresql.org/pgsql-hackers/2010-10/msg01903.php
> with suggested resolutions for this release here
> http://archives.postgresql.org/pgsql-hackers/2010-10/msg01907.php
>
> In summary, that means we can either
>
> 1. Lock the table for ShareRowExclusiveLock
>
> 2. Throw a SERIALIZABLE error if we come up against a row that can be
> neither MATCHED nor NOT MATCHED.
>
> 3. Bounce the patch to 9.2, commit early and then work on a full
> concurrency solution before commit. The solution strawman is something
> like EvalPlanQual with a new snapshot for each re-checked row, emulating
> the pattern of snapshots/rechecks that would happen in a PL/pgSQL
> version of an UPSERT.
>
> Either way, we're saying that MERGE will not support concurrent
> operations safely, in this release.
>
> Given the continued lack of test cases for this patch, and the possible
> embarrassment over not doing concurrent actions, do we think (3) is the
> right road?

This patch has never tried to implement concurrency-safe upsert. It 
implements the MERGE command as specified by the SQL standard, nothing 
more, nothing less. Let's not move the goalposts. Googling around, at 
least MS SQL Server's MERGE command is the same 
(http://weblogs.sqlteam.com/dang/archive/2009/01/31/UPSERT-Race-Condition-With-MERGE.aspx). 
There is nothing embarrassing about it, we just have to document it clearly.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Simon Riggs

On Mon, 2011-01-03 at 15:12 +0200, Heikki Linnakangas wrote:

> This patch has never tried to implement concurrency-safe upsert. It 
> implements the MERGE command as specified by the SQL standard, nothing 
> more, nothing less. Let's not move the goalposts. Googling around, at 
> least MS SQL Server's MERGE command is the same 
> (http://weblogs.sqlteam.com/dang/archive/2009/01/31/UPSERT-Race-Condition-With-MERGE.aspx). 
> There is nothing embarrassing about it, we just have to document it clearly.

That article says that SQLServer supplies a locking hint that completely
removes the issue. Because they use locking, they are able to update in
place, so there is no need for them to use snapshots.

Our version won't allow a workaround yet, just for the record.

-- 
 Simon Riggs           http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 8:35 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Mon, 2011-01-03 at 15:12 +0200, Heikki Linnakangas wrote:
>> This patch has never tried to implement concurrency-safe upsert. It
>> implements the MERGE command as specified by the SQL standard, nothing
>> more, nothing less. Let's not move the goalposts. Googling around, at
>> least MS SQL Server's MERGE command is the same
>> (http://weblogs.sqlteam.com/dang/archive/2009/01/31/UPSERT-Race-Condition-With-MERGE.aspx).
>> There is nothing embarrassing about it, we just have to document it clearly.
>
> That article says that SQLServer supplies a locking hint that completely
> removes the issue. Because they use locking, they are able to update in
> place, so there is no need for them to use snapshots.
>
> Our version won't allow a workaround yet, just for the record.

Like Heikki, I'd rather have the feature without a workaround for the
concurrency issues than no feature.  But I have to admit that the
discussion we've had thus far gives me very little confidence that
this code is anywhere close to bug-free.  So I think we're going to
end up punting it to 9.2 not so much because it's not concurrency-safe
as because it doesn't work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Stephen Frost

* Robert Haas (robertmhaas@gmail.com) wrote:
> Like Heikki, I'd rather have the feature without a workaround for the
> concurrency issues than no feature.

I'm still trying to figure out the problem with having the table-level
lock, unless we really think people will be doing concurrent MERGE's
where they won't overlap..?  I'm also a bit nervous about if the result
of concurrent MERGE's would actually be correct if we're not taking a
bigger lock than row-level (I assume we're taking row-level locks as it
goes through..).

In general, I also thought/expected to have some kind of UPSERT type
capability with our initial MERGE support, even if it requires a big
lock and won't operate concurrently, etc.

> But I have to admit that the
> discussion we've had thus far gives me very little confidence that
> this code is anywhere close to bug-free.  So I think we're going to
> end up punting it to 9.2 not so much because it's not concurrency-safe
> as because it doesn't work.

That's certainly a concern. :/

Stephen

Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Heikki Linnakangas

On 03.01.2011 17:56, Stephen Frost wrote:
> * Robert Haas (robertmhaas@gmail.com) wrote:
>> Like Heikki, I'd rather have the feature without a workaround for the
>> concurrency issues than no feature.
>
> I'm still trying to figure out the problem with having the table-level
> lock, unless we really think people will be doing concurrent MERGE's
> where they won't overlap..?  I'm also a bit nervous about if the result
> of concurrent MERGE's would actually be correct if we're not taking a
> bigger lock than row-level (I assume we're taking row-level locks as it
> goes through..).
>
> In general, I also thought/expected to have some kind of UPSERT type
> capability with our initial MERGE support, even if it requires a big
> lock and won't operate concurrently, etc.

You can of course LOCK TABLE as a work-around, if that's what you want.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 10:58 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> On 03.01.2011 17:56, Stephen Frost wrote:
>>
>> * Robert Haas (robertmhaas@gmail.com) wrote:
>>>
>>> Like Heikki, I'd rather have the feature without a workaround for the
>>> concurrency issues than no feature.
>>
>> I'm still trying to figure out the problem with having the table-level
>> lock, unless we really think people will be doing concurrent MERGE's
>> where they won't overlap..?  I'm also a bit nervous about if the result
>> of concurrent MERGE's would actually be correct if we're not taking a
>> bigger lock than row-level (I assume we're taking row-level locks as it
>> goes through..).
>>
>> In general, I also thought/expected to have some kind of UPSERT type
>> capability with our initial MERGE support, even if it requires a big
>> lock and won't operate concurrently, etc.
>
> You can of course LOCK TABLE as a work-around, if that's what you want.

That work-around completely fails to solve the concurrency problem.
Just because you have a lock on the table doesn't mean that there
aren't already tuples in the table which are invisible to your
snapshot (for example because the inserting transactions haven't
committed yet).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Andrew Dunstan

On 01/03/2011 10:58 AM, Heikki Linnakangas wrote:
>>
>> In general, I also thought/expected to have some kind of UPSERT type
>> capability with our initial MERGE support, even if it requires a big
>> lock and won't operate concurrently, etc.
>
>
> You can of course LOCK TABLE as a work-around, if that's what you want.

I think we need to state this in large red letters in the docs, if 
that's the requirement.

cheers

andrew


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Heikki Linnakangas

On 03.01.2011 18:02, Robert Haas wrote:
> On Mon, Jan 3, 2011 at 10:58 AM, Heikki Linnakangas
> <heikki.linnakangas@enterprisedb.com>  wrote:
>> On 03.01.2011 17:56, Stephen Frost wrote:
>>>
>>> * Robert Haas (robertmhaas@gmail.com) wrote:
>>>>
>>>> Like Heikki, I'd rather have the feature without a workaround for the
>>>> concurrency issues than no feature.
>>>
>>> I'm still trying to figure out the problem with having the table-level
>>> lock, unless we really think people will be doing concurrent MERGE's
>>> where they won't overlap..?  I'm also a bit nervous about if the result
>>> of concurrent MERGE's would actually be correct if we're not taking a
>>> bigger lock than row-level (I assume we're taking row-level locks as it
>>> goes through..).
>>>
>>> In general, I also thought/expected to have some kind of UPSERT type
>>> capability with our initial MERGE support, even if it requires a big
>>> lock and won't operate concurrently, etc.
>>
>> You can of course LOCK TABLE as a work-around, if that's what you want.
>
> That work-around completely fails to solve the concurrency problem.
> Just because you have a lock on the table doesn't mean that there
> aren't already tuples in the table which are invisible to your
> snapshot (for example because the inserting transactions haven't
> committed yet).

It works in read committed mode, because you acquire a new snapshot 
after the LOCK TABLE, and anyone else who modified the table must commit 
before the lock is granted. In serializable mode you get a serialization 
error.
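
That is, in read committed a sketch like this should be safe (reusing 
the Stock example from upthread):

BEGIN;
LOCK TABLE Stock IN SHARE ROW EXCLUSIVE MODE;  -- waits for concurrent writers
MERGE INTO Stock t                             -- snapshot taken after the lock
 USING (SELECT 30, 2000) AS s(item_id, balance)
 ON s.item_id = t.item_id
 WHEN MATCHED THEN UPDATE SET balance = t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES (s.item_id, s.balance);
COMMIT;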

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 11:08 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> On 03.01.2011 18:02, Robert Haas wrote:
>>
>> On Mon, Jan 3, 2011 at 10:58 AM, Heikki Linnakangas
>> <heikki.linnakangas@enterprisedb.com>  wrote:
>>>
>>> On 03.01.2011 17:56, Stephen Frost wrote:
>>>>
>>>> * Robert Haas (robertmhaas@gmail.com) wrote:
>>>>>
>>>>> Like Heikki, I'd rather have the feature without a workaround for the
>>>>> concurrency issues than no feature.
>>>>
>>>> I'm still trying to figure out the problem with having the table-level
>>>> lock, unless we really think people will be doing concurrent MERGE's
>>>> where they won't overlap..?  I'm also a bit nervous about if the result
>>>> of concurrent MERGE's would actually be correct if we're not taking a
>>>> bigger lock than row-level (I assume we're taking row-level locks as it
>>>> goes through..).
>>>>
>>>> In general, I also thought/expected to have some kind of UPSERT type
>>>> capability with our initial MERGE support, even if it requires a big
>>>> lock and won't operate concurrently, etc.
>>>
>>> You can of course LOCK TABLE as a work-around, if that's what you want.
>>
>> That work-around completely fails to solve the concurrency problem.
>> Just because you have a lock on the table doesn't mean that there
>> aren't already tuples in the table which are invisible to your
>> snapshot (for example because the inserting transactions haven't
>> committed yet).
>
> It works in read committed mode, because you acquire a new snapshot after
> the LOCK TABLE, and anyone else who modified the table must commit before
> the lock is granted.

Oh, I forgot we hold the ROW EXCLUSIVE lock until commit.  That might
be OK, then.

> In serializable mode you get a serialization error.

I don't think this part is true.  You can certainly do this:

CREATE TABLE test (a int);
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM test;
<in another session, insert (1) into test>
LOCK TABLE test IN SHARE MODE; -- or just LOCK TABLE test, if you prefer
SELECT * FROM test;  -- still ain't there
INSERT INTO test VALUES (1);

I don't see what would make MERGE immune to this.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Simon Riggs

On Mon, 2011-01-03 at 18:08 +0200, Heikki Linnakangas wrote:

> It works in read committed mode, because you acquire a new snapshot 
> after the LOCK TABLE, and anyone else who modified the table must commit 
> before the lock is granted. In serializable mode you get a serialization 
> error.

If it's not safe without this

LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE

then we should do that automatically, and document that.

-- 
 Simon Riggs           http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Heikki Linnakangas

On 03.01.2011 18:29, Simon Riggs wrote:
> On Mon, 2011-01-03 at 18:08 +0200, Heikki Linnakangas wrote:
>
>> It works in read committed mode, because you acquire a new snapshot
>> after the LOCK TABLE, and anyone else who modified the table must commit
>> before the lock is granted. In serializable mode you get a serialization
>> error.
>
> If it's not safe without this
>
> LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE
>
> then we should do that automatically, and document that.

No we should not. The SQL standard doesn't require that, and it would 
unnecessarily restrict concurrent updates on unrelated rows in the table.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Florian Pflug

On Jan 3, 2011, at 17:21, Robert Haas wrote:
> On Mon, Jan 3, 2011 at 11:08 AM, Heikki Linnakangas
>> In serializable mode you get a serialization error.
> 
> I don't think this part is true.  You can certainly do this:
> 
> CREATE TABLE test (a int);
> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> SELECT * FROM test;
> <in another session, insert (1) into test>
> LOCK TABLE test IN SHARE MODE; -- or just LOCK TABLE test, if you prefer
> SELECT * FROM test;  -- still ain't there
> INSERT INTO test VALUES (1);

In SERIALIZABLE mode, you need to take any table-level locks before obtaining
a snapshot. There's even a warning about this in the docs somewhere IIRC...
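
So Robert's example becomes safe if the lock is taken first, something like:

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE test IN SHARE MODE;  -- acquired before any snapshot exists
SELECT * FROM test;             -- first query takes the snapshot here, so
                                -- the concurrently committed row is visible
INSERT INTO test VALUES (1);
COMMIT;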

best regards,
Florian Pflug



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Simon Riggs

On Mon, 2011-01-03 at 18:35 +0200, Heikki Linnakangas wrote:
> On 03.01.2011 18:29, Simon Riggs wrote:
> > On Mon, 2011-01-03 at 18:08 +0200, Heikki Linnakangas wrote:
> >
> >> It works in read committed mode, because you acquire a new snapshot
> >> after the LOCK TABLE, and anyone else who modified the table must commit
> >> before the lock is granted. In serializable mode you get a serialization
> >> error.
> >
> > If it's not safe without this
> >
> > LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE
> >
> > then we should do that automatically, and document that.
> 
> No we should not. The SQL standard doesn't require that, and it would 
> unnecessarily restrict concurrent updates on unrelated rows in the table.

If we do that, then we definitely need a catch-all WHEN statement, so
that we can say

WHEN NOT MATCHED
   INSERT
WHEN MATCHED
   UPDATE
ELSE
   { INSERT into another table so we can try again in a minute
  or RAISE error }

Otherwise we will silently drop rows. Throwing an error every time isn't
useful behaviour.

Of course, that then breaks the standard, just as all existing
implementations do.

-- 
 Simon Riggs           http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Heikki Linnakangas

On 03.01.2011 18:49, Simon Riggs wrote:
> On Mon, 2011-01-03 at 18:35 +0200, Heikki Linnakangas wrote:
>> On 03.01.2011 18:29, Simon Riggs wrote:
>>> On Mon, 2011-01-03 at 18:08 +0200, Heikki Linnakangas wrote:
>>>
>>>> It works in read committed mode, because you acquire a new snapshot
>>>> after the LOCK TABLE, and anyone else who modified the table must commit
>>>> before the lock is granted. In serializable mode you get a serialization
>>>> error.
>>>
>>> If it's not safe without this
>>>
>>> LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE
>>>
>>> then we should do that automatically, and document that.
>>
>> No we should not. The SQL standard doesn't require that, and it would
>> unnecessarily restrict concurrent updates on unrelated rows in the table.
>
> If we do that, then we definitely need a catch-all WHEN statement, so
> that we can say
>
> WHEN NOT MATCHED
>    INSERT
> WHEN MATCHED
>    UPDATE
> ELSE
>    { INSERT into another table so we can try again in a minute
>   or RAISE error }
>
> Otherwise we will silently drop rows. Throwing an error every time isn't
> useful behaviour.

An ELSE clause would be nice, but it's not related to the question at 
hand. Only some serialization anomalies result in a row that matches 
neither WHEN MATCHED nor WHEN NOT MATCHED. Others result in a duplicate 
key exception, for example.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 11:36 AM, Florian Pflug <fgp@phlo.org> wrote:
> On Jan 3, 2011, at 17:21, Robert Haas wrote:
>> On Mon, Jan 3, 2011 at 11:08 AM, Heikki Linnakangas
>>> In serializable mode you get a serialization error.
>>
>> I don't think this part is true.  You can certainly do this:
>>
>> CREATE TABLE test (a int);
>> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>> SELECT * FROM test;
>> <in another session, insert (1) into test>
>> LOCK TABLE test IN SHARE MODE; -- or just LOCK TABLE test, if you prefer
>> SELECT * FROM test;  -- still ain't there
>> INSERT INTO test VALUES (1);
>
> In SERIALIZABLE mode, you need to take any table-level locks before obtaining
> a snapshot. There's even a warning about this in the docs somewhere IIRC...

That should be safe, if people do it that way.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 12:01 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
>> If we do that, then we definitely need a catch-all WHEN statement, so
>> that we can say
>>
>> WHEN NOT MATCHED
>>   INSERT
>> WHEN MATCHED
>>   UPDATE
>> ELSE
>>   { INSERT into another table so we can try again in a minute
>>  or RAISE error }
>>
>> Otherwise we will silently drop rows. Throwing an error every time isn't
>> useful behaviour.
>
> An ELSE clause would be nice, but it's not related to the question at hand.
> Only some serialization anomalies result in a row that matches neither
> WHEN MATCHED nor WHEN NOT MATCHED. Others result in a duplicate key
> exception, for example.

I must be missing something.  A source row is either matched or not
matched.  ELSE wouldn't exist because the writers of the spec thought
there might be some third matched-invisible-row case, but rather
because you might have written WHEN [NOT MATCHED] AND <some qual>, and
you might fall through a list of all such clauses.

I think we're focusing on the wrong problem here.  MERGE creates some
syntax to let you do with SQL something that people are currently
doing with application-side logic.  I've written the application-side
logic to do this kind of thing more times than I care to remember, and
yeah there are concurrency issues, but:

- sometimes there's only one writer, so it doesn't matter
- sometimes there can technically be more than one writer, but the
usage is so low that nothing actually breaks
- sometimes you know that a given writer will only operate on rows
with customer_id = <some constant>; so you only need to prevent two
concurrent writers *for the same customer*, not any two concurrent
writers
- and sometimes you need a full table lock.

The third case, in particular, is quite common in my experience, and a
very good reason not to impose a full table lock.  Users hate having
to do explicit locking (especially users whose names rhyme with Bevin
Bittner) but they hate *unnecessary* full-table locks even more.  A
problem that you can fix by adding a LOCK TABLE statement is annoying;
a problem that you can fix only by removing an implicit lock table
operation that the system performs under the hood is a lot worse.  In
the fourth case above, which IME is quite common, you could EITHER
take a full-table lock, if that performs OK, OR you could arrange to
take an advisory lock that protects the records for the particular
customer whose data you want to update.  If we always take a
full-table lock, then the user loses the option to do something else.
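
For the per-customer case, what I mean is something like this sketch
(assuming the transaction-scoped advisory locks new in 9.1; table and
key names are made up):

BEGIN;
-- serialize writers for one customer only; 42 is an arbitrary,
-- application-chosen lock class
SELECT pg_advisory_xact_lock(42, :customer_id);
MERGE INTO orders t
 USING staged_orders s
 ON s.order_id = t.order_id
 WHEN MATCHED THEN UPDATE SET amount = s.amount
 WHEN NOT MATCHED THEN INSERT VALUES (s.order_id, s.customer_id, s.amount);
COMMIT;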

The point we ought to be focusing on is that the patch doesn't work.
Unless someone is prepared to put in some time to fix THAT, the rest
of this discussion is academic.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Simon Riggs

On Mon, 2011-01-03 at 19:01 +0200, Heikki Linnakangas wrote:

> > If we do that, then we definitely need a catch-all WHEN statement, so
> > that we can say
> >
> > WHEN NOT MATCHED
> >    INSERT
> > WHEN MATCHED
> >    UPDATE
> > ELSE
> >    { INSERT into another table so we can try again in a minute
> >   or RAISE error }
> >
> > Otherwise we will silently drop rows. Throwing an error every time isn't
> > useful behaviour.
> 
> An ELSE clause would be nice, but it's not related to the question at 
> hand. Only some serialization anomalies result in a row that matches 
> neither WHEN MATCHED nor WHEN NOT MATCHED. 

Concurrent UPDATEs, DELETEs, MERGE

> Others result in a duplicate 
> key exception, for example.

Concurrent INSERTs, MERGE

So an ELSE clause is very relevant to handling anomalies in a useful
way.

-- 
 Simon Riggs           http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: "Kevin Grittner"

Robert Haas <robertmhaas@gmail.com> wrote:
> Users hate having to do explicit locking (especially users whose
> names rhyme with Bevin Bittner)
:-)

Before you decide to taunt me again, I guess I should point out a
few things here.

Should SSI and MERGE both make it into 9.1, there's every reason to
believe that running just about any DML, including MERGE, at
REPEATABLE READ would generate the same behavior which running at
REPEATABLE READ or SERIALIZABLE does now.  If MERGE is susceptible
to such anomalies as testing for the presence of a row and then
trying to UPDATE it if found, only to update zero rows because it
was concurrently deleted, SERIALIZABLE would prevent that with a
serialization failure.  I'd kind of expect that already with a
write-write conflict, but if that isn't happening, the SSI patch
should help.  Well, help prevent anomalies -- if you want it to work
out how to continue without rolling back it won't help at all.

The fact that SSI introduces predicate locking may ultimately allow
MERGE to do something more clever in terms of UPSERT logic, but I
*REALLY* think it's too late in the release cycle to start looking
at that.  Predicate locking for SSI was done exactly as was most
useful for SSI, on the basis (generally popular on this list) that
trying to generalize something with only one use case is doomed to
failure.  Trying to bend it in an additional direction this late in
the release would pretty much ensure that neither MERGE nor SSI
could make it in.

On the other hand, if we put SSI in with predicate locking more or
less as it is, and put MERGE in with more primitive concurrency
control, I fully expect that someone could figure out how to tease
apart SSI and its predicate locking during the next release cycle,
so that the predicate locking was more generally useful.

-Kevin


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 1:18 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Robert Haas <robertmhaas@gmail.com> wrote:
>
>> Users hate having to do explicit locking (especially users whose
>> names rhyme with Bevin Bittner)
>
> :-)
>
> Before you decide to taunt me again, I guess I should point out a
> few things here.

Sorry, that was intended as good-natured humor, not taunting.  I think
that the work you are doing on the serializability stuff is *exactly*
the right fix for the concurrency issues associated with MERGE.
Coming up with a fix that is specific to MERGE doesn't impress me
much.  I don't believe that hacking up MERGE will lead to anything
other than an ugly mess; it's just a syntax wrapper around an
operation that's fundamentally not too easy to make concurrent.  SSI
will handle it, though, along with, well, all the other cases that are
worth worrying about.  I don't have quite as much of an allergy to
explicit locking as you do, but I'm quite clear that it isn't nearly
as good as "it just works".

> Should SSI and MERGE both make it into 9.1, [...]

So far the thread on large patches has led to a status report from
most of the people working on large patches, and no volunteers to take
the lead on reviewing/committing any of them.  Although I think both
of those patches are worthwhile, and although I intend to spend a
very, very large amount of time doing CF work in the next 43 days, I
don't foresee committing either of them, and I probably will not have
time for a detailed review of either one, either.  I feel pretty bad
about that, but I just don't have any more bandwidth.  :-(

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: "Kevin Grittner"

Robert Haas <robertmhaas@gmail.com> wrote:
> Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:
>> Before you decide to taunt me again, I guess I should point out a
>> few things here.
> 
> Sorry, that was intended as good-natured humor, not taunting.

Oh, I took it that way.  I guess my attempt at humor through an
oblique reference to a line from "Monty Python and the Holy Grail"
fell flat.  :-/  I guess I should have said "before you taunt me a
second time" to make it more readily recognizable...

> I think that the work you are doing on the serializability stuff
> is *exactly* the right fix for the concurrency issues associated
> with MERGE.

It's got a nice consistency with current behavior, with reads never
blocking or being blocked, but I can see why people would want a
MERGE which could dance around the concurrency problems and always
succeed with UPSERT behavior.

Various topics have come up which seem like they might benefit from
predicate locking.  I don't know how many would need locks which
introduce blocking.  I think it will actually be very easy to adapt
the predicate locking for such things as transactional cache
invalidation (which is what drew the interest of the MIT folks).
I'm not sure how much work it would be to adapt it to use for the
type of blocking locks which seem to be needed based on some of the
MERGE discussions I've read.  I think it will be non-trivial but
possible.

-Kevin


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Robert Haas

On Mon, Jan 3, 2011 at 2:01 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Robert Haas <robertmhaas@gmail.com> wrote:
>> Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:
>
>>> Before you decide to taunt me again, I guess I should point out a
>>> few things here.
>>
>> Sorry, that was intended as good-natured humor, not taunting.
>
> Oh, I took it that way.  I guess my attempt at humor through an
> oblique reference to a line from "Monty Python and the Holy Grail"
> fell flat.  :-/  I guess I should have said "before you taunt me a
> second time" to make it more readily recognizable...

Ah!  I missed that.  I have actually seen that movie, but it's been,
well... OK, I feel old now.

>> I think that the work you are doing on the serializability stuff
>> is *exactly* the right fix for the concurrency issues associated
>> with MERGE.
>
> It's got a nice consistency with current behavior, with reads never
> blocking or being blocked, but I can see why people would want a
> MERGE which could dance around the concurrency problems and always
> succeed with UPSERT behavior.

I think the right thing to do about wanting UPSERT is to implement
UPSERT, though personally I prefer the name REPLACE from my long-ago
days as a MySQL user.  It may be easier to solve a special case of the
concurrency problem than to solve it in its full generality (and
fixing MERGE is pretty close to "solving it in its full generality").
And even if it isn't, the MERGE syntax is insane if what you really
want to do is insert or update ONE record.  If all we have is MERGE,
people will keep doing it with a PL/pgsql stored procedure or some
crummy application logic just so that they don't have to spend several
days trying to understand the syntax.  Heck, I understand the syntax
(or I think I do) and I still think it's more trouble than it's worth.
There is certainly a use case for an F-15 fighter jet but sometimes
what you really want is a model rocket and a small bottle of silver
paint.
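
To make the contrast concrete (the first statement is hypothetical
syntax, purely to show the shape; the MERGE form is the one used
upthread):

-- what you really want to write for one row:
-- UPSERT INTO Stock (item_id, balance) VALUES (10, 100);

-- what MERGE requires for the same thing:
MERGE INTO Stock t
 USING (SELECT 10, 100) AS s(item_id, balance)
 ON s.item_id = t.item_id
 WHEN MATCHED THEN UPDATE SET balance = t.balance + s.balance
 WHEN NOT MATCHED THEN INSERT VALUES (s.item_id, s.balance);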

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From: Greg Smith

Heikki Linnakangas wrote:
> You can of course LOCK TABLE as a work-around, if that's what you want.

What I was trying to suggest upthread is that while there are other
possible ways around this problem, the only one that has any hope of
shipping with 9.1 is to do just that.  So from my perspective, the rest
of the discussion about the right way to proceed is moot for now.

For some reason it didn't hit me until you said this that I could do the
locking manually in my test case, without even touching the server-side
code yet.  Attached are a new pair of scripts where each pgbench UPDATE
statement executes an explicit LOCK TABLE.  Here's the result of a
sample run here:

$ pgbench -f update-merge.sql -T 60 -c 16 -j 4 -s 2 pgbench
starting vacuum...end.
transaction type: Custom query
scaling factor: 2
query mode: simple
number of clients: 16
number of threads: 4
duration: 60 s
number of transactions actually processed: 84375
tps = 1405.953672 (including connections establishing)
tps = 1406.137456 (excluding connections establishing)
$ psql -c 'select count(*) as updated FROM pgbench_accounts WHERE NOT
abalance=0' -d pgbench
 updated
---------
   68897
(1 row)

$ psql -c 'select count(*) as inserted FROM pgbench_accounts WHERE aid >
100000' -d pgbench
 inserted
----------
    34497
(1 row)

No assertion crashes, no duplicate key failures.  All the weird stuff I
was running into is gone, so decent evidence the worst of the problems
were all because the heavy lock I expecting just wasn't integrated into
the patch.  Congratulations to Boxuan:  for the first time this is
starting to act like a viable feature addition to me, just one with a
moderately long list of limitations and performance issues.

1400 TPS worth of UPSERT on my modest 8-core desktop (single drive with
cheating fsync) isn't uselessly slow.  If I add "SET TRANSACTION
ISOLATION LEVEL SERIALIZABLE;" just after the BEGIN;, I don't see any
serialization errors, and performance is exactly the same.

Run a straight UPDATE over only the existing range of keys, and I get
7000 TPS instead.  So the locking etc. is reducing performance to 20% of
its normal rate, on this assertion+debug build.  I can run this tomorrow
(err, later today I guess looking at the time) on a proper system with
BBWC and without assertions to see if the magnitude of the difference
changes, but I don't think that's the main issue here.
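
The straight UPDATE baseline was a pgbench script along these lines (a
sketch; the exact file isn't attached):

\setrandom aid 1 100000    -- stay within the keys created at initialization
\setrandom delta -5000 5000
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
COMMIT;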

Presuming the code quality issues and other little quirks I've
documented (and new ones yet to be discovered) can get resolved here,
and that's a sizeable open question, I could see shipping this with the
automatic heavy LOCK TABLE in there.  Then simple UPSERT could work out
of the box via a straightforward MERGE.  We'd need a big warning
disclaiming that concurrent performance is very limited in this first
release of the feature, but I don't know that this is unacceptably 
slow for smaller web apps and such.

Until proper fine-grained concurrency is implemented, I think it would
be PR suicide to release a version of this without a full table lock
happening automatically though.  The idea Robert advocated well, that it
would be possible for advanced users to use even this rough feature in a
smarter way to avoid conflicts and not suffer the full performance
penalty, is true.  But if you consider the main purpose here to be
making it easier to get smaller MySQL apps and the like ported to
PostgreSQL (which is what I see as goal #1), putting that burden on the
user is just going to reinforce the old "PostgreSQL is so much harder
than MySQL" stereotype.  I'd much prefer to see everyone have a slow but
simple to use UPSERT via MERGE available initially, rather than to worry
about optimizing for the advanced user in a way that makes life harder
for the newbies.  The sort of people who must have optimal performance
already have trigger functions available to them, that they can write
and tweak for best performance.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

\set nbranches :scale
\set ntellers 10 * :scale
\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts
\setrandom bid 1 :nbranches
\setrandom tid 1 :ntellers
\setrandom delta -5000 5000
BEGIN;

-- Optional mode change
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

LOCK TABLE pgbench_accounts;
MERGE INTO pgbench_accounts t USING (SELECT :aid,1+(:aid / 1000000)::integer,:delta,'') AS s(aid,bid,balance,filler) ON s.aid=t.aid WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);
COMMIT;

-- This syntax worked with the MERGE v203 patch, but isn't compatible with v204:
--MERGE INTO pgbench_accounts t USING (VALUES (:aid,1+(:aid / 1000000)::integer,:delta,'')) AS s(aid,bid,balance,filler) ON s.aid=t.aid WHEN MATCHED THEN UPDATE SET abalance=abalance + s.balance WHEN NOT MATCHED THEN INSERT VALUES(s.aid,s.bid,s.balance,s.filler);

Attachment

Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
Greg Smith
Date:
Robert Haas wrote:
> And even if it isn't, the MERGE syntax is insane if what you really
> want to do is insert or update ONE record.  If all we have is MERGE,
> people will keep doing it with a PL/pgsql stored procedure or some
> crummy application logic just so that they don't have to spend several
> days trying to understand the syntax.  Heck, I understand the syntax
> (or I think I do) and I still think it's more trouble than its worth

I hoped that the manual would have a clear example of "this is how you 
do UPSERT with MERGE", preferably cross-linked to the existing "Example 
39-2. Exceptions with UPDATE/INSERT" trigger implementation that's been 
the reference implementation for this for a long time, so people can see 
both alternatives.  New users will cut and paste that example into their 
code, and in the beginning neither know nor care how MERGE actually 
works, so long as the example does what it claims.  I would wager the 
majority of PL/pgsql implementations of this requirement start the exact 
same way.  I don't think the learning curve there is really smaller; 
it's just that you've already been through it.
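
For reference, that documentation example is essentially this retry loop
(reproduced from memory, so treat it as a sketch rather than the exact
text of the manual):

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update the key.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert the key.  If someone else
        -- inserts the same key concurrently, we get a unique-key
        -- failure and loop back to retry the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;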

I've been purposefully ignoring the larger applications of MERGE in the 
interest of keeping focus on a manageable subset.  But the more general 
feature set is in fact enormously useful for some types of data 
warehouse applications.  Build REPLACE, and you've built REPLACE.  Build 
MERGE that is REPLACE now and eventually full high-performance MERGE, 
and you've done something with a much brighter future.  I don't think 
the concurrency hurdles here are unique to this feature either, as shown 
by the regular overlap noted with the other serialization work. 

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
David Fetter
Date:
On Tue, Jan 04, 2011 at 04:44:32AM -0500, Greg Smith wrote:
> Heikki Linnakangas wrote:
> >You can of course LOCK TABLE as a work-around, if that's what you want.
> 
> Presuming the code quality issues and other little quirks I've
> documented (and new ones yet to be discovered) can get resolved
> here, and that's a sizeable open question, I could see shipping this
> with the automatic heavy LOCK TABLE in there.  Then simple UPSERT
> could work out of the box via a straightforward MERGE.

How about implementing an UPSERT command as "take the lock, do the
merge?"  That way, we'd have both the simplicity for the simpler cases
and a way to relax consistency guarantees for those who would like to
do so.
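
In other words, UPSERT would just be sugar for roughly this, reusing
Greg's pgbench example (a sketch; the lock mode the command would take
implicitly is a placeholder):

BEGIN;
LOCK TABLE pgbench_accounts;  -- taken by the command itself
MERGE INTO pgbench_accounts t
    USING (SELECT :aid, :bid, :delta, '') AS s(aid, bid, balance, filler)
    ON s.aid = t.aid
    WHEN MATCHED THEN UPDATE SET abalance = abalance + s.balance
    WHEN NOT MATCHED THEN INSERT VALUES (s.aid, s.bid, s.balance, s.filler);
COMMIT;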

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
Marko Tiikkaja
Date:
On 2011-01-04 6:27 PM, David Fetter wrote:
> On Tue, Jan 04, 2011 at 04:44:32AM -0500, Greg Smith wrote:
>> Heikki Linnakangas wrote:
>>> You can of course LOCK TABLE as a work-around, if that's what you want.
>>
>> Presuming the code quality issues and other little quirks I've
>> documented (and new ones yet to be discovered) can get resolved
>> here, and that's a sizeable open question, I could see shipping this
>> with the automatic heavy LOCK TABLE in there.  Then simple UPSERT
>> could work out of the box via a straightforward MERGE.
>
> How about implementing an UPSERT command as "take the lock, do the
> merge?"  That way, we'd have both the simplicity for the simpler cases
> and a way to relax consistency guarantees for those who would like to
> do so.

That, unfortunately, won't work so well in REPEATABLE READ :-(  But I, 
too, am starting to think that we should have a separate, optimized 
command to do UPSERT/INSERT .. IGNORE efficiently and correctly while 
making MERGE's correctness the user's responsibility.  Preferably with 
huge warning signs on the documentation page.
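
To spell out why, here's a sketch against a hypothetical table
t(k int primary key, v int), with a second session slipping in between
the first query and the LOCK TABLE:

-- session 1
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;       -- the transaction snapshot is taken here

-- session 2, meanwhile
INSERT INTO t VALUES (1, 0);  -- commits immediately

-- session 1, continued
LOCK TABLE t;                 -- acquired too late; the snapshot already
                              -- predates session 2's insert
MERGE INTO t
    USING (SELECT 1, 1) AS s(k, v) ON s.k = t.k
    WHEN MATCHED THEN UPDATE SET v = s.v
    WHEN NOT MATCHED THEN INSERT VALUES (s.k, s.v);
                              -- sees no row with k = 1, takes the NOT
                              -- MATCHED path, and fails on the primary key
COMMIT;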


Regards,
Marko Tiikkaja


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
David Fetter
Date:
On Tue, Jan 04, 2011 at 07:02:54PM +0200, Marko Tiikkaja wrote:
> On 2011-01-04 6:27 PM, David Fetter wrote:
> >On Tue, Jan 04, 2011 at 04:44:32AM -0500, Greg Smith wrote:
> >>Heikki Linnakangas wrote:
> >>>You can of course LOCK TABLE as a work-around, if that's what you want.
> >>
> >>Presuming the code quality issues and other little quirks I've
> >>documented (and new ones yet to be discovered) can get resolved
> >>here, and that's a sizeable open question, I could see shipping this
> >>with the automatic heavy LOCK TABLE in there.  Then simple UPSERT
> >>could work out of the box via a straightforward MERGE.
> >
> >How about implementing an UPSERT command as "take the lock, do the
> >merge?"  That way, we'd have both the simplicity for the simpler cases
> >and a way to relax consistency guarantees for those who would like to
> >do so.
> 
> That, unfortunately, won't work so well in REPEATABLE READ :-(

There are caveats all over READ COMMITTED/REPEATABLE READ/SNAPSHOT.
The only really intuitively obvious behavior is SERIALIZABLE, which
we'll have available in 9.1. :)

> But I, too, am starting to think that we should have a separate,
> optimized command to do UPSERT/INSERT .. IGNORE efficiently and
> correctly while making MERGE's correctness the user's
> responsibility.  Preferably with huge warning signs on the
> documentation page.

+1 for the HUGE WARNING SIGNS :)

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
Greg Smith
Date:
David Fetter wrote:
> How about implementing an UPSERT command as "take the lock, do the
> merge?"  That way, we'd have both the simplicity for the simpler cases
> and a way to relax consistency guarantees for those who would like to
> do so.
>   

The main argument against is that this path leads to a permanent non-standard 
wart to support forever, just to work around what should be a short-term 
problem.  And I'm not sure whether reducing the goals to only this 
actually improves the ability to ship something in the near term all that 
much.  Many of the hard problems people are bothered by don't go away; 
it just becomes more obvious that you have to decide which side of the 
speed/complexity trade-off you're more interested in.  What I've been 
advocating is making that decision go away altogether by only worrying 
about the simple-to-use but slow path for now, but that's a highly 
debatable viewpoint, and I can appreciate the resistance to it, if it's 
possible to do at all.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services and Support        www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
David Fetter
Date:
On Tue, Jan 04, 2011 at 09:27:10PM -0500, Greg Smith wrote:
> David Fetter wrote:
> >How about implementing an UPSERT command as "take the lock, do the
> >merge?"  That way, we'd have both the simplicity for the simpler cases
> >and a way to relax consistency guarantees for those who would like to
> >do so.
> 
> Main argument against is that path leads to a permanent non-standard
> wart to support forever, just to work around what should be a
> short-term problem.  And I'm not sure whether reducing the goals to
> only this actually improves the ability to ship something in the
> near term too much.

I think I haven't communicated clearly what I'm suggesting, which is
that we ship with both an UPSERT and a MERGE, the former being ugly,
crude and simple, and the latter festooned with dire warnings about
isolation levels and locking.

If shipping with a "wart," as you term it, isn't acceptable, then I'd
advocate for going with just MERGE and documenting it inside and out,
including one or more clearly written UPSERT and/or REPLACE INTO
recipes.

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Re: Re: new patch of MERGE (merge_204) & a question about duplicated ctid

From
Greg Smith
Date:
David Fetter wrote:
> I think I haven't communicated clearly what I'm suggesting, which is
> that we ship with both an UPSERT and a MERGE, the former being ugly,
> crude and simple, and the latter festooned with dire warnings about
> isolation levels and locking.
>   

I don't know that I completely agree with the idea that the UPSERT 
version should always be crude and the MERGE one necessarily heavy from 
a performance perspective.  However, I do feel you raise a legitimate 
point that once the hard stuff is done, and the locking issues around 
proper SQL MERGE sorted out, having an UPSERT/REPLACE synonym that maps to 
it under the hood may be a useful way to provide a better user 
experience.  The way I see this, the right goal is to first provide the 
full spec behavior with good performance, and get all that plumbing 
right.  There's nothing that says we can't provide an easier syntax than 
the admittedly complicated MERGE one to the users though.  (I am tempted 
to make a joke about how you could probably ...)

So, as for this patch...there are about half a dozen significant open 
issues with the current code, along with a smattering of smaller ones.  
The problems remaining are deep enough that I think it would be 
challenging to work through them for this CF under the best conditions.  
And given that we haven't heard from Boxuan since early December, we're 
definitely understaffed to tackle major revisions.

My hope was that we'd get an updated patch from him before the CF 
deadline.  Even without it, we might have gotten some more internal help 
here.  Given my focus on checkpoint issues, Simon on Sync Rep, and Dimitri 
still on his Extensions push, that second part isn't going to happen.

I am marking MERGE officially "Returned With Feedback" for 9.1.  Lots of 
progress made, much better community understanding of the unresolved 
issues now than when we started, but not in shape to go into this 
release.  Since we still have some significant interest here in getting 
this finished, I'll see if I can put together a better game-plan for 
how to get this actually done for 9.2, in time to discuss at release 
planning.  The main thing that's become much more obvious to me 
just recently is how the remaining issues here relate to the true 
serialization work, so worrying about that first is probably the right 
order anyway.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us