Re: Implementing Incremental View Maintenance - Mailing list pgsql-hackers

From Konstantin Knizhnik
Subject Re: Implementing Incremental View Maintenance
Msg-id e1466a5b-e30a-52c5-39c3-83f002e054ba@postgrespro.ru
In response to Re: Implementing Incremental View Maintenance  (Yugo NAGATA <nagata@sraoss.co.jp>)
List pgsql-hackers

On 25.11.2020 16:06, Yugo NAGATA wrote:
> On Wed, 25 Nov 2020 15:16:05 +0300
> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
>
>>
>> On 24.11.2020 13:11, Yugo NAGATA wrote:
>>>> I wonder if it is possible to somehow use the predicate locking mechanism of
>>>> Postgres to avoid these anomalies without a global lock?
>>> You mean that, instead of using any table lock, if any possibility of the
>>> anomaly is detected using the predlock mechanism, then abort the transaction?
>> Yes. If both transactions use the serializable isolation level, then the
>> lock is not needed, is it?
>> So at least you can add yet another simple optimization: if the transaction
>> has serializable isolation level, then the exclusive lock is not required.
> As long as we use the trigger approach, we can't handle concurrent view maintenance
> in either repeatable read or serializable isolation level.  It is because one
> transaction (R = R+dR) cannot see changes that occurred in another transaction
> (S' = S+dS) in such cases, and we cannot get the incremental change on the view
> (dV = dR*dS).  Therefore, in the current implementation, the transaction is
> aborted when concurrent view maintenance happens in repeatable read or serializable.
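
To restate the problem you describe with a toy case of my own (not taken from
the patch): suppose a view v is supposed to materialize r JOIN s and is
maintained by per-row triggers.

create table r(id integer);
create table s(id integer);
-- v should materialize: select id from r join s using (id);

Session 1:  begin isolation level repeatable read;
Session 2:  begin isolation level repeatable read;
Session 1:  insert into r values (1);   -- trigger computes dR JOIN s here,
                                        -- but dS from session 2 is not visible
Session 2:  insert into s values (1);   -- trigger computes r JOIN dS here,
                                        -- but dR from session 1 is not visible
Session 1:  commit;
Session 2:  commit;

Neither trigger ever sees the row produced by dR JOIN dS, so v misses it.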

Sorry, maybe I do not understand you correctly, or you do not understand me.
Let's consider two serializable transactions (I do not use views or triggers,
but perform the corresponding updates manually):



create table t(pk integer, val int);
create table mat_view(gby_key integer primary key, total bigint);
insert into t values (1,0),(2,0);
insert into mat_view values (1,0),(2,0);
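
(Here mat_view just emulates by hand the aggregate that a maintained view over t
would hold, i.e. roughly the result of
    select pk as gby_key, sum(val) as total from t group by pk;
and each insert into t below is mirrored by the corresponding manual update of
mat_view.)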

Session 1:  begin isolation level serializable;
Session 2:  begin isolation level serializable;
Session 1:  insert into t values (1,200);
Session 2:  insert into t values (1,300);
Session 1:  update mat_view set total=total+200 where gby_key=1;
Session 2:  update mat_view set total=total+300 where gby_key=1;
            <blocked>
Session 1:  commit;
Session 2:  ERROR:  could not serialize access due to concurrent update

So the second transaction is aborted.
This is the expected behavior for serializable transactions.
But if the transactions update different records of mat_view, then they can be
executed concurrently:

Session 1:  begin isolation level serializable;
Session 2:  begin isolation level serializable;
Session 1:  insert into t values (1,200);
Session 2:  insert into t values (2,300);
Session 1:  update mat_view set total=total+200 where gby_key=1;
Session 2:  update mat_view set total=total+300 where gby_key=2;
Session 1:  commit;
Session 2:  commit;

So, if transactions use the serializable isolation level, then we can update the
materialized view without an exclusive lock, and if there is no conflict, the
transactions can be executed concurrently.
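
In other words, the row-level maintenance trigger could take the table lock only
outside of serializable isolation, relying on row-level conflict detection
otherwise. A rough sketch of the idea (my own illustration, not code from the
patch; it assumes the group key already exists in mat_view):

create or replace function maintain_mat_view() returns trigger as $$
begin
    -- take the exclusive table lock only when the transaction is not serializable;
    -- under serializable, the conflicting update is detected as shown above
    if current_setting('transaction_isolation') <> 'serializable' then
        lock table mat_view in exclusive mode;
    end if;
    update mat_view set total = total + new.val where gby_key = new.pk;
    return new;
end;
$$ language plpgsql;

create trigger t_maintain after insert on t
    for each row execute function maintain_mat_view();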

Please notice that an exclusive lock doesn't prevent the conflict in the first case:

Session 1:  begin isolation level serializable;
Session 2:  begin isolation level serializable;
Session 1:  insert into t values (1,200);
Session 2:  insert into t values (1,300);
Session 1:  lock table mat_view;
Session 1:  update mat_view set total=total+200 where gby_key=1;
Session 2:  lock table mat_view;
            <blocked>
Session 1:  commit;
Session 2:  update mat_view set total=total+300 where gby_key=1;
            ERROR:  could not serialize access due to concurrent update
Session 2:  commit;   (the failed transaction is simply rolled back)


So do you agree that there is no reason to use an explicit lock for
serializable transactions?

-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



