Re: Why overhead of SPI is so large? - Mailing list pgsql-hackers

From Pavel Stehule
Subject Re: Why overhead of SPI is so large?
Date
Msg-id CAFj8pRBjAOH5+U2NmSZZWQoqrLTWP8CM1yA_LQztnvXj0SMVMw@mail.gmail.com
In response to Re: Why overhead of SPI is so large?  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
Responses Re: Why overhead of SPI is so large?  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
List pgsql-hackers


On Fri, Nov 22, 2019 at 8:32 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:


On 22.11.2019 10:08, Pavel Stehule wrote:

I tested it, and there is a problem already. We don't raise an exception, but the result is wrong:


create table foo(a int);

create or replace function f1(int)
returns void as $$
begin
  insert into foo values($1);
end;
$$ language plpgsql;

create or replace function f2(int)
returns void as $$
declare r record;
begin
  perform f1($1);                      -- volatile function that writes to foo
  for r in select * from foo loop      -- read foo under f2's own snapshot
    raise notice '%', r;
  end loop;
end;
$$ language plpgsql immutable; -- or stable, with the same behaviour

So the current state is:

a) we allow calling a volatile function that really does write from a non-volatile (stable, immutable) function,
b) but the change is not visible inside the parent non-volatile function; it is visible only in volatile functions.

Is this the expected behaviour?
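A minimal sketch of the difference, assuming the table and functions defined above (the comments describe the visibility I am talking about, not verified output):

-- with f2 declared stable/immutable, the SELECT inside it runs with the
-- snapshot of the calling query, so the row just inserted by f1 is not
-- reported by RAISE NOTICE
select f2(1);

-- after redeclaring f2 as volatile, the loop sees the freshly inserted row
alter function f2(int) volatile;
select f2(2);

select * from foo;   -- both rows were really written in both cases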

I think that in theory it is definitely not correct to call a volatile function from a non-volatile one.
But there are two questions:
1. Are we able to check it? Please take into account that:
 - at the moment of "create function f2()" the called function f1() may not yet be defined
 - instead of "perform f1()" it can do "execute 'select f1()'", which is not possible to check at compile time.
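For example, a hypothetical variant of f2 where the volatile call is hidden behind dynamic SQL, so no static check of the function body can see it:

create or replace function f2(int)
returns void as $$
begin
  -- the call to the volatile f1() is only assembled at run time
  execute format('select f1(%s)', $1);
end;
$$ language plpgsql stable;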

It's not possible to check it at compile time now.

2. Is it the responsibility of the programmer to correctly specify function properties, or should it be done by the compiler?
  If we follow the YWIYGI rule, then your definition of f2() is not correct, and that is why you get a wrong result in this case.

Maybe, but a) it is not documented, b) returning a bad result is not PostgreSQL's philosophy, and c) it is not user friendly. Either an error should be raised or the result should be correct.

Theoretically this feature can be used for logging - you can write to a log table from an immutable or stable function - so it can have some use case. Probably, if we implement autonomous transactions, then this behaviour would have to be correct.
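A hypothetical illustration of that logging pattern (table and function names are made up):

create table f_log(fn text, arg int, at timestamptz default now());

create or replace function log_call(p_fn text, p_arg int)
returns void as $$
begin
  insert into f_log(fn, arg) values (p_fn, p_arg);
end;
$$ language plpgsql volatile;

create or replace function lookup(int)
returns int as $$
begin
  perform log_call('lookup', $1);   -- write side effect hidden inside a stable function
  return $1 * 2;
end;
$$ language plpgsql stable;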
 
  If we want to completely rely on the compiler, then... we do not need the volatile/immutable/stable qualifiers at all! The compiler should deduce this information itself.
  But that will be non-trivial, if it is possible at all; take into account 1).

I am able to do it in plpgsql_check for PL/pgSQL. But it is probably not possible to do it for PL/Perl, PL/Python, ...
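Assuming the extension is installed, such a check would be invoked roughly like this (a sketch; I am not showing the exact report text):

create extension if not exists plpgsql_check;
select * from plpgsql_check_function('f2(int)');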

In principle it is possible to add checks which would produce a warning in case of calling a volatile function or executing dynamic SQL from a non-volatile function.
If such extra checking is considered useful, I can propose a patch doing it.
But IMHO the optimizer should rely on the function qualifier provided by the programmer, and it is acceptable to produce a wrong result if this information is not correct.

We should distinguish between a bad result and a badly optimized plan.




So these issues already exist now. And limiting this to just immutable functions is not enough - these would have to be built-in immutable functions.

The core of these problems is that the function's flags are related to planner optimizations.

Maybe other flags WRITE | READ | PURE can help.

Now we don't know whether a volatile function writes to the database or not - surely the random function doesn't. We could implement a new set of flags that would reduce the overhead of snapshots.

The function random() could be VOLATILE PURE - then we would know that the result of this function is not stable, but that it doesn't touch the data engine.

When we don't have these flags, the current logic is used; when we have them, the new logic will be used. These flags can be stricter:

we should not allow any WRITE from a READ function, and we should not allow READ from a PURE function.
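A hypothetical sketch of what such declarations could look like (this is proposed syntax only, not anything PostgreSQL accepts today):

-- unstable result, but never touches the data engine: no snapshot needed
create function random_pick() returns double precision
  language sql volatile pure
  as $$ select random() $$;

-- allowed to modify the database; always needs a fresh snapshot
create function note(int) returns void
  language sql volatile write
  as $$ insert into foo values ($1) $$;

-- reads tables but never writes
create function cnt() returns bigint
  language sql stable read
  as $$ select count(*) from foo $$;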

Notes, comments?
I think that even the current model with "volatile", "immutable" and "stable" is complex enough.
Adding more qualifiers will make it even more obscure and error-prone.

I don't think so - the classic example is the random() function. It's volatile, but you don't need a special snapshot to call it.

Still, any change of behaviour can break a lot of applications, because, as we can see, Postgres is very tolerant now.


-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 
