I tried to use procedures from the Orafce package, and I ran some simple performance tests. I found some serious problems:
1. test case
create or replace procedure p1(inout r int, inout v int) as $$
begin v := random() * r; end
$$ language plpgsql;
This command:

do $$
declare
  r int default 100;
  x int;
begin
  for i in 1..300000 loop
    call p1(r, x);
  end loop;
end;
$$;

requires about 2.2GB of RAM and takes about 10 seconds.
I am getting a consistent result of 3 seconds with a modified version of your patch (in exec_stmt_call). But my notebook is a Core i5 (8GB, SSD); could the difference come from the testing hardware?
My notebook is an old T520, and moreover I have Postgres configured with the --enable-cassert option.
The hardware is definitely making a difference, but if you have time and don't mind testing it, I can send you a patch. The modifications are not a big deal, but maybe they'll help.
With more testing, I found that connection latency increases the response time: 3 seconds when the test runs over a local (localhost) connection, and 6 seconds over TCP (still local, not between PCs).
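If it helps to reproduce the comparison, below is a minimal libpq sketch that times the same CALL issued as client-side round trips, once over a Unix socket and once over TCP loopback, which is where connection latency shows up. The connection strings, database name, and iteration count are my assumptions, not taken from the thread.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <libpq-fe.h>

static void
run_test(const char *conninfo, int iters)
{
    PGconn *conn = PQconnectdb(conninfo);
    struct timespec t0, t1;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        exit(1);
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
    {
        /* at top level, CALL with INOUT arguments returns one row */
        PGresult *res = PQexec(conn, "call p1(100, null)");

        if (PQresultStatus(res) != PGRES_TUPLES_OK &&
            PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "call failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("%-30s %.3f s\n", conninfo,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    PQfinish(conn);
}

int
main(void)
{
    run_test("host=/tmp dbname=postgres", 300000);       /* Unix socket */
    run_test("host=localhost dbname=postgres", 300000);  /* TCP loopback */
    return 0;
}

Build it with something like: cc test.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq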
Anyway, I would like to know: if we know the number of parameters in advance, why use a List instead of an array? Would it not be faster to create the plpgsql variables that way?
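To make the question concrete, here is a sketch comparing the two approaches; the helper names, the args/nargs parameters, and the Node-based types are hypothetical, not from the patch. A List built with lappend() allocates per element, while a palloc'd array sized by the known parameter count needs one allocation and gives direct indexed access.

/* Illustration only, not the actual patch code. */
#include "postgres.h"
#include "nodes/pg_list.h"

/* List variant: lappend() allocates a cell per element */
static List *
collect_targets_as_list(Node **args, int nargs)
{
    List *targets = NIL;

    for (int i = 0; i < nargs; i++)
        targets = lappend(targets, args[i]);
    return targets;
}

/* Array variant: a single palloc, indexed access afterwards */
static Node **
collect_targets_as_array(Node **args, int nargs)
{
    Node **targets = palloc(nargs * sizeof(Node *));

    for (int i = 0; i < nargs; i++)
        targets[i] = args[i];
    return targets;
}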
Why do you check SPI_processed?
+ if (SPI_processed == 1)
+ {
+     if (!stmt->target)
+         elog(ERROR, "DO statement returned a row, query \"%s\"", expr->query);
+ }
+ else if (SPI_processed > 1)
+     elog(ERROR, "Procedure call returned more than one row, query \"%s\"", expr->query);
CALL cannot return rows, so these checks make no sense.
Looking at the original file, this is already done, starting at line 2351;
I just put all the tests together so that, where applicable, we get out quickly.
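For illustration, one way the checks could be folded together so the common case (no row returned) exits after a single comparison; this is a sketch of the idea, not the actual modification.

/* Illustrative rearrangement, not the committed code. */
if (SPI_processed != 0)
{
    if (SPI_processed > 1)
        elog(ERROR, "Procedure call returned more than one row, query \"%s\"",
             expr->query);
    if (!stmt->target)          /* here SPI_processed == 1 */
        elog(ERROR, "DO statement returned a row, query \"%s\"",
             expr->query);
}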