Recently I wrote a few PL/pgSQL procedures that generate invoices and store
them in a Postgres table. A small test showed that there is a performance
problem. I assumed the string operations in PL/pgSQL were the weak point, but
it turned out that the program spends 90% of its time on a very simple UPDATE.
Below are my simplified procedures:
CREATE TABLE group_fin_account_tst (
    group_fin_account_tst_id BIGSERIAL PRIMARY KEY,
    credit NUMERIC(8,2) DEFAULT 0.00 NOT NULL
);
ALTER TABLE group_fin_account_tst OWNER TO freeconetadm;

INSERT INTO group_fin_account_tst (credit) VALUES (4);
CREATE OR REPLACE FUNCTION test()
RETURNS void AS
$BODY$
BEGIN
    FOR v_i IN 1..4000 LOOP
        -- in the real procedure I update different rows
        UPDATE group_fin_account_tst
        SET credit = v_i
        WHERE group_fin_account_tst_id = 1;
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
ALTER FUNCTION test() OWNER TO freeconetadm;
select test();
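
(The simplified function above always updates row 1; a minimal sketch of what
I mean by updating different rows could look like the function below. The name
test_many_rows and the modulo-based row selection are only illustrative,
assuming rows with ids 1..100 already exist.)

CREATE OR REPLACE FUNCTION test_many_rows()
RETURNS void AS
$BODY$
BEGIN
    -- illustrative only: spread the 4000 updates over 100 existing rows
    FOR v_i IN 1..4000 LOOP
        UPDATE group_fin_account_tst
        SET credit = v_i
        WHERE group_fin_account_tst_id = (v_i % 100) + 1;
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql VOLATILE;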
The strange thing is how the program behaves when I increase the number of
iterations. Below are my results (u/s = updates per second):
On Windows:
500   -   0.3s (1666 u/s)
1000  -   0.7s (1428 u/s)
2000  -   2.3s  (869 u/s)
4000  -   9s    (444 u/s)
8000  -  29s    (275 u/s)
16000 - 114s    (140 u/s)
On Linux:
500   -   0.5s (1000 u/s)
1000  -   1.8s  (555 u/s)
2000  -   7.0s  (285 u/s)
4000  -  26s    (153 u/s)
8000  - 101s     (79 u/s)
16000 - 400s     (40 u/s)
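
For reference, the times above are wall-clock times for the whole call; psql's
\timing gives the same kind of figure (a sketch, not necessarily the exact
harness used for the numbers above):

\timing on
SELECT test();  -- psql prints "Time: ... ms" after the call returns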
On both systems the relation between the number of iterations and the time
taken is strongly nonlinear! Does anyone know what the problem is? Is it
possible to commit a transaction inside a PL/pgSQL procedure? I suspect the
single transaction may simply be too long.
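
If a long transaction really is the problem, would something like the
procedure below help? This is only a sketch and assumes PostgreSQL 11 or
later, where a procedure invoked with CALL may COMMIT inside the loop (a plain
function cannot):

CREATE OR REPLACE PROCEDURE test_proc()
LANGUAGE plpgsql AS
$BODY$
BEGIN
    FOR v_i IN 1..4000 LOOP
        UPDATE group_fin_account_tst
        SET credit = v_i
        WHERE group_fin_account_tst_id = 1;
        COMMIT;  -- ends the current transaction; the next iteration starts a new one
    END LOOP;
END;
$BODY$;

CALL test_proc();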
Regards
Michal Szymanski
http://blog.szymanskich.net