I am running postgres 7.4 and want a Java application to send the
database a large number of writes as one transaction. I tried to set
autocommit = false, but as of postgres 7.4 that fails:
java.sql.SQLException: ERROR: SET AUTOCOMMIT TO OFF is no longer
supported
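For reference, the failing call is just the standard JDBC one; a
minimal sketch (the URL, user, and password are placeholders, and my
guess is that the driver translates the call into the SET statement the
server rejects):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class AutoCommitTest {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        // This is the call that fails: something (the driver, as far
        // as I can tell) ends up sending "SET AUTOCOMMIT TO OFF" to
        // the backend, which 7.4 rejects with the error above.
        conn.setAutoCommit(false);
        conn.close();
    }
}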
I then tried issuing explicit BEGIN and COMMIT statements, hoping that
would group the inserts, but according to the system log
(/var/log/messages) each insert is still being committed separately
(a sketch of the code follows the log excerpt):
[382-1] LOG: query: BEGIN WORK;
[383-1] LOG: query: INSERT INTO e_dat3 (S33RM_PT_sig_val, prog,
update_date) VALUES (0.0, 'V8N', '2002-08-19');
[384-1] LOG: query: COMMIT WORK;
[382-1] LOG: query: BEGIN WORK;
[383-1] LOG: query: INSERT INTO e_dat3 (S2033RM_PT_sig_val, prog,
update_date) VALUES (-1.0, 'V8N', '2002-08-19');
[384-1] LOG: query: COMMIT WORK;
[382-1] LOG: query: BEGIN WORK;
[383-1] LOG: query: INSERT INTO e_dat3 (S433RM_PT_sig_val, prog,
update_date) VALUES (-1.0, 'V8N', '2002-08-19');
[384-1] LOG: query: COMMIT WORK;
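For what it's worth, the writes are issued roughly like this (a
sketch; the real statement text is built at runtime, and writeResult
is just an illustrative name):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ExplicitTxWrite {
    // One calculation result, bracketed by explicit BEGIN/COMMIT,
    // matching the pattern in the log excerpt above.
    static void writeResult(Connection conn, double val) throws SQLException {
        Statement st = conn.createStatement();
        try {
            st.execute("BEGIN WORK");
            st.executeUpdate(
                "INSERT INTO e_dat3 (S33RM_PT_sig_val, prog, update_date)"
                + " VALUES (" + val + ", 'V8N', '2002-08-19')");
            st.execute("COMMIT WORK");
        } finally {
            st.close();
        }
    }
}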
I do about 7300 SELECTs from the database, and for each one I perform
60 calculations and write the results back (one write per
calculation), so roughly 438,000 single-row writes in total. Each of
the 7300 items takes approximately 3 seconds to process, and the whole
operation takes about 6 hours.
top shows Java memory usage around 40 - 50%, with postgres usage
usually very low, and vmstat shows lots of I/O. So I'm assuming the
bottleneck is in writing the results to the database one commit at a
time. How can I send larger transactions to the database? (e.g. for
one item, perform one of the 60 calculations on 200 rows and write all
200 rows to the database at once.)
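To make the question concrete, something like this JDBC batch is what
I have in mind; I just don't know whether it is the right way to do it
against 7.4 (writeBatch is an illustrative name, and the values would
come from the calculation step):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchWrite {
    // Write one calculation's results for all rows in a single
    // transaction instead of one commit per row.
    static void writeBatch(Connection conn, double[] vals) throws SQLException {
        Statement st = conn.createStatement();
        try {
            st.execute("BEGIN WORK");
            for (int i = 0; i < vals.length; i++) {
                st.addBatch(
                    "INSERT INTO e_dat3 (S33RM_PT_sig_val, prog, update_date)"
                    + " VALUES (" + vals[i] + ", 'V8N', '2002-08-19')");
            }
            st.executeBatch();         // send all the INSERTs at once
            st.execute("COMMIT WORK"); // one commit for the whole batch
        } finally {
            st.close();
        }
    }
}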
OS is Debian stable, processor 2 GHz+, 120 GB 10,000 RPM hard disk.
Any help / comments / suggestions appreciated.
Thanks
Ron