Hello,
I recently migrated from MaxDB to PostgreSQL and I am having a speed problem with large transactions over slow links, because of PostgreSQL's rollback-on-error behaviour inside a transaction. I create data in some tables with triggers on other tables, and then I do large inserts of that data into other PostgreSQL servers (for replication purposes); for this example let's say 20000 rows. I want to do this in a transaction so I can roll back on certain errors, but I also use a fallback: if a duplicate key is found, I re-send the data of the last insert as an UPDATE of the existing row. For that I have to set a savepoint before each insert and release it after the insert has succeeded, so my traffic flow is something like this:
client                                  server
begin                 ----------------->
                      <----------------- ok
savepoint             ----------------->
                      <----------------- ok
insert                ----------------->
                      <----------------- ok
release savepoint     ----------------->
                      <----------------- ok
savepoint             ----------------->
                      <----------------- ok
insert                ----------------->
                      <----------------- error: duplicate key
rollback to savepoint ----------------->
                      <----------------- ok
update                ----------------->
                      <----------------- ok
release savepoint     ----------------->
                      <----------------- ok
20000 rows later...
commit                ----------------->
                      <----------------- ok
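To make it concrete, per row the client sends something like this (just a sketch; the table and column names are made-up examples, the real data comes from my trigger-filled tables):

BEGIN;
-- for each row:
SAVEPOINT row_sp;
INSERT INTO replicated_table (id, payload) VALUES (1234, 'some data');
-- if the INSERT fails with a duplicate key error, the transaction is
-- unusable until we roll back to the savepoint, then we retry as an UPDATE:
ROLLBACK TO SAVEPOINT row_sp;
UPDATE replicated_table SET payload = 'some data' WHERE id = 1234;
RELEASE SAVEPOINT row_sp;
-- ... about 20000 rows later ...
COMMIT;

Every statement is a separate round trip, so each row costs three to five times the link latency on top of BEGIN/COMMIT.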
Obviously, over a slow link this is slow as hell. I posted this same email in Spanish to pgsql-es-ayuda, where Alvaro Herrera replied with some solutions (thanks Alvaro for your great support on the Spanish mailing list!), mainly two:
1- Create a function that uses EXCEPTION to save data traffic, i.e. an upsert function like the one in example 38-1 at
http://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html, but this still has the savepoint overhead (see the first sketch below).
2- Create a function that does a SELECT with a lock on the table before deciding whether to do an INSERT or an UPDATE (see the second sketch below).
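For option 1, the function would be essentially the merge_db() example from that documentation page, reproduced here roughly and using the doc's example table db(a int primary key, b text) rather than my real tables:

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key;
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            NULL; -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

With that, each row becomes a single "SELECT merge_db(1234, 'some data');" round trip, but the EXCEPTION block still sets up an implicit savepoint (subtransaction) on the server for every call, which is the overhead I mention above.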
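Option 2, if I understood Alvaro correctly, would be something along these lines (again just a sketch, the lock mode is my guess, same example table as above):

CREATE FUNCTION merge_db_locked(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    -- block concurrent writers so the SELECT-then-decide step cannot race
    -- with another session inserting the same key
    LOCK TABLE db IN SHARE ROW EXCLUSIVE MODE;
    PERFORM 1 FROM db WHERE a = key;
    IF FOUND THEN
        UPDATE db SET b = data WHERE a = key;
    ELSE
        INSERT INTO db(a, b) VALUES (key, data);
    END IF;
END;
$$
LANGUAGE plpgsql;

There is no EXCEPTION block, so no per-row subtransaction, but the table lock is held until the end of my big transaction, so other writers on that table are blocked for the whole load.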
Well, I would like to know if anyone can help with any other idea or any notes on this problem. Another question I have is how I could write such a function without knowing in advance the number of columns to insert/update. Thanks in advance.
Best Regards,
Miguel Angel.