Hello,
I have a small table (up to 10,000 rows), and every row will be updated
once per minute. The table also has a "before update ... for each row"
trigger written in plpgsql. But 99.99% of the time the trigger will do
nothing to the database: it just compares the old and new values in the
row, and those values are almost always identical.
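For reference, the trigger follows roughly this pattern (a simplified
sketch with a placeholder table and column, not the actual code):

CREATE FUNCTION skip_if_unchanged() RETURNS trigger AS '
BEGIN
    -- "val" stands in for the real column(s) being compared
    IF NEW.val = OLD.val THEN
        RETURN NEW;  -- the 99.99% case: nothing changed, nothing to do
    END IF;
    -- the rare case: values differ, the real work happens here
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER check_change BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE skip_if_unchanged();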
Now I tried a simple test and was able to do 10,000 updates on a
1,000-row table in ~30s. That's enough in practice, but I'd like to
have more headroom in case things slow down.
Also, the best result I achieved was by doing a commit + vacuum every
~500 updates.
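That batching pattern looks roughly like this (a sketch only; the real
driver script and update statements are of course more specific):

BEGIN;
UPDATE mytable SET val = 1 WHERE id = 1;
-- ... roughly 500 such updates per batch ...
COMMIT;
VACUUM mytable;  -- reclaim dead row versions before the next batch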
How can I improve performance, and will version 7.4 bring anything
valuable for my task? Rewriting the trigger in some other scripting
language is not a problem; it is simple enough.
Postgres v7.3.4, shared_buffers=4096; the max_fsm settings are also
bumped up 10x.
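In other words, roughly these postgresql.conf lines (the max_fsm
numbers here are approximate, just to show the ~10x increase over the
7.3 defaults):

shared_buffers = 4096        # 32 MB at 8 kB per buffer
max_fsm_relations = 1000     # ~10x the 7.3 default
max_fsm_pages = 100000       # ~10x the 7.3 default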
Thanks,
Mindaugas