I have a database that is used to manage email mailing lists (opt-in
ezines). When an email message bounces, the bouncecount in the customer
record is incremented.
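For context, the relevant columns of the customer table look roughly like
this (a sketch only; the types are my shorthand and the real table has more
columns):

create table customer (
    email        varchar(80),   -- type is a guess
    bouncecount  integer,
    bouncedate   date
    -- ... other columns ...
);
create index icusem2 on customer (email);  -- my assumption, based on the plan below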
My process used to do this on the fly as each bounce was received. This was
on 6.5.0. We recently upgraded to 6.5.3 and faster hardware. For the most
part the upgrade went smoothly, except for this process: it seemed to
overload the server. I did not have time to deal with it then, so I changed
the process to just log the bounce for later processing.
Now I need to fix the problem. I have written a script to process the log
file. It filters out duplicates, which reduces the total number of updates
that are done, and it works inside a transaction. To keep the size of the
transactions down (I have had problems in the past with large
transactions), it closes one transaction and opens a new one after every
100th update, as sketched below.
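In outline, the stream of SQL the script feeds to the backend looks like
this (a sketch; the real addresses come from the deduplicated log, and
'someone@example.com' is just a placeholder):

begin;
update customer
set bouncecount = bouncecount + 1,
    bouncedate = CURRENT_DATE
where email = 'someone@example.com'
  and bouncedate != CURRENT_DATE;
-- ... up to 99 more updates like the one above ...
commit;
begin;
-- ... next batch of 100 ...
commit;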
The problem is that performance is still less than I need: I am getting
about 5 updates processed per second on my test server. I never analysed
this on 6.5.0, but I was not having a problem there, and that was on slower
hardware with more activity on the box.
My test server is running Postgres 7.0. It is a PIII 733 with 384M RAM and
2 IDE HDs running RedHat 6.2.
The backend is using about 80% of the CPU. The load average is around 1.00.
This is what the update statement looks like:
update customer
set bouncecount = bouncecount + 1,
    bouncedate = CURRENT_DATE
where email = 'bryan@arcamax.com'
  and bouncedate != CURRENT_DATE;
Explain says:
Index Scan using icusem2 on customer (cost=0.00..4.98 rows=1 width=238)