Tom Lane wrote:
> "Shoaib Mir" <shoaibmir@gmail.com> writes:
>> Here are my few recommendations that might help you:
>> [ snip good advice ]
>
> Another thing to look at is whether you are doing inserts/updates as
> individual transactions, and if so see if you can "batch" them to
> reduce the per-transaction overhead.
Thank you everyone who replied with suggestions. Unfortunately, this is
a background activity for me, so I can only work on it when I can
squeeze in time. Right now, I can't do anything; I swapped out a broken
switch in our network and the DB server is currently inaccessible ;(. I
will eventually work through all suggestions, but I'll start with the
ones I can respond to without further investigation.
I'm not doing updates as individual transactions. I cannot use the Java
batch functionality because the code uses stored procedures to do the
inserts and updates, and the PG JDBC driver cannot execute stored
procedures in a batch. Briefly, calling a stored procedure returns a
result set, and JDBC batch execution does not support statements that
return result sets.
So, in the code I turn autocommit off, and do a commit every 100
executions of the stored proc. The exact same code runs against
BigDBMS, so any penalty from this approach should affect both
databases equally.
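For anyone curious, the pattern is roughly the sketch below. The proc
name (insert_row), parameter, and connection handling are placeholders
I made up for illustration, not the actual code; only the shape
(autocommit off, commit every 100 executions) matches what I described:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

public class BatchedProcCalls {
    // Commit interval used in the post; an assumption-free constant here.
    static final int COMMIT_INTERVAL = 100;

    // True when a commit is due after the given number of executions.
    static boolean commitDue(long executed, int interval) {
        return executed > 0 && executed % interval == 0;
    }

    // Sketch: call a (hypothetical) stored proc once per row, committing
    // every COMMIT_INTERVAL executions instead of once per row.
    static void insertAll(Connection conn, List<String> rows) throws SQLException {
        conn.setAutoCommit(false);
        long executed = 0;
        try (CallableStatement cs = conn.prepareCall("{ call insert_row(?) }")) {
            for (String row : rows) {
                cs.setString(1, row);
                cs.execute();  // returns a result set, so addBatch() is out
                executed++;
                if (commitDue(executed, COMMIT_INTERVAL)) {
                    conn.commit();
                }
            }
        }
        conn.commit();  // flush the final partial group
    }
}
```

The final commit() covers the last group of fewer than 100 executions,
so nothing is left uncommitted when the loop ends.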
--
Guy Rouillier