Shachindra Agarwal wrote:
>Thanks for the note. Please see my responses below:
>
> [...]
>
> We are using JDBC, which supports inserts and transactions. We are
> using both. The business logic adds one business object at a time, and
> each object is added within its own transaction. Each object add results
> in 5 records in various tables in the database, so a commit is performed
> after every 5 inserts.

Well, 5 inserts per commit is pretty low. It would be nice to see more
like 100 inserts per commit. Would it be possible during the "discovery"
phase to put the begin/commit logic a little bit higher?
Remember, each COMMIT requires at least one fsync (I realize you have
fsync off for now); even so, a commit is pretty expensive.
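To make the suggestion concrete, here is a sketch of what moving the begin/commit above the per-object loop might look like in JDBC. The class name, table, column, and batch size are all placeholders, not the poster's actual schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchLoad {

    /** COMMITs (and hence fsyncs) needed to load `rows` rows, committing every `batch` rows. */
    static int commitsNeeded(int rows, int batch) {
        return (rows + batch - 1) / batch;
    }

    /** Commit once per this many business objects instead of once per object. */
    static final int OBJECTS_PER_COMMIT = 100;

    static void load(Connection conn, List<String> objects) throws SQLException {
        conn.setAutoCommit(false);                     // take over transaction boundaries
        PreparedStatement ps =
            conn.prepareStatement("INSERT INTO widget (name) VALUES (?)");
        int n = 0;
        for (String obj : objects) {
            ps.setString(1, obj);
            ps.executeUpdate();                        // ...the other 4 related inserts go here
            if (++n % OBJECTS_PER_COMMIT == 0) {
                conn.commit();                         // one fsync per 100 objects, not per object
            }
        }
        conn.commit();                                 // flush the final partial batch
        ps.close();
    }

    public static void main(String[] args) {
        // 10,000 objects x 5 rows each = 50,000 rows.
        System.out.println(commitsNeeded(50000, 5));   // commit per object: 10000 commits
        System.out.println(commitsNeeded(50000, 500)); // commit per 100 objects: 100 commits
    }
}
```

The 5 related inserts per object work the same whether they go through one PreparedStatement or several; the key change is that COMMIT now fires once per 100 objects (500 rows) rather than once per object.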
>> Also, it sounds like you have a foreign key issue: as things fill up,
>> the foreign key reference checks are slowing you down.
>> Are you using ANALYZE as you go? A lot of times when you have fewer than
>> 1000 rows a sequential scan is faster than using an index, and if you
>> don't inform postgres that you have more rows, it might still use the
>> old seqscan.

> This could be the issue. I will start 'analyze' in a cron job. I will
> update you with the results.

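A minimal crontab entry for that periodic ANALYZE might look like this (the database name and interval are placeholders, not from the thread):

```
# Refresh planner statistics every 10 minutes during the bulk load
# ("mydb" is a hypothetical database name).
*/10 * * * *  psql -d mydb -c "ANALYZE"
```

Plain ANALYZE only samples each table, so running it this often during a load should be cheap.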
>> There are other possibilities, but it would be nice to know about your
>> table layout, and possibly an EXPLAIN ANALYZE of the inserts that are
>> going slow.
>>
>> John
>> =:->
>>
>> PS> I don't know if JDBC supports COPY, but it certainly should support
>> transactions.

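For the EXPLAIN ANALYZE John asks for, something along these lines would show whether the foreign-key checks are still hitting sequential scans (the parent/child table and column names are made up for illustration):

```sql
-- EXPLAIN ANALYZE executes the statement and reports the actual plan and timings.
EXPLAIN ANALYZE INSERT INTO child (parent_id, payload) VALUES (42, 'x');

-- The FK check boils down to a lookup like this on the referenced table;
-- before ANALYZE the plan may say "Seq Scan on parent", afterwards
-- "Index Scan using parent_pkey".
EXPLAIN ANALYZE SELECT 1 FROM parent WHERE id = 42;
```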
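On the PS: even if the JDBC driver cannot issue COPY, the bulk-load path is still available from psql during the initial load; the table name and file path here are hypothetical:

```sql
-- Server-side bulk load (the file must be readable by the backend).
COPY child FROM '/tmp/child.dat' WITH DELIMITER ',';
```

From psql, the \copy meta-command does the same thing reading the file on the client side instead.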
Let us know if ANALYZE helps. If you are not deleting or updating
anything, you probably don't need to do VACUUM ANALYZE, but you might
think about it. It is a little more expensive, since it has to visit
every tuple rather than just a random sample.
John
=:->