Shivakumar Ramannavar <shivasr@gmail.com> wrote:
> my problem is that it takes 8.314 milliseconds to update one
> record
> EXPLAIN ANALYZE UPDATE terminal
> Update ... actual time=0.074..0.074
If you look at the EXPLAIN ANALYZE of the UPDATE, you'll see that
the UPDATE itself actually took only a small fraction of one ms.
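For comparison, psql's \timing will show the full elapsed time for
the statement, including the commit, which is where the rest of
your 8.314 ms is going. (The column and value below are just
placeholders, since I don't know your schema.)

  \timing on
  UPDATE terminal SET status = 'ACTIVE' WHERE terminal_id = 1;
  -- the "Time:" line psql prints includes the commit, unlike the
  -- "actual time" figure inside the EXPLAIN ANALYZE output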
My guess is that your write-ahead log (WAL) is on a 7200 or 7500 RPM
drive, and that you're committing each update separately. Further,
I would bet that you're not going through a RAID controller with a
battery-backed (BBU) cache configured for write-back. And you're
also using a single connection to do all the updates.
Each commit must wait until the WAL is persisted, which can mean
waiting for a disk drive to spin all the way around again -- which
for a 7200 RPM drive takes 8.3 ms and for a 7500 RPM drive takes 8
ms. Without better hardware, that rotational delay puts a hard
limit on how many transactions per second a single connection can
commit.
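Just to show where those numbers come from (60,000 ms per minute
divided by the rotation speed), you can check the arithmetic right
in psql:

  SELECT 60000.0 / 7200 AS ms_per_rev_at_7200_rpm,
         60000.0 / 7500 AS ms_per_rev_at_7500_rpm;
  -- roughly 8.33 and 8.00 -- right in line with the 8.314 ms
  -- per update you're seeing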
> There are around 300,000 update operations and it is taking approx
> 80 min,
One way to solve the problem would be to BEGIN a transaction, do
your 300,000 updates, and then COMMIT the transaction. Another
would be to use a good RAID controller. A third would be to turn
off synchronous_commit. (Be sure to read the description of that
in the documentation to understand the implications.) Or you could
use a number of connections working in parallel.
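For example, here's a rough sketch of the single-transaction
approach and the synchronous_commit alternative (the SET clause and
WHERE conditions are just placeholders, since I don't know your
schema):

  BEGIN;
  UPDATE terminal SET status = 'ACTIVE' WHERE terminal_id = 1;
  UPDATE terminal SET status = 'ACTIVE' WHERE terminal_id = 2;
  -- ... the rest of the 300,000 updates ...
  COMMIT;  -- only one wait for the WAL flush for the whole batch

  -- Or, per session, don't wait for the WAL flush at commit at all;
  -- a crash could lose the last few commits, but won't corrupt data:
  SET synchronous_commit = off;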
-Kevin