I'm not an expert on PostgreSQL, but my recommendations are (in the order
I feel is most important):
1) Use a PreparedStatement!
Prepare the statement once and reuse it; don't make the server recompile
every insert.
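A minimal sketch of point 1 in Java follows. The table name `mytable` is a placeholder (the question below doesn't name the table); the columns come from the question. The statement is prepared a single time, then only parameter values are shipped per row:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class PreparedInsert {
    // Column names from the question below; table name is hypothetical.
    static final String INSERT_SQL =
        "INSERT INTO mytable (id, sub_id, timestamp, value) VALUES (?, ?, ?, ?)";

    static void insertRows(Connection conn, int[][] keys, Timestamp[] ts,
                           double[] values) throws Exception {
        // Prepared once: the server parses and plans the statement a single
        // time; each executeUpdate() then only sends the parameter values.
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            for (int i = 0; i < values.length; i++) {
                ps.setInt(1, keys[i][0]);       // id
                ps.setInt(2, keys[i][1]);       // sub_id
                ps.setTimestamp(3, ts[i]);      // timestamp
                ps.setDouble(4, values[i]);     // value
                ps.executeUpdate();
            }
        }
    }
}
```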
2) Use batched updates. This requires a JDBC 2 compliant driver.
The purpose is to reduce the number of server roundtrips.
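Point 2 can be sketched like this, again with the hypothetical table name `mytable` and a hypothetical `Row` holder; `addBatch()` queues each row client-side and `executeBatch()` sends them all in one go:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.List;

public class BatchedInsert {
    static final String SQL =
        "INSERT INTO mytable (id, sub_id, timestamp, value) VALUES (?, ?, ?, ?)";

    // Row is a hypothetical holder for one measurement.
    record Row(int id, int subId, Timestamp ts, double value) {}

    // Sum the per-statement update counts returned by executeBatch().
    static int totalInserted(int[] counts) {
        int sum = 0;
        for (int c : counts) sum += c;
        return sum;
    }

    static int insertBatch(Connection conn, List<Row> rows) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            for (Row r : rows) {
                ps.setInt(1, r.id());
                ps.setInt(2, r.subId());
                ps.setTimestamp(3, r.ts());
                ps.setDouble(4, r.value());
                ps.addBatch();            // queued client-side, no roundtrip
            }
            int[] counts = ps.executeBatch(); // one roundtrip for all rows
            return totalInserted(counts);
        }
    }
}
```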
3) Disable autocommit. Insert a number of rows, then do an explicit commit.
The purpose of this step is to reduce the transaction overhead. A real
RDBMS has to secure each transaction on disk, which usually means one disk
I/O per transaction. To reduce the overhead, stuff more data into each
transaction. Note that very large transactions impose other kinds of
overhead. Experiment!
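A sketch of point 3, with the same hypothetical table name; the commit interval of 1000 rows is just a starting point for the experimentation suggested above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.List;

public class CommitEveryN {
    // Tuning knob, not a magic number: experiment with the batch size.
    static final int COMMIT_EVERY = 1000;

    // Pure helper so the commit cadence is easy to reason about.
    static boolean shouldCommit(int rowsInserted) {
        return rowsInserted % COMMIT_EVERY == 0;
    }

    // Row is a hypothetical holder for one measurement.
    record Row(int id, int subId, Timestamp ts, double value) {}

    static void insertAll(Connection conn, List<Row> rows) throws Exception {
        conn.setAutoCommit(false);  // group many inserts per transaction
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO mytable (id, sub_id, timestamp, value) VALUES (?, ?, ?, ?)")) {
            int n = 0;
            for (Row r : rows) {
                ps.setInt(1, r.id());
                ps.setInt(2, r.subId());
                ps.setTimestamp(3, r.ts());
                ps.setDouble(4, r.value());
                ps.executeUpdate();
                if (shouldCommit(++n)) {
                    conn.commit();  // one disk sync per COMMIT_EVERY rows
                }
            }
            conn.commit();          // commit whatever remains
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}
```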
4) Maybe several concurrent connections (2-5) can help? They can create
opportunities for group commit, and for overlapping CPU and I/O work. But
most people overestimate the amount of parallel processing that can
actually be done. How many disks do you have? SCSI? How many CPUs?
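Point 4 might look like the sketch below: split the rows across a small pool of workers, each with its OWN connection, since a single JDBC Connection should not run concurrent statements. The JDBC URL is a placeholder, and the actual per-chunk insert (points 1-3) is elided:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelInsert {
    // Partition rows round-robin across nWorkers; a pure helper.
    static <T> List<List<T>> split(List<T> rows, int nWorkers) {
        List<List<T>> parts = new ArrayList<>();
        for (int i = 0; i < nWorkers; i++) parts.add(new ArrayList<>());
        for (int i = 0; i < rows.size(); i++) {
            parts.get(i % nWorkers).add(rows.get(i));
        }
        return parts;
    }

    // jdbcUrl is a placeholder; each worker opens its own connection.
    static <T> void insertInParallel(String jdbcUrl, List<T> rows, int nWorkers)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        for (List<T> chunk : split(rows, nWorkers)) {
            pool.submit(() -> {
                try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
                    // batch-insert `chunk` here, per points 1-3 above
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Whether this beats one well-batched connection depends on the disk and CPU questions above, so measure before committing to it.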
Hope this helps
/Per Schröder
http://developer.mimer.com
Raymond Chui <raymond.chui@noaa.gov> wrote in
<3AEFECE4.C51252CC@noaa.gov>:
>
>
>I have 4 columns in a table: id, sub_id, timestamp and value.
>The primary key is id, sub_id and timestamp combined.
>I need to insert many rows (maybe 10 thousand every 4 minutes)
>as fast as I can into the same host, same port, same database, same table.
>
>A.
>Open only one JDBC (Java Database Connectivity) connection and
>have multiple threads (similar to UNIX child processes) do
>the inserts.
>Note: too many threads will cause the system to run out of memory!
>
>B.
>Open only one JDBC connection, have only one single thread
>to do the insert.
>
>C.
>Open multiple JDBC connections, one per thread, and have each of them
>handle part of the data insert.
>
>D.
>Please tell me your way, or a much better way.