Your development system is probably running inexpensive IDE disks that
cache writes, while the test server is not caching. If you loop over
single inserts, PostgreSQL's default configuration will do a physical
commit to disk after every one of them, which limits performance to how
fast the disk spins. If your server has 15K RPM drives, a single client
can commit at most 250 transactions per second to disk (15,000 rotations
per minute is 250 rotations per second), which means 10,000 inserts done
one at a time must take at least 40 seconds no matter how fast the rest
of the server is.
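As a rough sketch of the pattern being described (assuming psycopg2, a
hypothetical connection string, and a hypothetical table t with one
integer column v), this is the commit-per-insert loop that runs into the
rotational limit:

import time
import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

start = time.time()
for i in range(10000):
    cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
    conn.commit()  # each commit waits for a physical WAL flush to disk
elapsed = time.time() - start

# On a 15K RPM drive this is bounded by ~250 commits/sec,
# so the loop takes at least ~40 seconds.
print(f"{10000 / elapsed:.0f} commits/sec")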
There's a rambling discussion of this topic at
http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm that
should fill in some background here.
This is exactly what is happening. Thank you for the above link; the article was very informative.
If you use COPY instead of INSERT, the whole load runs as a single
transaction (and in some cases, such as loading into a table created in
the same transaction, COPY can skip WAL entirely), so you don't see
this. Also, if you adjust your loop to do multiple inserts as a single
transaction, that will change the behavior here as well.
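For illustration, a minimal sketch of both alternatives, again assuming
psycopg2 and the same hypothetical table t: COPY streams all the rows
under one commit, and batching the INSERTs pays the WAL flush once at
the end instead of once per row:

import io
import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

# Alternative 1: COPY - all 10,000 rows load in one operation,
# with a single commit (one physical WAL flush) at the end.
data = io.StringIO("".join(f"{i}\n" for i in range(10000)))
cur.copy_expert("COPY t (v) FROM STDIN", data)
conn.commit()

# Alternative 2: many INSERTs in one transaction - the per-commit
# disk flush is paid once instead of 10,000 times.
for i in range(10000):
    cur.execute("INSERT INTO t (v) VALUES (%s)", (i,))
conn.commit()  # one physical commit for the whole batch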
I will give COPY a go and see how it performs. For testing we specifically did only one insert per transaction; we will obviously optimize the actual application to do multiple inserts per transaction wherever possible.
Kind regards
Beyers Cronje
PS Thank you for the quick responses Greg and Pavel. It is always encouraging, when starting off with a new product, to see that there are people passionate about it.