On Wed, Oct 1, 2008 at 9:24 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Wed, Oct 1, 2008 at 6:44 AM, Sergey A. <n39052@gmail.com> wrote:
> Hello.
>
> My application generates a large amount of inserts (~ 2000 per second)
> using one connection to PostgreSQL. All queries are buffered in memory
> and then the whole buffers are send to DB. But when I use two
> connections to PostgreSQL instead of one on dual core CPU (i.e. I use
> two processes of PostgreSQL) to insert my buffers I see that things
> goes 1.6 times faster.
>
> Using several connections in my application is somewhat tricky, so I
> want to move this problem to PostgreSQL's side. Is there any method
> for PostgreSQL to process huge inserts coming from one connection on
> different cores?
If you are buffering inserts, you can get an easy performance boost by using COPY, as others have suggested. Another approach is to use a multi-row insert statement:
insert into something values (1,2,3), (2,4,6), ...
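Since the rows are already buffered in memory, building that statement is just string assembly. A minimal sketch of one way to do it from client code (the table name "something" is from the example above; the placeholder style is the `%s` convention used by DB-API drivers such as psycopg2, which is an assumption here, not something from the thread):

```python
# Sketch: collapse a buffer of rows into one multi-row INSERT plus a
# flat parameter list, suitable for cursor.execute(sql, params) with a
# DB-API driver. Hypothetical helper, not an API from the thread.
def build_multirow_insert(table, rows):
    """Return (sql, params) for a single multi-row INSERT.

    Each row becomes one "(%s, %s, ...)" group; params is the
    flattened list of values in row order.
    """
    if not rows:
        raise ValueError("no rows to insert")
    width = len(rows[0])
    group = "(" + ", ".join(["%s"] * width) + ")"
    sql = "insert into {} values {}".format(
        table, ", ".join([group] * len(rows)))
    params = [value for row in rows for value in row]
    return sql, params

sql, params = build_multirow_insert("something", [(1, 2, 3), (2, 4, 6)])
# sql    -> "insert into something values (%s, %s, %s), (%s, %s, %s)"
# params -> [1, 2, 3, 2, 4, 6]
```

Sending one statement with hundreds of value groups amortizes the per-statement round trip and parse cost, which is where most of the win over row-at-a-time INSERTs comes from; COPY goes further by streaming the data in a bulk-load format.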
Using multiple CPUs basically requires multiple connections: each backend process handles one connection, so a single connection runs on a single core. This can be easy or difficult depending on how you are connecting to the database.
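If the application already holds the rows in a buffer, fanning them out over several connections can be fairly mechanical. A hedged sketch of the idea (the `send_batch` callable is a stand-in for "open a connection and insert this slice", e.g. via COPY or a multi-row INSERT; the round-robin split and worker count are illustrative assumptions):

```python
# Sketch: split a buffered row list across N workers, one thread per
# connection, so the inserts can land on different backend processes
# and therefore different cores.
import threading

def insert_in_parallel(rows, n_workers, send_batch):
    """Split rows round-robin into n_workers slices and run send_batch
    on each non-empty slice in its own thread."""
    slices = [rows[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=send_batch, args=(s,))
               for s in slices if s]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Usage with a dummy sender that just records what each worker received:
received = []
lock = threading.Lock()

def dummy_sender(batch):
    with lock:
        received.append(list(batch))

insert_in_parallel([(1, 2, 3), (2, 4, 6), (3, 6, 9), (4, 8, 12)],
                   2, dummy_sender)
```

In a real client each worker would hold its own connection (connections are not safe to share across threads without care), so the per-thread slice maps one-to-one onto a PostgreSQL backend.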