Thread: batch inserts are "slow"
Howdy!

I'm converting an application to use PostgreSQL instead of Oracle.
There seems to be only one issue left: batch inserts in PostgreSQL seem
significantly slower than in Oracle. I have about 200 batch jobs, each
consisting of about 14 000 inserts. Each job takes 1.3 seconds in
PostgreSQL and 0.25 seconds in Oracle. With 200 jobs this means several
more minutes to complete the task. By fixing this, I think the
application would overall be faster using PostgreSQL than using Oracle.

I'd like some advice on what could enhance the performance. I use
PostgreSQL 8. The table that is loaded with a bunch of data has no
indexes. I use Gentoo Linux on a P4 3GHz with 1GB RAM. I use the JDBC
driver from jdbc.postgresql.org, postgresql-8.0-311.jdbc3.jar.

I've changed a few parameters as I hoped that would help:

  wal_buffers = 64
  checkpoint_segments = 10
  shared_buffers = 15000
  work_mem = 4096
  maintenance_work_mem = 70000
  effective_cache_size = 30000

shmmax is 150000000.

These settings made creating an index faster, for instance. I don't know
if they can be tweaked further so these batch jobs are executed faster.
Is there some setting I forgot to tweak? I tried setting fsync to false,
but that didn't change anything.

Something like this is what runs and takes a bit too long imho:

conn.setAutoCommit(false);
pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
for (int i = 0; i < len; i++) {
    pst.setInt(1, 2);
    pst.setString(2, "xxx");
    pst.addBatch();
}
pst.executeBatch();
conn.commit();

This snip takes 1.3 secs in PostgreSQL. How can I lower that?

Thanks,
Tim
> conn.setAutoCommit(false);
> pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
> for (int i = 0; i < len; i++) {
>     pst.setInt(1, 2);
>     pst.setString(2, "xxx");
>     pst.addBatch();
> }
> pst.executeBatch();
> conn.commit();
>
> This snip takes 1.3 secs in PostgreSQL. How can I lower that?

You're batching them as one transaction and using a prepared query, both
of which are good. I guess the next step for a great performance
improvement is to use the COPY command. However, you'd have to find out
how to access that via Java.

I have a nasty suspicion that the released JDBC driver doesn't support
it and you may have to apply a patch.

Ask on pgsql-jdbc@postgresql.org perhaps.

Chris
On 5/2/05, Tim Terlegård <tim@se.linux.org> wrote:
> Howdy!
>
> I'm converting an application to use PostgreSQL instead of Oracle.
> There seems to be only one issue left: batch inserts in PostgreSQL
> seem significantly slower than in Oracle. I have about 200 batch jobs,
> each consisting of about 14 000 inserts. Each job takes 1.3 seconds in
> PostgreSQL and 0.25 seconds in Oracle. With 200 jobs this means
> several more minutes to complete the task. By fixing this, I think the
> application would overall be faster using PostgreSQL than using
> Oracle.

Just as on Oracle you would use SQL*Loader for this application, you
should use the COPY syntax for PostgreSQL. You will find it a lot
faster. I have used it by building the input files and executing 'psql'
with a COPY command, and also by using it with a subprocess; both are
quite effective.

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
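As a minimal sketch of the file-building step Chris describes, the Java
below writes one job's rows in COPY's default text format (tab-separated
columns, \N for NULL). The table tmp and the two example values are
carried over from Tim's snippet; the class name and file path are made
up for illustration, and real data containing tabs, newlines or
backslashes would need escaping first.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class CopyFileBuilder {
        // Write one tab-separated line per row, matching COPY's default
        // text format (tab delimiter, \N for NULL).
        public static void writeBatch(String path, int len) throws IOException {
            BufferedWriter out = new BufferedWriter(new FileWriter(path));
            try {
                for (int i = 0; i < len; i++) {
                    // the same two constant values as in the INSERT snippet
                    out.write("2\txxx");
                    out.newLine();
                }
            } finally {
                out.close();
            }
        }
    }

Each job's file can then be loaded with a single COPY command run
through psql, which is the subprocess approach Chris mentions.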
> > Howdy!
> >
> > I'm converting an application to use PostgreSQL instead of Oracle.
> > There seems to be only one issue left: batch inserts in PostgreSQL
> > seem significantly slower than in Oracle. I have about 200 batch
> > jobs, each consisting of about 14 000 inserts. Each job takes 1.3
> > seconds in PostgreSQL and 0.25 seconds in Oracle. With 200 jobs this
> > means several more minutes to complete the task. By fixing this, I
> > think the application would overall be faster using PostgreSQL than
> > using Oracle.
>
> Have you tried the COPY statement?

I did that now. I copied all 3 million rows of data into a text file and
executed the COPY command. It takes about 0.25 seconds per job. So
that's much better. I'm afraid JDBC doesn't support COPY though. But now
I know what the theoretical lower limit is, at least.

Should it be possible to get anywhere near 0.25s from my current 1.3s?

Tim
On 5/2/05, Tim Terlegård <tim@se.linux.org> wrote:
> > > Howdy!
> > >
> > > I'm converting an application to use PostgreSQL instead of Oracle.
> > > There seems to be only one issue left: batch inserts in PostgreSQL
> > > seem significantly slower than in Oracle. I have about 200 batch
> > > jobs, each consisting of about 14 000 inserts. Each job takes 1.3
> > > seconds in PostgreSQL and 0.25 seconds in Oracle. With 200 jobs
> > > this means several more minutes to complete the task. By fixing
> > > this, I think the application would overall be faster using
> > > PostgreSQL than using Oracle.
> >
> > Have you tried the COPY statement?
>
> I did that now. I copied all 3 million rows of data into a text file
> and executed the COPY command. It takes about 0.25 seconds per job. So
> that's much better. I'm afraid JDBC doesn't support COPY though. But
> now I know what the theoretical lower limit is, at least.
>
> Should it be possible to get anywhere near 0.25s from my current 1.3s?

My experience says 'no'. What you're likely seeing is the parse overhead
of the setup. When you use COPY (as opposed to \copy), the postmaster is
reading the file directly. There's just a lot less overhead.

Can you write the files to disk and then kick off the psql process to
run them?

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
Tim Terlegård <tim@se.linux.org> writes:
> There seems to be only one issue left: batch inserts in PostgreSQL
> seem significantly slower than in Oracle. I have about 200 batch jobs,
> each consisting of about 14 000 inserts.

> conn.setAutoCommit(false);
> pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
> for (int i = 0; i < len; i++) {
>     pst.setInt(1, 2);
>     pst.setString(2, "xxx");
>     pst.addBatch();
> }
> pst.executeBatch();
> conn.commit();

Hmm. It's good that you are wrapping this in a transaction, but I wonder
about doing it as a single "batch". I have no idea what the internal
implementation of batches in JDBC is like, but it seems possible that it
would have some performance issues with 14000 statements in a batch.
Have you checked whether the bulk of the runtime is being consumed on
the server or the client side?

Also, make sure that the JDBC driver is using "real" prepared statements
--- until pretty recently, it faked them. I think build 311 is new
enough, but it would be good to check in the docs or by asking on
pgsql-jdbc.

			regards, tom lane
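Tom's second point can be checked from the client side: the 8.0-series
driver exposes an extension interface, org.postgresql.PGStatement, with
a prepareThreshold setting that controls when it switches to a real
server-side prepared statement. A rough sketch follows; the column
names, connection details and timing harness are invented for
illustration, and if build 311 behaves differently the pgsql-jdbc list
is the place to confirm.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import org.postgresql.PGStatement;

    public class PreparedCheck {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            conn.setAutoCommit(false);

            // column names a and b are placeholders
            PreparedStatement pst =
                conn.prepareStatement("INSERT INTO tmp (a, b) VALUES (?, ?)");

            // Ask the driver to use a server-side prepared statement from
            // the first execution instead of interpolating the parameters
            // into the SQL text on the client.
            if (pst instanceof PGStatement) {
                ((PGStatement) pst).setPrepareThreshold(1);
            }

            long start = System.currentTimeMillis();
            for (int i = 0; i < 14000; i++) {
                pst.setInt(1, 2);
                pst.setString(2, "xxx");
                pst.addBatch();
            }
            pst.executeBatch();
            conn.commit();
            System.out.println("elapsed ms: "
                + (System.currentTimeMillis() - start));

            pst.close();
            conn.close();
        }
    }

Comparing this client-side elapsed time with the server's own view (for
example by logging statement durations on the server) helps answer Tom's
question of where the time actually goes.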
We ran into the need to use COPY, but our application is also in Java.
We wrote a JNI bridge to a C++ routine that uses the libpq library to do
the COPY. The coding is a little bit weird, but not too complicated -
the biggest pain in the neck is probably getting it into your build
system. Here's the Java tutorial on JNI:

http://java.sun.com/docs/books/tutorial/native1.1/concepts/index.html

Hope that helps!

- DAP

> -----Original Message-----
> From: pgsql-performance-owner@postgresql.org
> [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of
> Christopher Kings-Lynne
> Sent: Monday, May 02, 2005 11:11 AM
> To: tim@se.linux.org
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] batch inserts are "slow"
>
> > conn.setAutoCommit(false);
> > pst = conn.prepareStatement("INSERT INTO tmp (...) VALUES (?,?)");
> > for (int i = 0; i < len; i++) {
> >     pst.setInt(1, 2);
> >     pst.setString(2, "xxx");
> >     pst.addBatch();
> > }
> > pst.executeBatch();
> > conn.commit();
> >
> > This snip takes 1.3 secs in PostgreSQL. How can I lower that?
>
> You're batching them as one transaction and using a prepared query,
> both of which are good. I guess the next step for a great performance
> improvement is to use the COPY command. However, you'd have to find
> out how to access that via Java.
>
> I have a nasty suspicion that the released JDBC driver doesn't support
> it and you may have to apply a patch.
>
> Ask on pgsql-jdbc@postgresql.org perhaps.
>
> Chris
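For reference, the Java half of a bridge like the one David describes
might look roughly like the class below. Every name here (class, method,
native library) is invented for illustration; the real work happens on
the C++ side, which would drive libpq's COPY support (PQputCopyData /
PQputCopyEnd).

    // Hypothetical Java side of a JNI bridge to a libpq-based COPY routine.
    public class PgCopyBridge {

        // Load the native library (e.g. libpgcopybridge.so) built from
        // the C++ code that talks to libpq.
        static {
            System.loadLibrary("pgcopybridge");
        }

        // Implemented in C++: opens a connection with the given conninfo
        // string, issues "COPY <table> FROM STDIN", streams the file's
        // contents with PQputCopyData and finishes with PQputCopyEnd.
        // Returns the number of rows copied, or -1 on error.
        public static native int copyFromFile(String conninfo,
                                              String table,
                                              String dataFile);

        public static void main(String[] args) {
            int rows = copyFromFile("host=localhost dbname=test",
                                    "tmp", "/tmp/batch0001.dat");
            System.out.println("copied " + rows + " rows");
        }
    }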
petrilli@gmail.com (Christopher Petrilli) writes:
> On 5/2/05, Tim Terlegård <tim@se.linux.org> wrote:
>> Howdy!
>>
>> I'm converting an application to use PostgreSQL instead of Oracle.
>> There seems to be only one issue left: batch inserts in PostgreSQL
>> seem significantly slower than in Oracle. I have about 200 batch
>> jobs, each consisting of about 14 000 inserts. Each job takes 1.3
>> seconds in PostgreSQL and 0.25 seconds in Oracle. With 200 jobs this
>> means several more minutes to complete the task. By fixing this, I
>> think the application would overall be faster using PostgreSQL than
>> using Oracle.
>
> Just as on Oracle you would use SQL*Loader for this application, you
> should use the COPY syntax for PostgreSQL. You will find it a lot
> faster. I have used it by building the input files and executing
> 'psql' with a COPY command, and also by using it with a subprocess;
> both are quite effective.

I'd suggest taking a peek at the pgFoundry project pgloader
<http://pgfoundry.org/projects/pgloader/>.

This is intended to provide somewhat analogous functionality to
SQL*Loader; a particularly useful thing about it is that it will load
those records that it can, and generate a file consisting of just the
failures.

It uses COPY internally, so it does run reasonably fast.

To the extent that it is inadequate, it would be neat to see some
enhancements...
--
(format nil "~S@~S" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78. "I will not tell my Legions of Terror
"And he must be taken alive!" The command will be: ``And try to take
him alive if it is reasonably practical.''"
<http://www.eviloverlord.com/>
Hi, all,

David Parker wrote:
> We ran into the need to use COPY, but our application is also in Java.
> We wrote a JNI bridge to a C++ routine that uses the libpq library to
> do the COPY. The coding is a little bit weird, but not too complicated
> - the biggest pain in the neck is probably getting it into your build
> system.

There are several hacks floating around that add COPY capabilities to
the pgjdbc driver. As they all are rather simple hacks, they have not
been included in the cvs yet, but they tend to work fine.

Markus
People,

> There are several hacks floating around that add COPY capabilities to
> the pgjdbc driver. As they all are rather simple hacks, they have not
> been included in the cvs yet, but they tend to work fine.

FWIW, Dave Cramer just added beta COPY capability to JDBC. Contact him
on the JDBC list for details; I think he needs testers.

--
Josh Berkus
Aglio Database Solutions
San Francisco
On Tue, 3 May 2005, Josh Berkus wrote:

> > There are several hacks floating around that add COPY capabilities
> > to the pgjdbc driver. As they all are rather simple hacks, they have
> > not been included in the cvs yet, but they tend to work fine.
>
> FWIW, Dave Cramer just added beta COPY capability to JDBC. Contact him
> on the JDBC list for details; I think he needs testers.

I believe Dave has remerged a patch for COPY I posted over a year ago,
but he has not yet published it. I would guess it has the same bugs as
the original (transaction + error handling) and will meet the same
objections that kept the original patch out of the driver in the first
place (we want a friendlier API than just a data stream).

Kris Jurka
> > I'm converting an application to use PostgreSQL instead of Oracle.
> > There seems to be only one issue left: batch inserts in PostgreSQL
> > seem significantly slower than in Oracle. I have about 200 batch
> > jobs, each consisting of about 14 000 inserts. Each job takes 1.3
> > seconds in PostgreSQL and 0.25 seconds in Oracle. With 200 jobs this
> > means several more minutes to complete the task. By fixing this, I
> > think the application would overall be faster using PostgreSQL than
> > using Oracle.
>
> Just as on Oracle you would use SQL*Loader for this application, you
> should use the COPY syntax for PostgreSQL. You will find it a lot
> faster. I have used it by building the input files and executing
> 'psql' with a COPY command, and also by using it with a subprocess;
> both are quite effective.

I tried this now. Now it's down to 0.45 seconds. It feels a bit hacky to
run /usr/bin/psql from Java, but it sure works. Thanks for the hint!

Tim
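For anyone curious, a rough sketch of what running /usr/bin/psql from
Java can look like is below. The database name, table layout and file
path are placeholders; \copy is used rather than COPY so that psql, not
the backend, reads the data file (plain COPY needs a server-readable
file and superuser rights).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class PsqlCopyRunner {
        public static void copyFile(String dbName, String dataFile)
                throws Exception {
            String[] cmd = {
                "/usr/bin/psql",
                "-d", dbName,
                "-c", "\\copy tmp FROM '" + dataFile + "'"
            };
            Process p = Runtime.getRuntime().exec(cmd);

            // Drain stderr so the subprocess can't block on a full pipe
            // and so any COPY errors end up somewhere visible.
            BufferedReader err = new BufferedReader(
                new InputStreamReader(p.getErrorStream()));
            String line;
            while ((line = err.readLine()) != null) {
                System.err.println("psql: " + line);
            }

            int rc = p.waitFor();
            if (rc != 0) {
                throw new RuntimeException("psql exited with code " + rc);
            }
        }
    }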
Tim Terlegård wrote:
>> Just as on Oracle you would use SQL*Loader for this application, you
>> should use the COPY syntax for PostgreSQL. You will find it a lot
>> faster. I have used it by building the input files and executing
>> 'psql' with a COPY command, and also by using it with a subprocess;
>> both are quite effective.
>
> I tried this now. Now it's down to 0.45 seconds. It feels a bit hacky
> to run /usr/bin/psql from Java, but it sure works. Thanks for the
> hint!

There was a patch against 7.4 that provided direct JDBC access to
PostgreSQL's COPY. (I have it installed here and *love* it - it gives
outstanding performance.) However, it hasn't made it into an official
release yet. I don't know why; perhaps there's a problem yet to be
solved with it ('works for me', though)?

Is this still on the board? I won't upgrade past 7.4 without it.

--
Steve Wampler -- swampler@noao.edu
The gods that smiled on your birth are now laughing out loud.
On 5/3/05, Tim Terlegård <tim@se.linux.org> wrote:
> > Just as on Oracle you would use SQL*Loader for this application, you
> > should use the COPY syntax for PostgreSQL. You will find it a lot
> > faster. I have used it by building the input files and executing
> > 'psql' with a COPY command, and also by using it with a subprocess;
> > both are quite effective.
>
> I tried this now. Now it's down to 0.45 seconds. It feels a bit hacky
> to run /usr/bin/psql from Java, but it sure works. Thanks for the
> hint!

It may feel hacky, but I think if you want to use SQL*Loader on Oracle,
you have to do the same thing. I know a C++ app that I use that runs
SQL*Loader about once per second to deal with a HUGE volume (10K/sec).
In fact, moving the load files onto ramdisk has helped a lot.

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
Kris is correct, this code was not added or even submitted to CVS. The
purpose of this was to work out the bugs with people who are actually
using COPY.

The API is a separate issue, however. There's no reason that COPY can't
support more than one API.

Dave

Kris Jurka wrote:
> On Tue, 3 May 2005, Josh Berkus wrote:
>
>>> There are several hacks floating around that add COPY capabilities
>>> to the pgjdbc driver. As they all are rather simple hacks, they have
>>> not been included in the cvs yet, but they tend to work fine.
>>
>> FWIW, Dave Cramer just added beta COPY capability to JDBC. Contact
>> him on the JDBC list for details; I think he needs testers.
>
> I believe Dave has remerged a patch for COPY I posted over a year ago,
> but he has not yet published it. I would guess it has the same bugs as
> the original (transaction + error handling) and will meet the same
> objections that kept the original patch out of the driver in the first
> place (we want a friendlier API than just a data stream).
>
> Kris Jurka

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561