Re: V3 protocol, batch statements and binary transfer - Mailing list pgsql-jdbc

From Andrea Aime
Subject Re: V3 protocol, batch statements and binary transfer
Date
Msg-id 406A7BDE.3020502@aliceposta.it
In response to Re: V3 protocol, batch statements and binary transfer  (Alan Stange <stange@rentec.com>)
List pgsql-jdbc
Alan Stange wrote:
> Hello all,
>
> We have the same performance problems with bulk data inserts from jdbc
> as well.   We used batches as well but made sure that each statement in
> the batch was large ~128KB and inserted on many rows at a time.  This
> cut down on the number of round trips to to the postgresql server.

Yes, I did the same by putting together many inserts into a single statement,
and in fact it halved the time required to perform the inserts. Still, it
takes too much time anyway: 1 minute for the insertion versus 5 seconds to
read the same data back...
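For reference, the multi-row technique discussed above can be sketched as follows. This is a minimal illustration, not the driver's actual code; the table and column names are hypothetical. The idea is to build one INSERT carrying many VALUES tuples, so each round trip to the server covers many rows:

```java
import java.util.Collections;

public class MultiRowInsert {

    // Build "INSERT INTO tab (a, b) VALUES (?, ?), (?, ?), ..." with
    // `rows` parameter tuples, suitable for a PreparedStatement.
    static String buildInsert(String table, String[] cols, int rows) {
        String tuple = "(" + String.join(", ",
                Collections.nCopies(cols.length, "?")) + ")";
        StringBuilder sb = new StringBuilder("INSERT INTO ")
                .append(table)
                .append(" (").append(String.join(", ", cols)).append(")")
                .append(" VALUES ");
        for (int i = 0; i < rows; i++) {
            if (i > 0) sb.append(", ");
            sb.append(tuple);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // One statement, three rows, one round trip.
        System.out.println(buildInsert("mytable", new String[]{"a", "b"}, 3));
    }
}
```

The generated string would then be prepared once and its parameters bound row by row; batching several such statements amortizes the per-statement overhead further.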

> In addition to a) and b) below, I'd add that the read size off the
> sockets is too small.   It's a few KB currently and this should
> definitely be bumped up to a larger number.

In fact I've tried bumping the 8 KB value that's hardwired in the code up
to 16, 64 and 128 KB, but saw no improvement on a 100 Mbit fully switched LAN...
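For the record, the kind of change being tested is simply wrapping the socket stream with a larger buffer. The sketch below is an illustration of the effect, not the driver's actual code, and the names in it are hypothetical:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadBufferDemo {

    // Wrap a stream with an explicit buffer size, analogous to raising
    // the hardwired 8 KB read size to 64 KB or 128 KB.
    static BufferedInputStream buffered(InputStream in, int size) {
        return new BufferedInputStream(in, size);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a socket stream: 128 KB of payload.
        byte[] payload = new byte[128 * 1024];
        long total = 0;
        try (InputStream in = buffered(new ByteArrayInputStream(payload),
                                       64 * 1024)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
        }
        System.out.println(total + " bytes read");
    }
}
```

On a LAN whose bandwidth-delay product is small, a larger userspace buffer mostly changes the number of read() calls, not the wire throughput, which is consistent with seeing no improvement at 100 Mbit.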

> We're running on a gigE network and see about 50MB/s data rates coming
> off the server (using a 2GB shared memory region).   This sounds nice,
> but one has to keep in mind that the data is binary encoded in text.
>
> Anyway, count me in to work on the jdbc client as well (in my limited
> time).   To start, I have a couple of local performance hacks for which
> I should submit proper patches.
>

I'm eager to have a look at them :-)

Best regards
Andrea Aime

