Re: Batches of single-insert statements vs batches of multi-insert statements - Mailing list pgsql-jdbc

From Jeremy Whiting
Subject Re: Batches of single-insert statements vs batches of multi-insert statements
Date
Msg-id 6ea3d69b-9af6-7cdb-7313-a825a2bad7c4@redhat.com
In response to Re: Batches of single-insert statements vs batches of multi-insert statements  (Christopher Deckers <chrriis@gmail.com>)
Responses Re: Batches of single-insert statements vs batches of multi-insert statements
Re: Batches of single-insert statements vs batches of multi-insert statements
List pgsql-jdbc
Hi Christopher,
 I suggest you avoid hard-coding data values in the statement SQL. Instead, use parameter place-holders (?) and bind the values through the API. You will find it works much better for you. Like this ....

String insert = "INSERT INTO T_AbCd (T_AbCd_ID, SomeDate, ASmallInt) VALUES (?, ?, ?)";
PreparedStatement pst = conn.prepareStatement(insert, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
// First row: bind each place-holder, then queue the row in the batch.
pst.setLong(1, 24202712L);
pst.setDate(2, java.sql.Date.valueOf("2014-03-21"));
pst.setInt(3, 0);
pst.addBatch();
// Second row: re-bind the same place-holders with the next values.
pst.setLong(1, 24202713L);
pst.setDate(2, java.sql.Date.valueOf("2014-03-22"));
pst.setInt(3, 0);
pst.addBatch();
// Send both rows to the server in a single batch.
pst.executeBatch();



 The reason is that the batched re-write feature dynamically builds the SQL issued to the back end, using the parameter place-holders. So hard-coded data values aren't going to work without some horribly complex statement parsing being added to the driver, this time parsing the data and recognising every data value and type permutation. Yikes.
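
 To make that concrete, here is a minimal sketch of what the re-write buys you (the connection URL, database name and credentials are placeholders, and the SQL shown in the comment is only conceptual, not the exact text the driver sends):

// Enable the batched-insert re-write via the connection URL.
Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydb?reWriteBatchedInserts=true",
        "user", "password");

// With the parameterised statement from the example above, the driver can
// collapse the two batched rows into one multi-values statement, roughly:
//   INSERT INTO T_AbCd (T_AbCd_ID, SomeDate, ASmallInt)
//   VALUES (?, ?, ?), (?, ?, ?)
// With data values hard coded into the SQL string there are no
// place-holders to collapse, so the driver would have to parse the
// literals itself.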

 I'd say you have these options, if you want to enable batched re-writing.

a) Change your code to use the same style of coding as the above example, and re-deploy to production.
b) If a re-deploy to production isn't possible in the short term, try using Byteman [1]. Byteman will unobtrusively mend your existing code. See the attached Byteman rule; it fixes your test class.
c) Provide a patch implementing SQL statement parsing support for hard-coded data. I see you have already started on this [2].

Regards,
Jeremy

[1] http://byteman.jboss.org/
[2] https://github.com/pgjdbc/pgjdbc/pull/580

On 05/06/16 21:39, Christopher Deckers wrote:
Hi Vladimir,

Thanks for your answer! It explains well the current situation.

Believe it or not, pgjdbc already has that feature implemented.

Oh good! But it definitely needs more testing and fixing :)
I got: "Batch entry 0 INSERT INTO XXXX".
Next exception: "VALUES lists must all be the same length".

I narrowed down the issue to a simple test case, see attached file.

Funny enough, if I change the name of the table, it seems to work. I used the convention that we have in our production code.
Note that this test case does not produce any error if I remove the "reWriteBatchedInserts=true" parameter.

Please let me know if I can be of any help!
-Christopher





-- 
Jeremy Whiting
Senior Software Engineer, Middleware Performance Team
Red Hat

------------------------------------------------------------
Registered Address: Red Hat UK Ltd, 64 Baker Street, 4th Floor, London. W1U 7DF. United Kingdom.
Registered in England and Wales under Company Registration No. 03798903. Directors: Michael Cunningham (US), Michael O'Neill (Ireland), Eric Shander (US)
Attachment
