R: Slow performance updating CLOB data - Mailing list pgsql-jdbc

From Nicola Zanaga
Subject R: Slow performance updating CLOB data
Msg-id 47856758BAE4794A9EC4FCA2E63FC85E3705963F@exchange.intranet.efsw.it
In response to Re: Slow performance updating CLOB data  (Thomas Kellerer <spam_eater@gmx.net>)
List pgsql-jdbc

-----Original Message-----
From: pgsql-jdbc-owner@postgresql.org [mailto:pgsql-jdbc-owner@postgresql.org] On behalf of Thomas Kellerer
Sent: Monday, 18 July 2016 15:01
To: pgsql-jdbc@postgresql.org
Subject: Re: [JDBC] Slow performance updating CLOB data

Nicola Zanaga wrote on 18.07.2016 at 14:28:
> 
> I can change the strategy for Postgres, but I don't think it's good to issue
> a query like "UPDATE table SET clob = 'value' WHERE key = x" if the value is more than 10 MB.
> 
You should use a PreparedStatement, not string literals.
But apart from that, it won't be any different from the SQL the driver uses.

Why do you think that would be a problem?
The client needs to send 10 MB of data, regardless of _how_ it sends it.

Thomas




I solved my problem by switching to a prepared statement.
Now the performance is in line with the other drivers.
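
For reference, this is roughly the shape of the change. The connection URL, table name, column name and key value below are placeholders, not my real schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ClobUpdateExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection settings.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {
                String largeValue = buildLargeValue(10 * 1024 * 1024); // ~10 MB of text

                // The value travels as a bind parameter instead of being
                // concatenated into the SQL text.
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE mytable SET clob_col = ? WHERE key = ?")) {
                    ps.setString(1, largeValue);
                    ps.setInt(2, 42);
                    ps.executeUpdate();
                }
            }
        }

        private static String buildLargeValue(int size) {
            StringBuilder sb = new StringBuilder(size);
            while (sb.length() < size) {
                sb.append("some repeated content ");
            }
            return sb.toString();
        }
    }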

However, in general, sending the full SQL query is not the same thing as using 'setCharacterStream' or
'setBinaryStream' (for a prepared statement) or 'updateCharacterStream' or 'updateBinaryStream' (for an updatable result set).

Using streams, a driver could optimize sending the data to the server in small packets.
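
As an illustration of what I mean: a driver that receives a Reader is at least free to read and send the value in chunks, whereas a string value has to be fully materialized on the client first. A minimal sketch, again with placeholder names, and with no claim about how the PostgreSQL driver handles it internally:

    import java.io.Reader;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ClobStreamExample {
        public static void main(String[] args) throws Exception {
            Path source = Paths.get("big-document.txt"); // placeholder input file

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
                 Reader reader = Files.newBufferedReader(source, StandardCharsets.UTF_8);
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE mytable SET clob_col = ? WHERE key = ?")) {
                // Hand the driver a Reader instead of a fully built String;
                // the driver can then decide how to chunk the transfer.
                ps.setCharacterStream(1, reader, Files.size(source));
                ps.setInt(2, 42);
                ps.executeUpdate();
            }
        }
    }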

Thanks
