From: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Craig Ringer
On 19 May 2016 at 01:39, Michael Paquier <michael.paquier@gmail.com> wrote:
On Wed, May 18, 2016 at 12:27 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
> On 18 May 2016 at 06:08, Michael Paquier <michael.paquier@gmail.com> wrote:
>> > Wouldn't it make sense to do the insert batch-wise, e.g. 100 rows?
>>
>> Using a single query string with multiple VALUES, perhaps, but then
>> the query string length limit comes into play, particularly for
>> large text values... The query used for the insertion has been a
>> prepared statement since writable queries were introduced in 9.3,
>> which actually keeps the code quite simple.
>
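To make the trade-off Michael mentions concrete: one multi-VALUES statement does cut the round trips, but the statement text and parameter count grow with the batch. A minimal libpq sketch (the table t(v text), the batch size of 100, and the function name are made up for illustration; error handling trimmed):

#include <stdio.h>
#include <libpq-fe.h>

/* One INSERT carrying 100 rows as 100 bind parameters. */
static void
insert_batch(PGconn *conn, const char *values[100])
{
    char        sql[4096];
    int         len;
    PGresult   *res;

    len = snprintf(sql, sizeof(sql), "INSERT INTO t (v) VALUES ");
    for (int i = 0; i < 100; i++)
        len += snprintf(sql + len, sizeof(sql) - len,
                        "%s($%d)", i > 0 ? ", " : "", i + 1);

    /* a single round trip for 100 rows, but the statement now holds
     * 100 placeholders, and the protocol caps bind parameters at
     * 65535, so the batch size cannot grow without bound */
    res = PQexecParams(conn, sql, 100, NULL, values, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "batch insert failed: %s", PQerrorMessage(conn));
    PQclear(res);
}
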
> This should be done the way PgJDBC does batches. It'd require a libpq
> enhancement, but it's one we IMO need anyway: allow pipelined query
> execution from libpq.
That's also something that would be useful for the ODBC driver. Since
it uses libpq as a hard dependency and does not speak the protocol
directly, it makes additional round trips to the server for exactly
this reason when preparing a statement.
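To make the pipelined-libpq idea above concrete, here is a rough sketch of what it could look like. PQbeginBatch and PQsendEndBatch are hypothetical names; nothing like them exists in libpq today. Only PQsendQueryParams and PQgetResult are the current asynchronous API. The point is that all the Parse/Bind/Execute messages go out back to back and the results are collected afterwards, so the per-statement round trip disappears:

/* HYPOTHETICAL sketch -- libpq has no batch API today.  Queue nrows
 * inserts without waiting for the server, then read nrows results. */
PQbeginBatch(conn);                    /* hypothetical */

for (int i = 0; i < nrows; i++)
{
    const char *params[1] = { rows[i] };

    /* in current libpq a second PQsendQueryParams() fails until the
     * previous result has been read; in batch mode it would queue */
    PQsendQueryParams(conn, "INSERT INTO t (v) VALUES ($1)",
                      1, NULL, params, NULL, NULL, 0);
}

PQsendEndBatch(conn);                  /* hypothetical: flush + Sync */

for (int i = 0; i < nrows; i++)
{
    PGresult *res = PQgetResult(conn); /* result of the i-th INSERT */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "row %d: %s", i, PQerrorMessage(conn));
    PQclear(res);
    (void) PQgetResult(conn);          /* assumed NULL separator, as in
                                          the existing async API */
}

(This is essentially what PgJDBC already does by speaking the protocol itself: it queues many Bind/Execute pairs before sending a single Sync.)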
Yes, I want FE-BE protocol-level batch inserts/updates/deletes, too. I was just about to start thinking about how to implement it because of a recent user question on pgsql-odbc. The OP uses Microsoft SQL Server Integration Services (SSIS) to migrate data to PostgreSQL. He asked for a way to speed up multi-row inserts, because the ODBC multi-row insert API takes as long as performing the same number of single-row inserts separately. This may prevent the migration to PostgreSQL.
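For reference, the multi-row insert API the OP goes through is ODBC's standard parameter-array binding; a trimmed sketch (hstmt is assumed to be an allocated statement handle, and the table t(id integer) is made up):

#include <sql.h>
#include <sqlext.h>

#define NROWS 100

SQLINTEGER ids[NROWS];          /* filled in by the application */
SQLULEN    nprocessed;

/* one SQLExecDirect() call hands the driver all 100 rows at once */
SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE,
               (SQLPOINTER)(SQLULEN) NROWS, 0);
SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &nprocessed, 0);
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                 0, 0, ids, 0, NULL);
SQLExecDirect(hstmt, (SQLCHAR *) "INSERT INTO t (id) VALUES (?)",
              SQL_NTS);

Without protocol-level batching in libpq, the driver can only loop over the array and send one INSERT per element, which is why the OP sees no gain over single-row inserts.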
And it's also useful for ECPG. Our customer wanted ECPG to support multi-row inserts for their migration to PostgreSQL, because their embedded-SQL apps use that feature with a commercial database.
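The embedded-SQL feature they rely on is the host-array insert that commercial precompilers accept, shown below in Pro*C-style syntax for illustration (the FOR clause is what ECPG lacks today; table and variable names are made up):

EXEC SQL BEGIN DECLARE SECTION;
int ids[100];       /* host array filled in by the application */
int nrows = 100;
EXEC SQL END DECLARE SECTION;

/* one statement inserts nrows rows from the host array */
EXEC SQL FOR :nrows INSERT INTO t (id) VALUES (:ids);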
If you take on this feature, I can help by reviewing and testing it, and by implementing the ODBC and ECPG sides.
Regards
Takayuki Tsunakawa