Re: Pipelining INSERTs using libpq - Mailing list pgsql-general

From: Florian Weimer
Subject: Re: Pipelining INSERTs using libpq
Date:
Msg-id: 50D6F17A.5030809@redhat.com
In response to: Re: Pipelining INSERTs using libpq (Merlin Moncure <mmoncure@gmail.com>)
Responses: Re: Pipelining INSERTs using libpq (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
On 12/21/2012 03:29 PM, Merlin Moncure wrote:
> How you attack this problem depends a lot on if all your data you want
> to insert is available at once or you have to wait for it from some
> actor on the client side.  The purpose of asynchronous API is to allow
> client side work to continue while the server is busy with the query.

The client has very little work to do before the next INSERT.

> So they would only help in your case if there was some kind of other
> processing you needed to do to gather the data and/or prepare the
> queries.  Maybe then you'd PQsend multiple insert statements with a
> single call.

I want to use parameterized queries, so I'll have to create a single
INSERT statement that inserts multiple rows.  Given that it's still
stop-and-wait (even with PQsendQueryParams), I can get through at most
one batch per RTT, so the number of rows per batch would have to be
rather large for a cross-continental bulk load.  It's probably doable
for local bulk loading.
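
For illustration, here is a minimal sketch of such a batched,
parameterized INSERT, using the synchronous PQexecParams for brevity;
the table t(a, b), the stand-in parameter values, and the tiny batch
size are all made up:

    #include <stdio.h>
    #include <libpq-fe.h>

    #define NROWS 3   /* real code would use a much larger batch
                         and size the SQL buffer accordingly */

    int main(void)
    {
        PGconn *conn = PQconnectdb("");  /* settings from PG* env vars */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Build "INSERT ... VALUES ($1,$2),($3,$4),..." once. */
        char sql[1024];
        char *p = sql + sprintf(sql, "INSERT INTO t (a, b) VALUES ");
        const char *values[2 * NROWS];
        for (int i = 0; i < NROWS; i++) {
            p += sprintf(p, "%s($%d,$%d)", i ? "," : "",
                         2 * i + 1, 2 * i + 2);
            values[2 * i]     = "1";        /* stand-in parameter data */
            values[2 * i + 1] = "example";
        }

        /* One round trip for the whole batch. */
        PGresult *res = PQexecParams(conn, sql, 2 * NROWS, NULL,
                                     values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

This amortizes the round trip over NROWS rows, but the statement text
grows with the batch, and each batch still costs one full RTT.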

Does the wire protocol support pipelining?  The server doesn't have to
do much to implement it: it just has to avoid discarding unexpected
bytes after the current frame, queuing them for subsequent processing
instead.
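
For what it's worth, the extended query protocol already permits
queuing several Parse/Bind/Execute sequences before a Sync, and libpq
eventually exposed this as pipeline mode in PostgreSQL 14.  A sketch
of the same INSERTs with that later API (PQenterPipelineMode and
friends), same hypothetical table t as above:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("");
        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQenterPipelineMode(conn);

        const char *sql = "INSERT INTO t (a, b) VALUES ($1,$2)";
        for (int i = 0; i < 1000; i++) {
            const char *values[2] = { "1", "example" };
            /* Queued only; no round trip per row. */
            if (!PQsendQueryParams(conn, sql, 2, NULL, values,
                                   NULL, NULL, 0))
                fprintf(stderr, "%s", PQerrorMessage(conn));
        }
        PQpipelineSync(conn);   /* flush and mark the sync point */

        /* Drain: one PGresult (then NULL) per INSERT, then a
         * PGRES_PIPELINE_SYNC marker for the sync point. */
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            ExecStatusType st = PQresultStatus(res);
            PQclear(res);
            if (st == PGRES_PIPELINE_SYNC)
                break;
            if (st != PGRES_COMMAND_OK)
                fprintf(stderr, "%s", PQerrorMessage(conn));
            while ((res = PQgetResult(conn)) != NULL)
                PQclear(res);   /* clear anything before the
                                   per-query NULL terminator */
        }

        PQexitPipelineMode(conn);
        PQfinish(conn);
        return 0;
    }

For large pipelines, real code should put the connection in
nonblocking mode and interleave reading results with sending, so both
sides' buffers can't fill up and deadlock.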

(Sorry if this message arrives twice.)
--
Florian Weimer / Red Hat Product Security Team

