On Tue, 16 Feb 2021 at 09:19, Craig Ringer <craig.ringer@enterprisedb.com> wrote:
So long as there is a way to "send A", "send B", "send C", "read results from A", "send D", and there's a way for the application to associate some kind of state (an application-specific id or index, a pointer to an application query-queue struct, whatever) so it can match queries to results ... then I'm happy.
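To make that concrete, here's a rough sketch of the kind of client code I have in mind, assuming the API ends up looking roughly like the current patch's PQenterPipelineMode()/PQpipelineSync(), where each sent query's results are read with PQgetResult() until it returns NULL and each sync point shows up as a PGRES_PIPELINE_SYNC result. The pending_query struct is purely illustrative application-side state, and the flow is the simple "send everything, then drain in order" case:

#include <stdio.h>
#include <libpq-fe.h>

/* illustrative application-side state used to match queries to results */
struct pending_query
{
	int			app_id;
	const char *sql;
};

static int
run_pipeline(PGconn *conn, struct pending_query *queue, int nqueries)
{
	if (PQenterPipelineMode(conn) != 1)
		return -1;

	/* "send A", "send B", "send C", ... without reading anything back yet */
	for (int i = 0; i < nqueries; i++)
	{
		if (PQsendQueryParams(conn, queue[i].sql, 0,
							  NULL, NULL, NULL, NULL, 0) != 1)
			return -1;
	}

	if (PQpipelineSync(conn) != 1)
		return -1;

	/* results come back in send order, so walk the queue in step with them */
	for (int i = 0; i < nqueries; i++)
	{
		PGresult   *res;

		while ((res = PQgetResult(conn)) != NULL)
		{
			printf("query %d finished: %s\n", queue[i].app_id,
				   PQresStatus(PQresultStatus(res)));
			PQclear(res);
		}
	}

	/* finally, the sync marker for this pipeline segment */
	PGresult   *sync = PQgetResult(conn);

	if (sync == NULL || PQresultStatus(sync) != PGRES_PIPELINE_SYNC)
	{
		PQclear(sync);
		return -1;
	}
	PQclear(sync);

	return PQexitPipelineMode(conn) == 1 ? 0 : -1;
}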
FWIW I'm also thinking of revising the docs to mostly use the term "pipeline" instead of "batch": use "pipelining and batching" in the chapter topic, mention "batch" in the index, and add a para that explains how to run batches on top of pipelining, but otherwise use the term "pipeline", not "batch".
That's because pipelining isn't actually batching, and using it as a naïve batch interface will get you into trouble. If you PQsendQuery(...) a long list of queries without consuming results, you'll block unless you're in libpq's nonblocking-send mode, and at that point you're deadlocked: the server can't send results until you consume some (its transmit buffer is full), it can't consume more queries until it can send some results, and you can't consume results because you're stuck in a send that will never complete.
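The way out is the usual select()-driven dance any nonblocking libpq client does: keep absorbing the server's output while you're still trying to push queries down the socket. Something like the sketch below; PQflush(), PQconsumeInput() and PQsocket() are existing libpq calls, the surrounding loop is just illustrative and assumes PQsetnonblocking(conn, 1) has already been called:

#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Flush queued-up queries without deadlocking: whenever we can't write,
 * absorb whatever the server has sent so its transmit buffer can drain
 * and it keeps reading our queries.
 */
static int
flush_without_deadlock(PGconn *conn)
{
	int			sock = PQsocket(conn);

	for (;;)
	{
		int			rc = PQflush(conn); /* 0 = all sent, 1 = more to send */

		if (rc < 0)
			return -1;
		if (rc == 0)
			return 0;

		fd_set		rfds,
					wfds;

		FD_ZERO(&rfds);
		FD_ZERO(&wfds);
		FD_SET(sock, &rfds);
		FD_SET(sock, &wfds);

		if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
			return -1;

		/* server output waiting? buffer it so the server can make progress */
		if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
			return -1;
	}
}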
An actual batch interface where you can bind and send a long list of queries might be worthwhile, but should be addressed separately, and it'd be confusing if this pipelining interface claimed the term "batch". I'm not convinced enough application developers actually code directly against libpq for it to be worth creating a pretty batch interface where you can submit (say) an array of struct PQbatchEntry { const char *query; int nparams; ... } to a PQsendBatch(...) and let libpq handle the socket I/O for you. But if someone wants to add one later, it'll be easier if we don't use the term "batch" for the pipelining feature.
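For the record, what I mean is roughly the declarations below. This is purely hypothetical, nothing like it exists in libpq or in this patch; it's only here to show why it would be a separate project from pipelining:

#include <libpq-fe.h>

/* hypothetical, not part of libpq or this patch */
typedef struct PQbatchEntry
{
	const char		   *query;
	int					nparams;
	const Oid		   *paramTypes;
	const char *const  *paramValues;
	const int		   *paramLengths;
	const int		   *paramFormats;
	int					resultFormat;
} PQbatchEntry;

/*
 * Hypothetical: send every entry over one pipeline, doing the nonblocking
 * socket I/O internally, and hand back one PGresult per entry.
 */
extern int	PQsendBatch(PGconn *conn, const PQbatchEntry *entries,
						int nentries, PGresult **results);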
I'll submit a reworded patch in a bit.