From: Tomas Vondra <tomas.vondra@enterprisedb.com>
> Well, good that we all agree this is a useful feature to have (in
> general). The question is whether postgres_fdw should be doing batching
> on it's onw (per this thread) or rely on some other feature (libpq
> pipelining). I haven't followed the other thread, so I don't have an
> opinion on that.
Well, as someone said in this thread, I think bulk insert is much more common than updates/deletes. That's why major DBMSs have INSERT VALUES (record1), (record2), ... and INSERT SELECT, and Oracle additionally has direct path INSERT. As for the comparison between a multi-record INSERT and libpq batching (= multiple single-record INSERTs), I think the former is more efficient because less data is transferred and the parsing and planning of an INSERT for each record is eliminated.
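To make the comparison concrete, here is a minimal sketch of the two approaches (table and column names are made up for illustration):

```sql
-- Multi-record INSERT: a single statement is parsed and planned once,
-- and the per-statement protocol overhead is paid only once.
INSERT INTO t (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- libpq batching of single-record INSERTs: round trips are saved by
-- pipelining, but each statement is still parsed and planned separately.
INSERT INTO t (id, val) VALUES (1, 'a');
INSERT INTO t (id, val) VALUES (2, 'b');
INSERT INTO t (id, val) VALUES (3, 'c');
```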
I don't deny the usefulness of libpq batch/pipelining, but I'm not sure app developers would really use it. If they want to reduce client-server round trips, won't they use traditional stored procedures instead? Yes, stored procedure languages are very DBMS-specific. So I'd like to know what kinds of well-known applications use a standard batching API like JDBC's batch updates. (Sorry, that should be discussed in the libpq batch/pipelining thread; this thread should not be polluted.)
> Note however we're doing two things here, actually - we're implementing
> custom batching for postgres_fdw, but we're also extending the FDW API
> to allow other implementations do the same thing. And most of them won't
> be able to rely on the connection library providing that, I believe.
I'm afraid so, too. Then postgres_fdw would serve as the example that other FDW developers look at when they implement INSERT with multiple records.
Regards
Takayuki Tsunakawa