pgsql: Improve efficiency of dblink by using libpq's new row processor - Mailing list pgsql-committers

From Tom Lane
Subject pgsql: Improve efficiency of dblink by using libpq's new row processor
Msg-id E1SFYsX-0004jr-E4@gemulon.postgresql.org
List pgsql-committers
Improve efficiency of dblink by using libpq's new row processor API.

This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result.  Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result().  In the latter
case we don't know the desired tuple rowtype soon enough.  While hack
solutions to that are possible, a different user-level API would
probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane

Branch
------
master

Details
-------
http://git.postgresql.org/pg/commitdiff/6f922ef88e43b3084cdddf4b5ffe525a00896a90

Modified Files
--------------
contrib/dblink/dblink.c  |  421 ++++++++++++++++++++++++++++++++++++++--------
doc/src/sgml/dblink.sgml |   20 ++-
2 files changed, 366 insertions(+), 75 deletions(-)

