Re: libpq - lack of support to set the fetch size - Mailing list pgsql-general

From matshyeq
Subject Re: libpq - lack of support to set the fetch size
Date
Msg-id CAONr5=urpsJfSa_1HUAEqTqP0D9HLESS0yMTNACQQiz16M04Ew@mail.gmail.com
In response to Re: libpq - lack of support to set the fetch size  (Marko Kreen <markokr@gmail.com>)
Responses Re: libpq - lack of support to set the fetch size  (John DeSoi <desoi@pgedit.com>)
List pgsql-general



On Wed, Mar 12, 2014 at 9:30 AM, Marko Kreen <markokr@gmail.com> wrote:
> On Tue, Mar 11, 2014 at 12:39:12PM +0000, matshyeq wrote:
> > - when using PQsetSingleRowMode() function - does it give an option to
> > define how many rows to cache on client's side (like JDBC setFetchSize()
> > does) or leaves it at libpq's discretion?
>
> This option would not make sense as you are not "fetching" anything,
> full resultset is being streamed from server over TCP connection.
 
Well, I'm not sure what "streamed" means exactly here.
If the server pushes all the data sequentially without asking whether the client is ready to receive it, then that is exactly what this issue is about.
If the client asks the server for another chunk each time it has received the previous one, then to me that's an implicit 'fetching' scenario, in which the user/developer simply has no option to define the fetch size.

> > - is it/would it be possible to add corresponding option to pgAdmin to
> > limit initially (and each subsequently) returned rows in Query Tool by
> > custom defined max value?
>
> It could close connection in the middle of resultset but that seems like
> bad idea.  LIMIT N or FETCH N are better for such task.
 
I don't see why? I can't think of a single SQL tool I've worked with that didn't have this functionality, really.
The principle I find very simple and useful:
there is a defined "fetch row size" parameter (each tool gives it its own name),
and after submitting ANY query the client fetches result set rows, but no more than that.
Some programs even set this value automatically based on the size of the result grid displayed on the screen.
The user then usually has two buttons - fetch another batch/screen, or fetch all - and he decides.
If he decides far too late (coffee break) then he simply resubmits the query (potentially changing the parameter first)...

I see no value in auto-fetching millions of rows just to present them to the user on the screen.
Nor do I think it's particularly useful to have to know and apply database-specific SQL syntax to limit the rows.
If you join multiple tables it may be even trickier (which table should the limit apply to? or should you use a subquery instead?).

Last but not least, I can even see that this option was once available in pgAdmin, but disappeared over time:

"In the options dialog, you can specify a default limit for the rowset size to retrieve. By default, this value will be 100. If the number of rows to retrieve from the server exceeds this value, a message box will appear asking what to do to prevent retrieval of an unexpected high amount of data. You may decide to retrieve just the first rows, as configured with the max rows setting, or retrieving the complete rowset regardless of the setting, or abort the query, effectively retrieving zero rows."

This is pretty much exactly what I'm talking about here (pgAdmin functionality),
and its equivalent for developers (libpq).

Regards
Msciwoj
