Thread: are cursors necessary?
In the program testlibpq.c, these five SQL statements are executed:

    res = PQexec(conn, "BEGIN");
    res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
    res = PQexec(conn, "FETCH ALL in myportal");
    res = PQexec(conn, "CLOSE myportal");
    res = PQexec(conn, "END");

Is this just to illustrate how to create transactions and cursors, or is
there some material difference between trimming the program down to just:

    res = PQexec(conn, "select * from pg_database");

Is there any value in my own query-only programs to declaring the cursor
for each search?

Many TIA,
Mark

--
Mark Harrison
Pixar Animation Studios
On Thursday 04 December 2003 22:46, Mark Harrison wrote:
>     res = PQexec(conn, "BEGIN");
>     res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
>     res = PQexec(conn, "FETCH ALL in myportal");
>     res = PQexec(conn, "CLOSE myportal");
>     res = PQexec(conn, "END");
>
> Is there any value in my own query-only programs to declaring the cursor
> for each search?

Well - if you want to scroll forward/backward through the result set, you'd
want a cursor. Or, if your client has limited memory and the result set is
large, you might want one too. PG returns all rows at once, so if your SELECT
returns 5 million rows you'll use a lot of RAM on the client side.

--
  Richard Huxton
  Archonet Ltd
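A minimal sketch of the scrolling case Richard mentions, assuming an already
established connection `conn`; the SCROLL keyword and the fetch counts are
illustrative choices, not taken from testlibpq.c, and error checking with
PQresultStatus() is omitted for brevity:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: move forward and then backward through a result set
     * using a SCROLL cursor inside a transaction. */
    void scroll_example(PGconn *conn)
    {
        PGresult *res;

        res = PQexec(conn, "BEGIN");
        PQclear(res);

        res = PQexec(conn,
            "DECLARE myportal SCROLL CURSOR FOR select * from pg_database");
        PQclear(res);

        /* Read the first 10 rows... */
        res = PQexec(conn, "FETCH FORWARD 10 FROM myportal");
        printf("forward: %d rows\n", PQntuples(res));
        PQclear(res);

        /* ...then step back over the previous 5. */
        res = PQexec(conn, "FETCH BACKWARD 5 FROM myportal");
        printf("backward: %d rows\n", PQntuples(res));
        PQclear(res);

        res = PQexec(conn, "CLOSE myportal");
        PQclear(res);

        res = PQexec(conn, "END");
        PQclear(res);
    }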
Richard Huxton wrote:
> On Thursday 04 December 2003 22:46, Mark Harrison wrote:
>>     res = PQexec(conn, "BEGIN");
>>     res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
>>     res = PQexec(conn, "FETCH ALL in myportal");
>>     res = PQexec(conn, "CLOSE myportal");
>>     res = PQexec(conn, "END");
>
>> Is there any value in my own query-only programs to declaring the cursor
>> for each search?
>
> Well - if you want to scroll forward/backward through the result set, you'd
> want a cursor. Or, if your client has limited memory and the result set is
> large, you might want one too. PG returns all rows at once, so if your
> SELECT returns 5 million rows you'll use a lot of RAM on the client side.

You really think people would ever want to store more than 640 rows?

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
Jan Wieck wrote:
> Richard Huxton wrote:
>> On Thursday 04 December 2003 22:46, Mark Harrison wrote:
>>>     res = PQexec(conn, "BEGIN");
>>>     res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
>>>     res = PQexec(conn, "FETCH ALL in myportal");
>>>     res = PQexec(conn, "CLOSE myportal");
>>>     res = PQexec(conn, "END");
>>
>>> Is there any value in my own query-only programs to declaring the cursor
>>> for each search?
>>
>> Well - if you want to scroll forward/backward through the result set,
>> you'd want a cursor. Or, if your client has limited memory and the
>> result set is large, you might want one too. PG returns all rows at
>> once, so if your SELECT returns 5 million rows you'll use a lot of
>> RAM on the client side.
>
> You really think people would ever want to store more than 640 rows?
>
> Jan

Ha ha! With each one being 1K? You, sir, have a wicked sense of humor...

Mike Mascari
mascarm@mascari.com
Mark Harrison wrote:
> Is this just to illustrate how to create transactions and cursors, or is
> there some material difference between trimming the program down ...

But then you wouldn't be able to test that transactions, cursors and queries
work :)  And you wouldn't be able to thumb your nose at your friends who use
that *other* popular database ;)

For your own purposes, you would only use cursors where you're expecting to
get back lots of data. Cursors not only save client memory, they save network
bandwidth too - you might have 2GB of RAM in your machine that can copy data
around at a rate of hundreds of megabytes per second, but transferring that
much data over a 100Mbps network takes time.

If you're looking to make your database feel faster, it can be better to
transfer one bunch of rows at a time. You might have an interface that shows
one pageful of details at a time - this is ideal cursor fodder, since
PostgreSQL can feed you the results in exactly the quantities that you need
for your pages. A sketch of that page-at-a-time pattern follows below.

Just a thought for the day :)

Alex
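A minimal sketch of the page-at-a-time pattern Alex describes, assuming an
open connection `conn`, an open transaction, and a cursor already declared
(the cursor name "pager", the page size of 25, and the query are made up for
illustration; error checking is omitted):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: fetch and display one "page" of rows per user request.
     * Assumes something like this has already run in the transaction:
     *   DECLARE pager CURSOR FOR select datname from pg_database
     */
    void show_next_page(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "FETCH 25 FROM pager");
        int i, n = PQntuples(res);

        for (i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));

        PQclear(res);
    }

Each call pulls only the next 25 rows across the network, so the first page
appears without waiting for - or buffering - the whole result set.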
mh@pixar.com (Mark Harrison) writes:
> In the program testlibpq.c, these five SQL statements are executed:
>
>     res = PQexec(conn, "BEGIN");
>     res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
>     res = PQexec(conn, "FETCH ALL in myportal");
>     res = PQexec(conn, "CLOSE myportal");
>     res = PQexec(conn, "END");
>
> Is this just to illustrate how to create transactions and cursors, or is
> there some material difference between trimming the program down to just:
>
>     res = PQexec(conn, "select * from pg_database");
>
> Is there any value in my own query-only programs to declaring the cursor
> for each search?

Yes, there is value in it, albeit not likely for that particular query.

Let's suppose that you have a query that is ultimately going to return 45
million records. If you run it as a plain "select * from ...", then they'll
all show up at once. If you declare a cursor, you can, in a loop, do
something like:

    while (data_left()) {
        res = PQexec(conn, "fetch 1000 in gory_query_cursor");
        do_stuff_with(res);
    }

That'll pull the records in more bite-sized chunks, and allow you to start
processing data as soon as _some_ results come back. You don't have to wait
for the whole barrel of data to get dropped on you.

--
let name="cbbrowne" and tld="libertyrms.info" in String.concat "@" [name;tld];;
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)
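Filling in Christopher's sketch under some assumptions: the cursor is assumed
to be declared already inside an open transaction, the per-row handler
process_row() is hypothetical, and the "data left" test is realized by
noticing that FETCH returns zero rows once the cursor is exhausted. Error
checking with PQresultStatus() is again omitted:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Hypothetical per-row handler: print the first column. */
    static void process_row(PGresult *res, int row)
    {
        printf("%s\n", PQgetvalue(res, row, 0));
    }

    /* Sketch: drain a cursor 1000 rows at a time.  FETCH returns zero
     * rows once the cursor is exhausted, which ends the loop. */
    void drain_cursor(PGconn *conn)
    {
        for (;;) {
            PGresult *res = PQexec(conn, "FETCH 1000 FROM gory_query_cursor");
            int i, n = PQntuples(res);

            if (n == 0) {           /* no rows left */
                PQclear(res);
                break;
            }
            for (i = 0; i < n; i++)
                process_row(res, i);
            PQclear(res);
        }
    }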