Thread: what is the max number of rows that can be returned
hi all,

i am using PostgreSQL 7.1.3 with the libpq C API.

suppose i create a table tbl (col int8 primary key) and populate the table to the full. Now if i retrieve the result with

    int a;
    ...
    result = PQexec(conn, "SELECT * FROM tbl");
    a = PQntuples(result);
    ...

what will be the value of a?

The value must surely wrap around, i presume (PQntuples returns int). Is this limitation because of the C API?

When there is provision to insert that many rows, is there any way to obtain the correct number of rows (through the C API)? Or is it possible in any other API?

tia.

-- regards, hari
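[For reference, a complete minimal libpq program around the snippet above might look like the sketch below. The connection string and table name are placeholders, it needs a running server to actually execute, and -- as the replies point out -- it still tries to hold the entire result set in client memory, which is the crux of the thread.]

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>   /* libpq header; link with -lpq */

int main(void)
{
    /* Placeholder connection string -- adjust for your setup. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    PGresult *result = PQexec(conn, "SELECT * FROM tbl");
    if (PQresultStatus(result) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    } else {
        int a = PQntuples(result);   /* plain int: can report at most INT_MAX rows */
        printf("%d rows\n", a);
    }

    PQclear(result);
    PQfinish(conn);
    return EXIT_SUCCESS;
}
```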
On Wed, Dec 04, 2002 at 04:37:07PM +0530, HK wrote:
> suppose i create a table tbl (col int8 primary key) and populate the
> table to the full.
> [...]
> The value must surely wrap around, i presume (PQntuples returns int).
> Is this limitation because of the C API?
>
> When there is provision to insert that many rows, is there any way to
> obtain the correct number of rows (through the C API)? Or is it
> possible in any other API?

Ummm... you're going to populate the table "to the full" with an int8 primary key. That's 9 quintillion (trillion to you non-US types) rows, 18 quintillion if you use the negative side of the range as well. That's a lot of hamburgers.

And you're wondering what will happen if you write a query that returns more than 2 billion rows (2000 million)? I'm still wondering when this query would ever return, period.

I'm not sure exactly how you could get around that (wiser minds on the -general list might know), but really, should someone posting on pgsql-novice be trying to manage a database of this size? ;)

Is your question entirely academic?

--
Joel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton
Independent Knowledge Management Consultant
Joel Burton <joel@joelburton.com> writes:
> And you're wondering what will happen if you write a query that returns
> more than 2 billion rows (2000 million)?

The first thing that will happen is that you'll run out of memory in the client process to store all those rows in your PGresult.

You can handle ridiculously large query results by opening a cursor and fetching reasonable-sized chunks with FETCH; it might take awhile, but in theory you could process an indefinitely large result that way.

I don't think the int-sized result of PQntuples is going to be an issue in the foreseeable future --- platforms that could hold >2G tuples are no doubt going to use a wider size of int.

regards, tom lane
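[The cursor approach Tom describes can be sketched in libpq roughly as follows. This is an untested outline, not a definitive implementation: it assumes `conn` is already a valid connection, the cursor name and chunk size are arbitrary, and error handling is abbreviated. Note the running total is a 64-bit `long long`, so it cannot overflow the way a single `int` count could.]

```c
/* Sketch: process an arbitrarily large result in chunks via a cursor,
 * keeping a 64-bit row count.  Requires a live connection. */
#include <stdio.h>
#include <libpq-fe.h>

long long process_all_rows(PGconn *conn)
{
    long long total = 0;            /* 64-bit, unlike PQntuples' int */
    PGresult *res;

    /* Cursors only exist inside a transaction block. */
    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM tbl");
    PQclear(res);

    for (;;) {
        res = PQexec(conn, "FETCH 10000 FROM big_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);     /* at most 10000 here, so no wraparound */
        if (n == 0) {               /* cursor exhausted */
            PQclear(res);
            break;
        }
        /* ... process the n rows of this chunk ... */
        total += n;
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE big_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    return total;
}
```

[Only one chunk of 10000 rows is ever held in client memory at a time, which addresses Tom's first point as well as the counter width.]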
On Wed, 2002-12-04 at 10:55, Tom Lane wrote:
> You can handle ridiculously large query results by opening a cursor
> and fetching reasonable-sized chunks with FETCH; it might take awhile,
> but in theory you could process an indefinitely large result that way.

Besides, at 100 ns per insert, it would take about 29,000 years to populate the table... Also, those 9.2x10^18 records at 8 bytes each would need 7.4x10^19 bytes. And that's not even considering that each record will be considerably larger than just the 8 bytes.

--
+------------------------------------------------------------+
| Ron Johnson, Jr.     mailto:ron.l.johnson@cox.net          |
| Jefferson, LA  USA   http://members.cox.net/ron.l.johnson  |
|                                                            |
| "they love our milk and honey, but preach about another    |
|  way of living"                                            |
|    Merle Haggard, "The Fighting Side Of Me"                |
+------------------------------------------------------------+