On Wed, 2002-12-04 at 10:55, Tom Lane wrote:
> Joel Burton <joel@joelburton.com> writes:
> > And you're wondering what will happen if you write a query that returns
> > more than 2 billion rows (2000 million)?
>
> The first thing that will happen is that you'll run out of memory in the
> client process to store all those rows in your PGresult.
>
> You can handle ridiculously-large query results by opening a cursor
> and fetching reasonable-sized chunks with FETCH; it might take a while
> but in theory you could process an indefinitely large result that way.
>
> I don't think the int-sized result of PQntuples is going to be an issue
> in the foreseeable future --- platforms that could hold >2G tuples are
> no doubt going to use a wider size of int.
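The cursor approach Tom describes is, in PostgreSQL, a DECLARE ... CURSOR FOR SELECT ... followed by repeated FETCH <n> FROM <cursor> until no rows come back, so only one chunk is ever held on the client. The same chunked-fetch idea can be sketched standalone with Python's sqlite3 module (SQLite stands in for PostgreSQL here purely so the snippet runs by itself; the table `t` is made up):

```python
import sqlite3

# In-memory stand-in for a very large table. With PostgreSQL/libpq you
# would instead DECLARE a cursor and FETCH chunks server-side; the
# client-side pattern below is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100_000)])

cur = conn.execute("SELECT id FROM t")
total = 0
while True:
    chunk = cur.fetchmany(10_000)   # bounded memory: one chunk at a time
    if not chunk:                   # empty chunk means the result is done
        break
    total += len(chunk)            # ... process the chunk here ...

print(total)  # 100000
```

Memory stays proportional to the chunk size rather than the full result, which is the whole point of the cursor/FETCH approach.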
Besides, at 100ns per insert, it would take roughly 29,000 years to
populate the table...
Also, those 9.2x10^18 (2^63) records at 8 bytes each would need
7.4x10^19 bytes. And that's not even considering that each record will
be considerably larger than just the 8 bytes.
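For the curious, the back-of-envelope arithmetic (assuming a signed
64-bit row counter, i.e. 2^63 rows):

```python
# Back-of-envelope limits for a 64-bit row counter.
rows = 2 ** 63                          # ~9.2e18 rows
seconds = rows * 100 / 1e9              # 100 ns per insert
years = seconds / (365.25 * 24 * 3600)  # ~29,000 years
bytes_needed = rows * 8                 # 8 bytes/record, the bare minimum

print(f"{rows:.1e} rows")
print(f"{years:,.0f} years")            # roughly 29,000
print(f"{bytes_needed:.1e} bytes")      # roughly 7.4e+19
```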
--
+------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "they love our milk and honey, but preach about another |
| way of living" |
| Merle Haggard, "The Fighting Side Of Me" |
+------------------------------------------------------------+