Re: Out of memory error on huge resultset - Mailing list pgsql-jdbc
| From | Dave Cramer |
|---|---|
| Subject | Re: Out of memory error on huge resultset |
| Date | |
| Msg-id | 1034351885.2777.101.camel@inspiron.cramers |
| In response to | Re: Out of memory error on huge resultset (Doug Fields <dfields-postgres@pexicom.com>) |
| Responses | Re: Out of memory error on huge resultset; Re: Out of memory error on huge resultset |
| List | pgsql-jdbc |
This really is an artifact of the way that postgres gives us the data. When you query the backend you get *all* of the results of the query, and there is no indication of how many results you are going to get. In simple selects it would be possible to get some idea by using count(field), but this wouldn't work in nearly enough cases to make it useful. So that leaves us with using cursors, which still won't tell you how many rows you are getting back, but at least you won't have the memory problems.

This approach is far from trivial, which is why it hasn't been implemented yet. Keep in mind that result sets support things like move(n), first(), and last(), the last of which will be the trickiest. Not to mention updateable result sets. As it turns out there is a mechanism to get to the end of a cursor (MOVE 0 IN cursor), but that behaviour is currently considered a bug.

Dave

On Fri, 2002-10-11 at 11:44, Doug Fields wrote:
> At 08:27 AM 10/11/2002, snpe wrote:
> >Barry,
> >  Is it true?
> >I created a table with one column varchar(500) and entered 1 million rows
> >of length 10-20 characters. The JDBC query 'select * from a' gets an
> >'out of memory' error, but psql does not.
> >I inserted 8 million rows and psql still works fine (slow, but it works).
>
> The way the code works in JDBC is, in my opinion, a little poor but
> possibly mandated by the JDBC design specs.
>
> It reads the entire result set from the database backend and caches it in a
> horrible Vector (which should really be a List, and which should at least
> make an attempt to get the number of rows ahead of time to avoid all the
> resizing problems).
>
> Then it doles it out from memory as you go through the ResultSet with the
> next() method.
>
> I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE
> WHOLE THING - through the result set as each row is returned from the
> backend, thus ensuring that you never use much more memory than one row
> needs, EVEN IF you have to keep the connection locked.
>
> The latter is what I expected it to do.
> The former is what it does. So it necessitates turning EVERY SELECT query
> which you think has more than a few rows (or which you think COULD have
> more than a few rows, "few" being defined by your VM memory limits) into a
> cursor-based query. Really kludgy. I intend to write a class to do that for
> every SELECT query for me automatically.
>
> Cheers,
>
> Doug
>
> >In the C library there is an 'execute query' without a fetch - in JDBC,
> >executeQuery fetches all rows, and this is the problem. I think that
> >executeQuery should only prepare the query, and that fetching
> >(ResultSet.next or ...) should fetch only fetchSize rows.
> >I am not sure, but I think that the problem is with JDBC, not PostgreSQL.
> >Hackers?
> >Does psql fetch all rows, and if not, how many?
> >Can I change the fetch size in psql?
> >CURSOR, FETCH and MOVE aren't a solution:
> >if I use JDBC from a third-party IDE, I can't force this solution.
> >
> >regards
> >
> >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:
> > > Nick,
> > >
> > > This has been discussed before on this list many times. But the short
> > > answer is that that is how the postgres server handles queries. If you
> > > issue a query, the server will return the entire result. (Try the same
> > > query in psql and you will have the same problem.) To work around this
> > > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE
> > > SQL commands for postgres).
> > >
> > > thanks,
> > > --Barry
> > >
> > > Nick Fankhauser wrote:
> > > > I'm selecting a huge ResultSet from our database - about one million
> > > > rows, with one of the fields being varchar(500). I get an out of
> > > > memory error from Java.
> > > >
> > > > If the whole ResultSet gets stashed in memory, this isn't really
> > > > surprising, but I'm wondering why this happens (if it does), rather
> > > > than a subset around the current record being cached and other rows
> > > > being retrieved as needed.
> > > > If it turns out that there are good reasons for it to all be in
> > > > memory, then my question is whether there is a better approach that
> > > > people typically use in this situation. For now, I'm simply breaking
> > > > up the select into smaller chunks, but that approach won't be
> > > > satisfactory in the long run.
> > > >
> > > > Thanks
> > > >
> > > > -Nick
> > > >
> > > > Nick Fankhauser  nickf@ontko.com  Phone 1.765.935.4283  Fax 1.765.962.9788
> > > > Ray Ontko & Co.  Software Consulting Services  http://www.ontko.com/
> > > >
> > > > ---------------------------(end of broadcast)---------------------------
> > > > TIP 5: Have you checked our extensive FAQ?
> > > >
> > > > http://www.postgresql.org/users-lounge/docs/faq.html
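[Editor's note: the DECLARE CURSOR / FETCH / MOVE workaround that Barry and Dave describe can be sketched as below. This is a minimal illustration, not code from the thread; the cursor name `huge_rs` and the batch size of 1000 are arbitrary choices.]

```sql
BEGIN;  -- a cursor declared WITHOUT HOLD only lives inside a transaction

-- Declare a cursor over the problematic query instead of running it directly
DECLARE huge_rs CURSOR FOR SELECT * FROM a;

-- Pull rows in bounded batches; repeat until a batch comes back short
FETCH 1000 FROM huge_rs;
FETCH 1000 FROM huge_rs;

-- Skip rows without transferring them to the client
MOVE 500 IN huge_rs;

CLOSE huge_rs;
COMMIT;
```

Each FETCH transfers at most one batch to the client, so memory use is bounded by the batch size rather than the total result size. (Later versions of the PostgreSQL JDBC driver automate this pattern: with autocommit off, calling Statement.setFetchSize(n) makes the driver fetch rows in batches behind a cursor instead of materializing the whole result in a Vector.)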