Timo,
This question has been answered before on this list, so you can check the
mail archives for a good answer. But in short, the only way to do this
today is either to use a LIMIT clause in the SQL or to use the
SQL cursor statements: DECLARE CURSOR, FETCH, CLOSE, etc.
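For example, the cursor approach might look something like this from JDBC (a minimal sketch, not tested against your setup; the connection URL, credentials, and the table name "items" are placeholders you would replace with your own):

```java
import java.sql.*;

// Sketch: read a large result set in batches of 100 rows using a
// server-side cursor, instead of pulling every row into memory at once.
public class CursorFetch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials -- substitute your own.
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql:test", "user", "password");
        conn.setAutoCommit(false); // cursors only live inside a transaction
        Statement st = conn.createStatement();
        st.execute("DECLARE mycur CURSOR FOR SELECT * FROM items");
        while (true) {
            // Each FETCH brings back at most 100 rows.
            ResultSet rs = st.executeQuery("FETCH 100 FROM mycur");
            int n = 0;
            while (rs.next()) {
                n++; // process the row here; stop early whenever you like
            }
            rs.close();
            if (n == 0) break; // cursor exhausted
        }
        st.execute("CLOSE mycur");
        conn.commit();
        conn.close();
    }
}
```

Since you stop reading once your application logic is satisfied, you can simply CLOSE the cursor after however many FETCHes you need, and the remaining rows are never sent to the client.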
The behavior you are seeing is how the backend talks to client
applications: when a query is executed, the entire result set is sent to
the client at once and must be read by the client before any other
operations can be performed. There is an item on the JDBC todo list to
work around this limitation by implicitly using cursors, but that won't
happen before 7.3.
thanks,
--Barry
Timo Savola wrote:
> Hi. I'm new to this list, so please be gentle.
>
> I've been using PostgreSQL with a Java application that needs to SELECT
> (without LIMIT) from a table with a lot of rows. I tested the
> application when the table in question had over 2 million rows, and the
> JVM kept running out of memory (of course I could tune the heap size
> settings). I don't actually need all the 2 million rows; the application
> logic selects some of the first rows (in a manner that I couldn't
> implement in the SQL query) and then stops reading the ResultSet.
>
> I checked the code of the JDBC driver. As far as I can understand
> org/postgresql/jdbc2/ResultSet.java, the whole result set is read into
> memory at once. Is there any practical way to tune the driver to read
> the rows a little at a time? Any plans to enhance the driver to do that?
> Would it be a big job to write such a patch without prior knowledge of
> the internals of PostgreSQL?
>
> Timo