Re: JDBC and processing large numbers of rows - Mailing list pgsql-jdbc

From Guido Fiala
Subject Re: JDBC and processing large numbers of rows
Date
Msg-id 200405120837.42865.guido.fiala@dka-gmbh.de
In response to Re: JDBC and processing large numbers of rows  (Sean Shanny <shannyconsulting@earthlink.net>)
Responses Re: JDBC and processing large numbers of rows
Re: JDBC and processing large numbers of rows
Re: JDBC and processing large numbers of rows
List pgsql-jdbc
Reading all this, I'd like to know whether this isn't just a trade-off over
_where_ the memory is consumed.

If your JDBC client holds everything in memory, it gets an OutOfMemoryError.

If your backend uses cursors, it caches the whole result set, probably
starts swapping and gets slow (it needs memory for every user's result set).
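
For what it's worth, this is the kind of cursor-based fetching I have in mind
(just a sketch; the connection details, table name and batch size are made up).
As far as I know the PostgreSQL JDBC driver only uses a server-side cursor when
autocommit is off and a fetch size is set:

import java.sql.*;

public class CursorFetch {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        // Autocommit must be off, otherwise the driver buffers the whole result set.
        conn.setAutoCommit(false);
        Statement stmt = conn.createStatement();
        // Fetch rows from the backend in batches of 100 via a server-side cursor.
        stmt.setFetchSize(100);
        ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table");
        while (rs.next()) {
            // Process one row at a time; only about 100 rows are held in client memory.
            System.out.println(rs.getInt(1) + ": " + rs.getString(2));
        }
        rs.close();
        stmt.close();
        conn.commit();
        conn.close();
    }
}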

If you use LIMIT and OFFSET, the database has to do more work to find each
data snippet, and in the worst case (the last few records) it may still have to
build the whole result set temporarily? (not sure here)
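
By LIMIT and OFFSET I mean paging through the result in chunks like this
(again only a sketch; the table name and page size are invented). My worry is
that the backend still has to step over all the earlier rows for each page:

import java.sql.*;

public class OffsetPaging {
    static void readAll(Connection conn) throws SQLException {
        final int pageSize = 100;
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM big_table ORDER BY id LIMIT ? OFFSET ?");
        for (int offset = 0; ; offset += pageSize) {
            ps.setInt(1, pageSize);
            ps.setInt(2, offset);
            ResultSet rs = ps.executeQuery();
            int rows = 0;
            while (rs.next()) {
                rows++;
                // Process the row; the backend had to skip 'offset' rows to produce it.
            }
            rs.close();
            if (rows < pageSize) {
                break; // last (possibly partial) page
            }
        }
        ps.close();
    }
}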

Is that just a "choose your poison" situation? At least in the first case the
client's memory gets used too, instead of putting all the load on the backend;
on the other hand, most of the time the user does not really read all the data
anyway, so it puts unnecessary load on all the hardware.

I'd really like to know what the best way to go is then...

Guido
