Re: JDBC and processing large numbers of rows - Mailing list pgsql-jdbc

From Dave Cramer
Subject Re: JDBC and processing large numbers of rows
Date 2004-05-12
Msg-id 1084359414.1536.149.camel@localhost.localdomain
In response to Re: JDBC and processing large numbers of rows  (Guido Fiala <guido.fiala@dka-gmbh.de>)
List pgsql-jdbc
Guido,

No, this isn't the case. If you use cursors inside a transaction, you
will be able to have an arbitrarily large cursor open (of any size,
AFAIK).
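
To be concrete: with the PostgreSQL JDBC driver, turning off autocommit
and setting a non-zero fetch size makes the driver read the result set
through a backend cursor in batches, so only one batch of rows sits in
client memory at a time. Something along these lines should do it (the
connection URL, credentials, table name, and fetch size below are just
placeholders):

    import java.sql.*;

    public class CursorFetchExample {
        public static void main(String[] args) throws Exception {
            // Load the driver explicitly (needed before JDBC 4).
            Class.forName("org.postgresql.Driver");

            // Placeholder connection details.
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            // Cursors only live inside a transaction, so autocommit
            // must be off for the driver to use one.
            con.setAutoCommit(false);

            Statement st = con.createStatement();
            // A non-zero fetch size asks the driver to pull rows from
            // a backend cursor in batches rather than materializing
            // the whole result set on the client.
            st.setFetchSize(50);

            ResultSet rs = st.executeQuery("SELECT * FROM bigtable");
            while (rs.next()) {
                // process one row; only ~50 rows are buffered here
            }
            rs.close();
            st.close();

            con.commit();
            con.close();
        }
    }

With this setup neither side has to hold the full result set at once;
the backend generally streams rows from the cursor as they are fetched
(though some plans, e.g. ones that have to sort, may still materialize
the result).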

--dc--
On Wed, 2004-05-12 at 02:37, Guido Fiala wrote:
> Reading all this, I'd like to know whether this isn't just a tradeoff
> between _where_ the memory is consumed?
>
> If your JDBC client holds it all in memory, it gets an OutOfMemoryError.
>
> If your backend uses cursors, it caches the whole resultset, probably
> starts swapping, and gets slow (it needs the memory for all users).
>
> If you use LIMIT and OFFSET, the database has to do more work to find
> the data snippet, and in the worst case (the last few records) it still
> needs the whole resultset temporarily? (not sure here)
>
> Is that just a "choose your poison"? At least in the first case the
> memory of the client _gets_ used too, rather than putting all the load
> on the backend; on the other hand, most of the time the user does not
> really read all the data anyway, so it puts unnecessary load on all the
> hardware.
>
> I'd really like to know what the best way to go is then...
>
> Guido
>
--
Dave Cramer
519 939 0336
ICQ # 14675561

