Re: JDBC and processing large numbers of rows - Mailing list pgsql-jdbc

From Guido Fiala
Subject Re: JDBC and processing large numbers of rows
Date
Msg-id 200405121431.08734.guido.fiala@dka-gmbh.de
In response to Re: JDBC and processing large numbers of rows  (Kris Jurka <books@ejurka.com>)
Responses Re: JDBC and processing large numbers of rows
List pgsql-jdbc
On Wednesday, 12 May 2004 12:00, Kris Jurka wrote:
> The backend spools to a file when a materialized cursor uses more than
> sort_mem amount of memory.  This is not quite the same as swapping as it
> will consume disk bandwidth, but it won't hog memory from other
> applications.

Well, that's good on one side, but from the user's point of view it's worse:

He will see a large drop in performance (roughly a factor of 1000) as soon as the database starts using the disk for such things. OK - once the database is too large to be held in memory, it is disk-bandwidth-limited anyway...
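For reference, a minimal sketch of the client-side alternative discussed in this thread: with autocommit off and a non-zero fetch size, the PostgreSQL JDBC driver fetches rows in batches through a cursor instead of materializing the whole result set in client memory. Connection URL, credentials, table name and batch size below are made up for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StreamRows {
        public static void main(String[] args) throws Exception {
            // hypothetical connection details, for illustration only
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            // the driver only uses a cursor when autocommit is off
            // and a non-zero fetch size is set on the statement
            con.setAutoCommit(false);

            Statement st = con.createStatement();
            st.setFetchSize(1000);  // fetch 1000 rows per round trip

            ResultSet rs = st.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // process one row at a time without holding the
                // whole result set in client memory
            }
            rs.close();
            st.close();
            con.commit();
            con.close();
        }
    }

Whether the batch fetching is noticeably slower than a fully materialized result set depends on the same disk/memory trade-off discussed above.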


