Re: JDBC and processing large numbers of rows - Mailing list pgsql-jdbc

From Oliver Jowett
Subject Re: JDBC and processing large numbers of rows
Msg-id 40A22544.8000206@opencloud.com
In response to Re: JDBC and processing large numbers of rows  (Guido Fiala <guido.fiala@dka-gmbh.de>)
List pgsql-jdbc
Guido Fiala wrote:
> On Wednesday, 12 May 2004 12:00, Kris Jurka wrote:
>
>>The backend spools to a file when a materialized cursor uses more than
>>sort_mem amount of memory.  This is not quite the same as swapping as it
>>will consume disk bandwidth, but it won't hog memory from other
>>applications.
>
> Well, that's good on one side, but from the user's side it's worse:
>
> He will see a large drop in performance (factor 1000) as soon as the database
> starts using disk for such things. OK - once the database is too large to be
> held in memory it is disk-bandwidth-limited anyway...

What about the kernel cache? I doubt you'll see a *sudden* drop in
performance; it'll just degrade gradually towards disk speed as your
resultset gets larger.
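For reference, the usual JDBC-side approach to large result sets is to let the driver fetch through a cursor in batches rather than materializing everything at once. A minimal sketch - the connection URL, credentials, and the `items` table are hypothetical, and cursor mode requires autocommit off on the PostgreSQL driver:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamRows {
    public static void main(String[] args) throws SQLException {
        // Hypothetical database, user, and table - adjust to your setup.
        String url = "jdbc:postgresql://localhost/test";
        Connection conn = DriverManager.getConnection(url, "user", "pass");
        try {
            // With autocommit on, the driver fetches the whole result set
            // up front; turning it off allows cursor-based fetching.
            conn.setAutoCommit(false);
            Statement st = conn.createStatement();
            // Fetch 50 rows per round trip instead of all rows at once.
            st.setFetchSize(50);
            ResultSet rs = st.executeQuery("SELECT id FROM items");
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
            rs.close();
            st.close();
            conn.commit();
        } finally {
            conn.close();
        }
    }
}
```

This keeps client-side memory bounded by the fetch size, at the cost of extra round trips; the server-side spooling behaviour discussed above is unaffected.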

-O
