On Apr 7, 2004, at 7:51 AM, Jan Wieck wrote:
> Eric Ridge wrote:
>> On Apr 6, 2004, at 11:54 AM, Jan Wieck wrote:
>>> If the underlying query is for example a simple sequential scan,
>>> then the result set is not materialized but every future fetch
>>> operation will read directly from the base table. This would
>>> obviously get screwed up if vacuum would think nobody needs those
>>> rows any more.
>> Is vacuum the only thing that would muck with the rows?
>
> Vacuum is the only thing that cares for the dustmites, yes.
And WITH HOLD is strong enough to defend against a vacuum, I hope...
>> I need to set up a 7.4 test server and play with this some, and figure
>> out if the benefits are really what I want them to be. I do
>> appreciate the insight into how cursors work... it helps a lot!
>
> Experience and knowledge can only be replaced by more experience and
> more knowledge.
Very wise words.
My real problem is that the JDBC drivers (and I assume this is true for
all client interfaces) buffer the results of a SELECT in memory,
because the backend pushes out all the tuples as the response. I'm not
dealing with a large number of rows (only a few thousand), but they're
very wide, and many contain fields with multi-megabyte data. In some
situations, when I've got a lot of open ResultSets, the JVM throws
OutOfMemory errors.
One half-baked thought was to hack the JDBC drivers to have 'em gzip
large resultsets in memory. Wouldn't completely solve the problem, but
would probably help quite a bit. But the better solution is to use
cursors. We're not in a position to upgrade to 7.4 just yet, so we'll
just deal with the OutOfMemory errors until we can.
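For what it's worth, the cursor-based approach I have in mind would look roughly like this once we're on 7.4 (the table name and batch size are just placeholders):

```sql
BEGIN;
-- WITH HOLD lets the cursor outlive the transaction; per the earlier
-- discussion, the result may be materialized rather than read live
-- from the base table, so vacuum can't pull the rows out from under it
DECLARE wide_rows CURSOR WITH HOLD FOR
    SELECT * FROM big_wide_table;
COMMIT;

-- fetch in batches instead of buffering the whole result client-side
FETCH FORWARD 100 FROM wide_rows;
-- ...repeat FETCH until it returns no rows...

CLOSE wide_rows;
```

The client would only ever hold one batch of 100 rows in memory at a time, which should keep the JVM well clear of OutOfMemory territory even with multi-megabyte fields.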
eric