Re: Sphinx indexing problem - Mailing list pgsql-novice

From Joshua Tolley
Subject Re: Sphinx indexing problem
Date
Msg-id AANLkTiktsvCazJ1viLG3BYpynD8ocG2soQK-YR1hoLSA@mail.gmail.com
In response to Re: Sphinx indexing problem  (Mladen Gogala <mladen.gogala@vmsinfo.com>)
List pgsql-novice
On Mon, May 24, 2010 at 6:02 AM, Mladen Gogala
<mladen.gogala@vmsinfo.com> wrote:
> Joshua Tolley wrote:
>> Is there anything I can do to prevent the API from attempting to put the
>> entire query result in memory?

>> Use a cursor, and fetch chunks of the result set one at a time.

> I would have done so, had I written the application. Unfortunately, the
> application was written by somebody else. Putting the entire result set in
> memory is a bad idea, and the Postgres client should be changed, probably by
> adding some configuration options, like the maximum memory the client is
> allowed to consume and a "swap file". These options should be configurable
> per user, not system-wide. As I said in my post, I do have a solution
> for my immediate problem, but it slows things down:

You're definitely right; the current behavior is painful in some
cases. Using a cursor is the typical solution where it's possible. The
change you have in mind is on the TODO list (cf.
http://wiki.postgresql.org/wiki/Todo, "Allow statement results to be
automatically batched to the client"); it hasn't been tackled at this
point.
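For anyone finding this thread later, here is a minimal sketch of the
cursor approach suggested above. The table and cursor names are made
up for illustration; substitute your own query:

```sql
BEGIN;  -- a cursor only lives inside a transaction

-- hypothetical table; use whatever query the application runs
DECLARE doc_cur CURSOR FOR
    SELECT id, body FROM documents;

-- fetch a bounded chunk at a time; repeat until no rows come back
FETCH 1000 FROM doc_cur;
FETCH 1000 FROM doc_cur;

CLOSE doc_cur;
COMMIT;
```

Because each FETCH pulls only a fixed number of rows, the client never
holds more than one chunk in memory, at the cost of extra round trips.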

- Josh
