Re: Selecting large tables gets killed - Mailing list pgsql-hackers

From Ashutosh Bapat
Subject Re: Selecting large tables gets killed
Date
Msg-id CAFjFpRdV-ZufO_ycqB-QuxGXuSmD74H0=3n_zmgV4sY4mKeNaA@mail.gmail.com
In response to Re: Selecting large tables gets killed  (Bernd Helmle <mailings@oopsware.de>)
List pgsql-hackers



On Thu, Feb 20, 2014 at 9:00 PM, Bernd Helmle <mailings@oopsware.de> wrote:


--On 20 February 2014 09:51:47 -0500 Tom Lane <tgl@sss.pgh.pa.us> wrote:

Yeah.  The other reason that you can't just transparently change the
behavior is error handling: people are used to seeing either all or
none of the output of a query.  In single-row mode that guarantee
fails, since some rows might get output before the server detects
an error.

That's true. I'd never envisioned doing this transparently either, exactly for this reason. However, I find that having single-row mode somewhere has some attractiveness, be it only to have some code around that shows how to do it right. But I fear we might complicate things in psql beyond what we really want.


Yes. Fixing this bug doesn't seem to be worth the code complexity it will add, especially when a workaround exists.
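
For the archives, since Bernd mentioned wanting "some code around that shows how to do it right": a minimal sketch of single-row mode in libpq (9.2 and up) could look like the following. This is only an illustration, not proposed psql code; the connection string and query are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");   /* placeholder conninfo */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* PQsetSingleRowMode must be called right after PQsendQuery,
     * before the first PQgetResult. */
    if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)") ||
        !PQsetSingleRowMode(conn))
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* Each PGRES_SINGLE_TUPLE result carries exactly one row, so we
     * never buffer the whole result set in memory. */
    while ((res = PQgetResult(conn)) != NULL)
    {
        switch (PQresultStatus(res))
        {
            case PGRES_SINGLE_TUPLE:
                printf("%s\n", PQgetvalue(res, 0, 0));
                break;
            case PGRES_TUPLES_OK:
                /* zero-row result marking normal end of the result set */
                break;
            default:
                /* Tom's caveat: rows already printed above cannot be
                 * taken back once an error result arrives. */
                fprintf(stderr, "%s", PQresultErrorMessage(res));
                break;
        }
        PQclear(res);
    }

    PQfinish(conn);
    return EXIT_SUCCESS;
}

Note how the error case lands in the middle of the loop: that's exactly the all-or-nothing guarantee Tom points out we would lose.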

Or, another option: when sufficiently large output is encountered (larger than some predefined value, MAX_ROWS or something), psql could behave as if FETCH_COUNT were set to MAX_ROWS. Documenting this behaviour wouldn't be a problem, I guess.
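
For completeness, the existing workaround is simply (table name made up):

\set FETCH_COUNT 1000
SELECT * FROM big_table;

With FETCH_COUNT set, psql runs the query through a cursor and fetches that many rows at a time, so memory stays bounded regardless of the result size.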
 
--
Thanks

        Bernd



--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
