Re: [HACKERS] [bug-fix] Cannot select big bytea values (~600MB) - Mailing list pgsql-hackers

From Robert Haas
Subject Re: [HACKERS] [bug-fix] Cannot select big bytea values (~600MB)
Msg-id CA+TgmoYRnY3g_Ab9uFDezxXyuUg3ZPyvjK6vR2uqZoSsDm7=tw@mail.gmail.com
In response to Re: [HACKERS] [bug-fix] Cannot select big bytea values (~600MB)  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Tue, Feb 27, 2018 at 2:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> +1.  We don't have to support everything, but things that don't work
>> should fail on insertion, not retrieval.  Otherwise what we have is
>> less a database and more a data black hole.
>
> That sounds nice as a principle but I'm not sure how workable it really
> is.  Do you want to reject text strings that fit fine in, say, LATIN1
> encoding, but might be overlength if some client tries to read them in
> UTF8 encoding?  (bytea would have a comparable problem with escape vs hex
> representation, for instance.)  Should the limit vary depending on how
> many columns are in the table?  Should we account for client-side tuple
> length restrictions?

I suppose what I really want is a limit on the retrieved data that's
large enough that people stop hitting it.
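[Editor's note: the representation-dependent expansion Tom mentions can be sketched with back-of-envelope arithmetic. This is an illustrative sketch, not PostgreSQL code; the helper names are hypothetical. It assumes hex output is "\x" plus two hex digits per byte, and that escape output can reach four characters per byte for non-printable bytes, so a ~600 MB stored bytea can produce over 1 GB of output text.]

```python
# Hypothetical back-of-envelope for bytea output sizes (not PostgreSQL code).

def hex_output_len(raw_bytes: int) -> int:
    # hex format: '\x' prefix, then 2 hex digits per raw byte
    return 2 + 2 * raw_bytes

def escape_output_len_worst_case(raw_bytes: int) -> int:
    # escape format, worst case: every byte rendered as backslash + 3 octal
    # digits (e.g. b'\x00' -> '\\000'), i.e. 4 output chars per raw byte
    return 4 * raw_bytes

raw = 600 * 1024 * 1024          # a ~600 MB stored value
one_gb = 1024 ** 3
print(hex_output_len(raw))        # ~1.2 GB of output text
print(hex_output_len(raw) > one_gb)        # hex form blows past 1 GB
print(escape_output_len_worst_case(raw) > one_gb)  # escape worst case too
```

The point being: the same stored value is under the limit in one output representation and over it in another, which is why an insertion-time check is hard to define.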

> Anyway, as Alvaro pointed out upthread, we've been down this particular
> path before and it didn't work out.  We need to learn something from that
> failure and decide how to move forward.

Yep.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

