Re: Support allocating memory for large strings - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Support allocating memory for large strings
Msg-id 1983209.1762810630@sss.pgh.pa.us
In response to Re: Support allocating memory for large strings  (Nathan Bossart <nathandbossart@gmail.com>)
List pgsql-hackers
Nathan Bossart <nathandbossart@gmail.com> writes:
> FWIW something I am hearing about more often these days, and what I believe
> Maxim's patch is actually after, is the 1GB limit on row size.  Even if
> each field doesn't exceed 1GB (which is what artifacts.md seems to
> demonstrate), heap_form_tuple() and friends can fail to construct the whole
> tuple.  This doesn't seem to be covered in the existing documentation about
> limits [0].

Yeah.  I think our hopes of relaxing the 1GB limit on individual
field values are about zero, but maybe there is some chance of
allowing tuples that are wider than that.  The notion that it's
a one-line fix is still ludicrous though :-(

One big problem with a scheme like that is "what happens when
I try to make a bigger-than-1GB tuple into a composite datum?".

Another issue is what happens when a wider-than-1GB tuple needs
to be sent to or from clients.  I think there are assumptions
in the wire protocol about message lengths fitting in an int,
for example.  Even if the protocol were okay with it, I wouldn't
count on client libraries not to fall over.

On the whole, it's a nasty can of worms, and I stand by the
opinion that the cost-benefit ratio of removing the limit is
pretty awful.

            regards, tom lane
