Re: BUG #19431: limitation of the data type jsonb equals 8Kb - Mailing list pgsql-bugs

From Andrew Dunstan
Subject Re: BUG #19431: limitation of the data type jsonb equals 8Kb
Msg-id bca753e5-58aa-45d2-b87b-432489bbefa5@dunslane.net
In response to Re: BUG #19431: limitation of the data type jsonb equals 8Kb  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-bugs
On 2026-03-12 Th 11:55 AM, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> On 2026-03-12 Th 8:11 AM, PG Bug reporting form wrote:
>>> The following bug has been logged on the website:
>>>
>>> Bug reference:      19431
>>> Logged by:          Alex Perov
>>> Email address:      perovaa@sbermarketing.ru
>>> PostgreSQL version: 18.3
>>> Operating system:   Windows 10
>>> Description:
>>>
>>> Hello.
>>> I have a type:
>>> CREATE TYPE public.t_dict AS
>>> (
>>> id bigint,
>>> name text,
>>> gid uuid,
>>> alias text,
>>> bgcolor text,
>>> disabled boolean,
>>> checked boolean,
>>> options jsonb,
>>> "order" integer
>>> );
>>> and function
>>> CREATE OR REPLACE FUNCTION public.f_func(data jsonb, uid bigint, context
>>> text = null, ext text = null, pid bigint = null, pgid uuid = null)
>>> returns setof rd.t_dict
>>> language plpgsql
>>> AS $$
>>> begin
>>> context := coalesce(context, data->>'context', '');
>>> ext = coalesce(ext, '');
>>>
>>> create temp table tmp_f_table of public.t_dict;
>>> if dict = 'dict1' then
>>> insert into tmp_f_table (id, name)
>>> select
>>> 1, name
>>> from public.tbl1 a;
>>> elsif dict = 'dict2' then
>>> insert into tmp_f_table (id, name, alias)
>>> select
>>> t.objecttypeid, t.objecttype, t.alias
>>> from public.tbl2 t
>>> where (t.objecttypeid <> 0)
>>> and t.active;
>>> elsif dict = 'dict3' then
>>> insert into tmp_f_table (id, name, options)
>>> select
>>> t.linktypeid, t.linktype,
>>> json_object(
>>> 'rev': t.reftypeid = (data->'objecttype')::bigint,
>>> 'items': t.options
>>> )
>>> from public.tbl3 t;
>>> end if;
>>> return query select * from tmp_f_cpdictionary;
>>> drop table tmp_f_cpdictionary;
>>> end
>>> $$;
>>> When calling the function for dict3, I get the error "could not open
>>> relation with OID 196327".
>>> As far as I know, this is due to a limitation of the data type jsonb
>>> equal to 8Kb.
>>> Is it possible to fix this error without changing the logic of the code?
>>>
>> This has nothing to do with any limit on the size of jsonb, and in fact
>> jsonb can have values vastly larger than 8Kb.
>> The real problem is the well-known plan caching + temp table OID
>> mismatch issue. Here's what's actually happening:
>>     1. The function does CREATE TEMP TABLE tmp_f_table OF public.t_dict
>> at the start and DROP TABLE tmp_f_table at the end.
>>     2. On the first call, PL/pgSQL parses/plans the INSERT INTO
>> tmp_f_table and SELECT * FROM tmp_f_table statements, caching plans that
>> reference tmp_f_table by its OID (e.g., OID 196327).
>>     3. On a subsequent call, the old temp table has been dropped and a
>> new one is created with a different OID. The cached plan still
>> references the old OID, leading to "could not open relation with OID 196327".
> I believe we have replanning logic that will handle that.  I think the
> actual issue is that the function tries to do this:
>
>>     return query select * from tmp_f_cpdictionary;
>>     drop table tmp_f_cpdictionary;
> The RETURN QUERY stuffed the data into a tuplestore ... but what it
> stuffed there was TOAST pointers referencing the temp table.  So the fetch
> failure occurs when we try to read out the tuplestore contents after the
> function returns.  While the OP didn't say this in so many words, I gather
> that he found it to work fine for small jsonb values but not for large
> ones, which would be explained if it only fails for toasted values.
>
> In principle we could fix this by forcing the values to be detoasted
> before we put them into the tuplestore.  I'm not sure that we want to
> accept that overhead though.
>
>             


Oh. Hah! Makes sense. I learn something every day. Sorry for the noise.
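
FWIW, for anyone else hitting this pattern, one plausible workaround (my
sketch, untested against the reported schema) is to let the temp table
live until end of transaction instead of dropping it inside the function,
so the TOAST pointers that RETURN QUERY stuffs into the tuplestore are
still readable when the caller consumes the result set:

    -- Hypothetical rework of the reported function's body: create the
    -- temp table ON COMMIT DROP and skip the explicit DROP, so the table
    -- (and its TOAST data) outlives the tuplestore read.
    create temp table if not exists tmp_f_table
        of public.t_dict
        on commit drop;
    truncate tmp_f_table;  -- reset contents between calls in one transaction

    -- ... the INSERT branches as before ...

    return query select * from tmp_f_table;
    -- no explicit DROP here; the table is removed automatically at commit

As a side benefit, CREATE ... IF NOT EXISTS plus TRUNCATE avoids
recreating the table with a new OID on every call within a transaction.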


cheers


andrew



--
Andrew Dunstan
EDB: https://www.enterprisedb.com



