XX000: iso-8859-1 type of jsonb container. - Mailing list pgsql-bugs

From: Poot, Bas (B.J.)
Subject: XX000: iso-8859-1 type of jsonb container.
Msg-id: 534fca83789c4a378c7de379e9067d4f@politie.nl
Responses: Re: XX000: iso-8859-1 type of jsonb container. (Dmitry Dolgov <9erthalion6@gmail.com>)
List: pgsql-bugs

Hi there,

First of all I must say that we're not only very thankful for the work you're doing, but also very impressed with the result.
We have various products using various databases, and coming from an Oracle database I must say I'm overwhelmed by the performance and features Postgres has.

Unfortunately we're experiencing very weird behavior after upgrading to Postgres 13, and we may have found a bug.


This is going to be a weird bug report, but please keep reading.

Recently we migrated our Postgres test database. We had Postgres 12.5 on Windows, and moved to Postgres 13.2.1 on Ubuntu 20 (clustered).

99% of all queries worked just fine, but one particular query raised an exception: XX000: unknown type of jsonb container.

We did some testing with various versions of Postgres on various operating systems (Ubuntu and Windows Server 2016).
We found that we can reproduce the bug on both Ubuntu 20 and Windows 2016 with Postgres 13.2.1.

We didn't have the problem with Postgres 12.5 or 12.6 on either Windows or Ubuntu.
So it seems to be a 13.x bug (we didn't try versions of 13 lower than 13.2.1).


Of course the first thing I did was run the query in a different tool (we use DBeaver), and I got the same result.
I tried to reduce the query, stripping columns, rows, etc., to figure out what part of it triggered the error. I got as far as this:

 
 
select
    display_name
    , '' as rolename
    , filtur
    , jsonb_each_text(filtur) as x
    , to_jsonb(jsonb_each_text(filtur)) as frows
    , array(SELECT jsonb_object_keys(filtur)) as objectkeys
from tmp_bug2
order by logical_name;

where our 'tmp_bug2' table is a select into from a lot of joins on other tables. The DDL is now this:

CREATE TABLE public.tmp_bug2 (
logical_name varchar NULL,
display_name varchar NULL,
filtur jsonb NULL
);



But now the weird stuff started happening. (tl;dr: test case at the end of this mail)

I can make a lot of changes to this query that give me a different exception (or no exception at all). But when I wait a bit, or change some other random line, I get the error again.

Weird behavior 1:
When the tmp_bug2 table has these columns:
- logical_name
- display_name
- operation
- filtur
And when I remove 'display_name' from the query, it works: I get no more errors and see data.
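For reference, the reduced query with display_name removed looked roughly like this (reconstructed, not copied verbatim):

```sql
-- same reduced query as above, minus display_name: this version ran without errors
select
    '' as rolename
    , filtur
    , jsonb_each_text(filtur) as x
    , to_jsonb(jsonb_each_text(filtur)) as frows
    , array(SELECT jsonb_object_keys(filtur)) as objectkeys
from tmp_bug2
order by logical_name;
```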
 

Then I figured that I don't use the column 'operation' at all, so I removed it from my tmp_bug2 table (dropped it, and redid the SELECT INTO).
The table now contains these columns:
 
- logical_name
- display_name
- filtur

 
When I run the query it still fails. But now, when I remove the 'display_name' field from my select, it keeps failing. So removing the 'operation' column from my table somehow changes the behavior of my query,
even though the query never references that column at all.



Weird behavior 2:
(based on the original query)
When I remove the line 
'' as rolename,

I get a perfectly fine result. No errors, it all works.
 
So removing a totally useless column makes my query work.



Weird behavior 3:
When I remove the 'order by' clause, my query works again like a charm. No errors whatsoever.
But that's weird, because my SELECT INTO query also has an order by.
Therefore the table 'tmp_bug2' is already sorted by logical_name.
I can imagine some operations only being done on the rows you return to the client, but the data is ordered the same way, so I should get the same rows back,
with or without that order by clause.



Weird behavior 4:
In our original query, we have the last 3 columns in the select that do stuff with json(b) functions. 
,jsonb_each_text(filtur) as x
,to_jsonb(jsonb_each_text(filtur)) as frows
,array(SELECT jsonb_object_keys(filtur)) as objectkeys

Obviously this error has something to do with those statements.
So I tried to comment out the first line: still an error.
I tried to comment out the second one (instead of the first): still an error.
Then I commented out both of them. Problem solved, query works!
Then I enabled them all again and commented out the 3rd line. Problem solved again.

So:
when the 3rd line is active together with the first or second line, we get an error
when only the 3rd line is active (the first two commented out), it works
when we disable the 3rd line, it works (even with the first and second line active)
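To make the combinations concrete, this is roughly the variant that runs without errors (third jsonb column commented out):

```sql
-- jsonb_each_text columns active, jsonb_object_keys column disabled: no error
select
    display_name
    , '' as rolename
    , filtur
    , jsonb_each_text(filtur) as x
    , to_jsonb(jsonb_each_text(filtur)) as frows
    -- , array(SELECT jsonb_object_keys(filtur)) as objectkeys
from tmp_bug2
order by logical_name;
```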



Weird behavior 5:
Now I tried to figure out which row triggers the error.

First I created a table that contained all the information in my original subquery (not in this email) and called it tmp_bug.
Then I did this:
select *
into tmp_bug2
from (
    select logical_name
        , display_name
        , operation
        , filtur
    from tmp_bug
    order by logical_name
) t1
limit 300000;

Note that I order by logical_name and limit my rows, so I only get 300k of my 390k rows.
So I only lose 90k rows.
When I run my query (top of this email) it works like a charm. Therefore I concluded the error must be in the 90k rows I dropped.

So I dropped my tmp_bug2 table and redid the above query, this time altering the order by clause to:
order by logical_name desc
This way I dropped the first 90k rows instead of the last 90k.
I reran my query, and (wtf) it again worked like a charm.
So somehow it has nothing to do with the specific rows I select.
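Reconstructed, that second attempt looked roughly like this:

```sql
-- same SELECT INTO as before, but sorted descending, so the *first*
-- 90k rows (by logical_name) are the ones dropped this time
select *
into tmp_bug2
from (
    select logical_name
        , display_name
        , operation
        , filtur
    from tmp_bug
    order by logical_name desc
) t1
limit 300000;
```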


Weird behavior 6:
In an attempt to create a test case with the same number of rows, I tried updating them all to the same value.
So I updated the columns 'display_name' and 'logical_name' to 'thing', and the case still stands (errors as above).
I then simplified the SELECT INTO statement that generates the table, so it now always inserts 'thing' into those 2 columns.
Results as expected: still errors. (But closer to a test case for you guys.)

Then I updated the column 'filtur' like so:

update tmp_bug2 
set filtur  = '{"unit": "Our special Unit", "some.place": "placename", "place_of_birth": "Over the rainbow", "How_secret_is_it": "Level 1", "Does.it.shoot": "YES!"}'::jsonb

And again the case still stands: errors as above.
So I simplified my SELECT INTO statement just as with the other two columns. It was now sort of like this:

select *
into tmp_bug2
from (
    select logical_name
        , 'thing' as display_name
        , 'thing' as operation
        , '{"unit": "Our special Unit", "some.place": "placename", "place_of_birth": "Over the rainbow", "How_secret_is_it": "Level 1", "Does.it.shoot": "YES!"}'::jsonb as filtur
    from ( .. complex subquery here.. )
    order by logical_name
) t1


And suddenly all queries ran fine. No more errors.
I removed my simplification, and went back to

select *
into tmp_bug2
from (
    select logical_name
        , 'thing' as display_name
        , 'thing' as operation
        , filtur
    from ( .. complex subquery here.. )
    order by logical_name
) t1

and the errors were back.
Then I updated the table using the above update statement, and again: the errors remained.

I had the same problem updating the logical_name column: when I used the update (after creating the table) I still had the error, but when I created the table with the static value ('thing' in my case) the errors were suddenly gone.

So creating a table with a value has a different result than creating the table and then updating it to that value.
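Condensed, the two paths look like this (the table names tmp_a and tmp_b are just illustrative):

```sql
-- path A: bake the static value in at creation time -> the query runs fine
select *
into tmp_a
from (
    select 'thing' as logical_name, display_name, operation, filtur
    from tmp_bug
) t1;

-- path B: create the table from the real data, then overwrite -> the error remains
select * into tmp_b from tmp_bug;
update tmp_b set logical_name = 'thing';
```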


Weird behavior 7:

Finally! I have a test case for you guys. This is my query to generate the data.
select *
into tmp_bug2
from (
    select 'thing' as logical_name
        , 'thing' as display_name
        , 'thing' as operation
        , '{"unit": "Our special Unit", "some.place": "placename", "place_of_birth": "Over the rainbow", "How_secret_is_it": "Level 1", "Does.it.shoot": "YES!"}'::jsonb as filtur
    from generate_series(1, 302443)
) t1;

And this is the query that generates the error.

select
    display_name
    , '' as x
    , filtur
    , jsonb_each_text(filtur) as x
    , to_jsonb(jsonb_each_text(filtur)) as frows
    , array(SELECT jsonb_object_keys(filtur)) as objectkeys
from tmp_bug2
order by logical_name;

Notice that when you generate not 302443 but 302442 rows, it works perfectly fine.




I'm lost here.
And I sure as hell would like a hand.

Bas

------------------------- Disclaimer ----------------------------
The information sent in this E-mail message (including any attachments) is exclusively intended for the individual(s) to whom it is addressed and for the individual(s) who has/have had permission from the recipient(s) to read this message.
Access by others is not permitted.
The information in this E-mail message (including any attachments) may be of a confidential nature and may form part of the duty of confidentiality and/or the right of non-disclosure.
If you have received this E-mail message in error, please notify the sender without delay and delete the E-mail message (including any attachments).
In conformity with the security policy of the Police, E-mails from and to the Police are checked for viruses, spam and phishing and this E-mail must meet the standards of the government-imposed E-mail security as set by the Standardization Forum.
Any E-mail failing to meet said security policy may be blocked as a result of which it will not reach the intended recipient. The recipient concerned will not be notified.
---------------------------------------------------------------------
