Re: [PATCH] Optimize json_lex_string by batching character copying

From: Andrew Dunstan
Subject: Re: [PATCH] Optimize json_lex_string by batching character copying
Msg-id: aeaa30bf-29a8-5e81-75eb-106e77bdce81@dunslane.net
In response to: Re: [PATCH] Optimize json_lex_string by batching character copying (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On 2022-06-24 Fr 20:18, Andres Freund wrote:
> Hi,
>
> On 2022-06-24 08:47:09 +0000, Jelte Fennema wrote:
>> To test performance of this change I used COPY BINARY from a JSONB table
>> into another, containing fairly large JSONB values of ~15kB.
> This will have a lot of other costs included (DML is expensive). I'd suggest
> storing the json in a text column and casting it to json[b], with a filter
> on top of the json[b] result that cheaply filters it away. That should end up
> spending nearly all the time somewhere around json parsing.
>
> It's useful for things like this to include a way for others to use the same
> benchmark...
>
> I tried your patch with:
>
> DROP TABLE IF EXISTS json_as_text;
> CREATE TABLE json_as_text AS SELECT (SELECT json_agg(row_to_json(pd)) as t FROM pg_description pd) FROM generate_series(1,100);
> VACUUM FREEZE json_as_text;
>
> SELECT 1 FROM json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';


I've been doing some other work related to json parsing and John
referred me to this. But it's actually not the best test of pure json
parsing: casting to jsonb also converts the parsed input into the
binary jsonb representation, whereas casting to json merely validates
the text. Instead I've been using this query against the same table,
which should spend almost all its time in json parsing:


select 1 from json_as_text where t::json is null;
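

For anyone wanting to reproduce the comparison, a psql session along
these lines should do it (an untested sketch; \timing is psql's
elapsed-time toggle, and json_as_text is the table from Andres's recipe
above):

\timing on

-- jsonb: parsing plus conversion to the binary jsonb representation
select 1 from json_as_text where jsonb_typeof(t::jsonb) = 'not me';

-- json: close to pure parsing, since json keeps the validated text as-is
select 1 from json_as_text where t::json is null;

\timing off

For steadier numbers, each query can also go in its own file and be run
with pgbench -n -f <file> -T 10 against the same database.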


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com



