> On Tue, Apr 13, 2021 at 11:38:04PM -0700, Mitar wrote:
> Hi!
>
> I have a project where, among other data, we want to store static JSON
> objects which can get pretty large (10-100 KB). I was trying to
> evaluate how it would work if we simply store them as an additional
> column in a PostgreSQL database, so I made a benchmark [1]. The
> results surprised me a bit and I am writing here because I would like
> to understand them. Namely, it looks like writing into a jsonb typed
> column is 30% faster than writing into a json typed column. Why is
> that? Doesn't jsonb require parsing and conversion of the JSON? That
> should be slower than just storing the blob as-is?
>
> [1] https://gitlab.com/mitar/benchmark-pg-json
Interesting. I've tried to reproduce the schema and queries from the repository
above (with different generated data, though, and without the app itself; a
rough sketch of what I used is at the end of this mail) and indeed got visibly
slower execution for the json column than for jsonb (not by 30%, but
noticeably). There are a couple of important points:
* Parsing happens in both cases; for json it is done purely to validate the
  input (see the small demo after this list).
* Compression of the json data actually dominates the runtime for large json
  objects, making other factors less visible and reducing the difference in
  size (jsonb should also be a bit larger, which is why I would have expected
  it to be slower). A sketch of how to take compression out of the picture is
  at the end of this mail.
* At the same time, perf shows that json compression spends a bit more time in
  pglz_find_match (mostly around accessing and checking history entries), which
  is a compression optimization via a history table. So, probably due to
  differences in the data layout, this optimization works slightly worse for
  plain json than for jsonb?