Re: [HACKERS] [PATCH] Generic type subscripting - Mailing list pgsql-hackers

From Dmitry Dolgov
Subject Re: [HACKERS] [PATCH] Generic type subscripting
Date
Msg-id CA+q6zcXaT6+SjEXigLmWH-Q96oGSA22u8tayD=0+e_8VstJedQ@mail.gmail.com
Whole thread Raw
In response to Re: [HACKERS] [PATCH] Generic type subscripting  (Pavel Stehule <pavel.stehule@gmail.com>)
Responses Re: [HACKERS] [PATCH] Generic type subscripting  (Pavel Stehule <pavel.stehule@gmail.com>)
List pgsql-hackers
> On Wed, 10 Oct 2018 at 14:26, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>
> I am playing with this feature a little bit

Thanks a lot!

> I have one idea - could it be possible to use an integer subscript for record fields? It could help with iteration over a record.
>
> example:
>
> select ('{"a":{"a":[10,20]}}'::jsonb)[0]; --> NULL, but it could be more practical if it returned the same as select ('{"a":{"a":[10,"20"]}}'::jsonb)['a'];

Sounds interesting, but I'm not sure how consistent it would be with the rest
of jsonb functionality, and someone may want to get an error in this case. At
the same time I believe that this can be achieved quite nicely with json_query
or json_table from SQL/JSON patch (see examples here [1]). What do you think
about this approach?
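For the iteration use case specifically, jsonb_each already exposes an object's key/value pairs as rows, which covers much of what integer subscripting of record fields would give; a minimal sketch:

    =# select key, value
       from jsonb_each('{"a": {"a": [10, 20]}}'::jsonb);
     key |      value
    -----+-----------------
     a   | {"a": [10, 20]}
    (1 row)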

> I don't quite like ignoring a bad subscript in an update

Can you show an example of such ignoring of a bad subscript in an update?

> postgres=# insert into test(v) values( '[]');
> INSERT 0 1
> postgres=# update test set v[1000] = 'a';
> UPDATE 1
> postgres=# update test set v[1000] = 'a';
> UPDATE 1
> postgres=# update test set v[1000] = 'a';
> UPDATE 1
> postgres=# select * from test;
> ┌────┬─────────────────┐
> │ id │        v        │
> ╞════╪═════════════════╡
> │    │ ["a", "a", "a"] │
> └────┴─────────────────┘
> (1 row)
>
> It should raise an exception in this case. The current behaviour allows simple appending, but can be a source of errors. For this case we can introduce some special symbol - something like -0 :)

Yeah, it may look strange, but there is a reason behind it. I tried to keep the
behaviour of this feature consistent with jsonb_set function (and in fact
they're sharing the same functionality). And for jsonb_set it's documented:

    If the item (of a path, in our case an index) is out of the range
    -array_length .. array_length -1, and create_missing is true, the new value
    is added at the beginning of the array if the item is negative, and at the
    end of the array if it is positive.

So, the index 1000 is way above the end of the array v, and every new item is
appended at the end.

Of course no one said that they should behave similarly, but I believe it's
quite nice to have consistency here. Any other opinions?
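For comparison, plain jsonb_set shows the same append-at-the-end behaviour for an out-of-range positive index on an empty array:

    =# select jsonb_set('[]'::jsonb, '{1000}', '"a"');
     jsonb_set
    -----------
     ["a"]
    (1 row)

With create_missing left at its default of true, the index 1000 is clamped to the end of the array rather than raising an error.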

> It is maybe strange, but I prefer less magic syntax like
>
> update test set v['a']['a'] =  v['a']['a'] || '1000';
>
> more readable than
>
> update test set v['a']['a'][1000000] = 1000;

Yep, with this patch it's possible to use both ways:

    =# table test;
    v
    -------------------------
     {"a": {"a": [1, 2, 3]}}
    (1 row)

    =# update test set v['a']['a'] = v['a']['a'] || '1000';
    UPDATE 1

    =# table test;
       v
    -------------------------------
     {"a": {"a": [1, 2, 3, 1000]}}
    (1 row)
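And, per the same out-of-range rule discussed above, the subscripting spelling with a deliberately large index should append at the end as well (a sketch; the exact quoting of the assigned literal may depend on the patch version):

    =# update test set v['a']['a'][1000000] = '1000';
    UPDATE 1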

> My first impression is very good - updating jsonb and xml documents can be very friendly.

Thanks!

1: https://www.postgresql.org/message-id/flat/732208d3-56c3-25a4-8f08-3be1d54ad51b@postgrespro.ru

