Re: json accessors - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: json accessors
Date
Msg-id 50B76A10.2080603@dunslane.net
In response to Re: json accessors  (Hannu Krosing <hannu@krosing.net>)
Responses Re: json accessors
List pgsql-hackers
On 11/28/2012 08:16 PM, Hannu Krosing wrote:
> On 11/29/2012 02:07 AM, Hannu Krosing wrote:
>> On 11/29/2012 01:10 AM, Merlin Moncure wrote:
>>> On Wed, Nov 28, 2012 at 2:44 PM, Andrew Dunstan 
>>> <andrew@dunslane.net> wrote:
>> ...
>>>
>>>>> *) have you considered something like
>>>>> anyelement from_json(anyelement, json)
>>>>> or
>>>>> select <json>::some_type;  (this may or may not be possible given our
>>>>> casting mechanics; i don't know).
>>>> I have no idea what the semantics of this would be.
>>> Yeah, there's a lot of nuance there.
>> One way to tackle it would be to give the argument element as a template,
>> and the result would be the same template filled in from the json.
>>
>> create table tab1(id serial primary key, ts timestamp default now(), 
>> data text);
>>
>> insert into tab1 select from_json(row(null,null,null)::tab1, 
>> '{"data":"the data"}');
>> insert into tab1 select from_json(row(null,null,null)::tab1, 
>> '{"id":-1, "ts":null, "data":""}');
>> insert into tab1 select from_json(t.*,'{"data":"more data"}') from 
>> tab1 t where id = -1;
>>
>> hannu=# select row_to_json(t.*) from tab1 t;
>>                           row_to_json
>> ---------------------------------------------------------------
>>  {"id":1,"ts":"2012-11-29 02:01:48.379172","data":"the data"}
>>  {"id":-1,"ts":null, "data":""}
>>  {"id":2,"ts":"2012-11-29 02:02:34.600164","data":"more data"}
>> (3 rows)
>>
>> if extracting the defaults from the table definition proves too tricky for
>> a first iteration, then just set the missing fields to NULL or, even
>> better, carry over the values from the template;
> You could even do a template-less row_from_json which returns a record
> with all fields converted to the JSON-encodable types, and hope that the
> subsequent conversions will be done by PostgreSQL as needed.
>
> insert into tab1 select row_from_json('{"id":100, "ts":"2012-12-21", 
> "data":"End of Everything"}');
>
> insert into tab1
> select * from row_from_json(
> '[{"id":101, "ts":"2012-12-22", "data":"1st day after End of Everything"}
>   {"id":102, "ts":"2012-12-22", "data":"2nd day after End of Everything"}
> ]');
>


The real problem here is that for any irregularly shaped json such a
conversion is likely to be a bust, and it could only work sanely for nested
json at all if the target type had corresponding array and composite fields.
hstore's populate_record works fairly well precisely because hstore is a
flat structure, unlike json.
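
For the flat case, something like this already works today (a minimal
sketch, assuming the hstore extension is installed and the tab1 table from
the example above):

-- flat key/value pairs map directly onto the record's columns;
-- keys that are absent simply leave the corresponding fields NULL
select * from populate_record(null::tab1,
                              'id => 3, data => "flat data"'::hstore);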


In any case, I think this sort of suggestion highlights the possible 
benefits of what I suggested upthread, namely to expose an API that will 
allow easy construction of json transformation functions as extensions.
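
For instance, an extension built on such an API could offer a function
along these lines (the name and signature here are purely hypothetical,
just to illustrate the shape of the thing):

-- hypothetical extension function, invented for illustration only:
--   json_populate_record(base anyelement, from_json json) returns anyelement
insert into tab1
select * from json_populate_record(null::tab1,
                                   '{"id":3, "data":"built via the API"}');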


>>
>>
>> PS: good work so far :)
>>
>> Hannu
>>
>>


Thanks.

cheers

andrew



