Re: Duplicate JSON Object Keys - Mailing list pgsql-hackers

From: Andrew Dunstan
Subject: Re: Duplicate JSON Object Keys
Msg-id: 513A6038.2030008@dunslane.net
In response to: Re: Duplicate JSON Object Keys (Andrew Dunstan <andrew@dunslane.net>)
List: pgsql-hackers
On 03/08/2013 04:42 PM, Andrew Dunstan wrote:
>
>>
>> So my order of preference for the options would be:
>>
>> 1. Have the JSON type collapse objects so the last instance of a key 
>> wins and is actually stored
>>
>> 2. Throw an error when a JSON type has duplicate keys
>>
>> 3. Have the accessors find the last instance of a key and return that 
>> value
>>
>> 4. Let things remain as they are now
>>
>> On second thought, I don't like 4 at all. It means that the JSON type 
>> thinks a value is valid while the accessor does not. They contradict 
>> one another.
>>
>>
>
>
> You can forget 1. We are not going to have the parser collapse 
> anything. Either the JSON it gets is valid or it's not. But the parser 
> isn't going to try to MAKE it valid.


Actually, now that I think more about it, 3 is the best answer. Here's why: 
even the JSON generators can produce JSON with non-unique field names:

   andrew=# select row_to_json(q) from (select x as a, y as a from
   generate_series(1,2) x, generate_series(3,4) y) q;
     row_to_json
   ---------------
    {"a":1,"a":3}
    {"a":1,"a":4}
    {"a":2,"a":3}
    {"a":2,"a":4}
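As an aside (not from the original thread): the "last instance of a key wins" semantics of option 3 match what many widely used JSON parsers already do when handed such output. A minimal illustration in Python, whose `json` module silently keeps the last value for a repeated key:

```python
import json

# Feed the parser an object with a duplicate key, like the
# row_to_json output above. Python's json module does not error;
# the later "a":3 simply overwrites the earlier "a":1.
doc = '{"a":1,"a":3}'
parsed = json.loads(doc)
print(parsed)  # {'a': 3}
```

This is the same behavior an accessor implementing option 3 would expose: the value of the last occurrence of the key.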
 


So I think we have no option but to say, in terms of RFC 2119, that we 
have carefully considered and decided not to comply with the RFC's 
recommendation (and we should note that in the docs).

cheers

andrew




