Re: Duplicate JSON Object Keys - Mailing list pgsql-hackers

From Gavin Flower
Subject Re: Duplicate JSON Object Keys
Date
Msg-id 513A55C5.40706@archidevsys.co.nz
In response to Re: Duplicate JSON Object Keys  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List pgsql-hackers
Well, I would much prefer to find out sooner rather than later that there
is a problem, so I would much rather know I've created a duplicate as
soon as the system can detect it.  In general, PostgreSQL appears much
better at this than MySQL.


On 09/03/13 10:01, Alvaro Herrera wrote:
> Hannu Krosing escribió:
>> On 03/08/2013 09:39 PM, Robert Haas wrote:
>>> On Thu, Mar 7, 2013 at 2:48 PM, David E. Wheeler <david@justatheory.com> wrote:
>>>> In the spirit of being liberal about what we accept but strict about
>>>> what we store, it seems to me that JSON object key uniqueness should
>>>> be enforced either by throwing an error on duplicate keys, or by
>>>> flattening so that the latest key wins (as happens in JavaScript). I
>>>> realize that tracking keys will slow parsing down, and potentially
>>>> make it more memory-intensive, but such is the price for correctness.
>>> I'm with Andrew.  That's a rathole I emphatically don't want to go
>>> down.  I wrote this code originally, and I had the thought clearly in
>>> mind that I wanted to accept JSON that was syntactically well-formed,
>>> not JSON that met certain semantic constraints.
>> If it does not meet these "semantic" constraints, then it is not
>> really JSON - it is merely JSON-like.
>>
>> This sounds very much like MySQL's decision to support the timestamp
>> "0000-00-00 00:00" - syntactically correct, but semantically wrong.
> Is it wrong?  The standard cited says SHOULD, not MUST.
>
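For illustration only (this is not the PostgreSQL parser under discussion), the two policies David Wheeler describes — rejecting duplicate keys versus letting the latest key win, as JavaScript does — can be sketched with Python's standard `json` module, whose `object_pairs_hook` sees every key/value pair before the object is built:

```python
import json

# Default behavior in Python's json module (and in JavaScript):
# the latest duplicate key silently wins.
doc = '{"a": 1, "a": 2}'
assert json.loads(doc) == {"a": 2}

# The stricter policy -- raise an error on duplicate keys -- can be
# implemented with object_pairs_hook, which receives all pairs in order.
def reject_duplicates(pairs):
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError("duplicate JSON object key: %r" % key)
        obj[key] = value
    return obj

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as exc:
    print(exc)  # duplicate JSON object key: 'a'
```

As the thread notes, the tradeoff is that either policy forces the parser to track keys per object, which costs time and memory that a purely syntactic check avoids.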



