Re: JSON for PG 9.2 - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: JSON for PG 9.2
Date
Msg-id 1334593180.31618.2673.camel@hvost
In response to Re: JSON for PG 9.2  (Andrew Dunstan <andrew@dunslane.net>)
Responses Re: JSON for PG 9.2  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-hackers
On Mon, 2012-04-16 at 10:10 -0400, Andrew Dunstan wrote:
> 
> On 04/16/2012 09:34 AM, Hannu Krosing wrote:
> >> based on Abhijit's feeling and some discussion offline, the consensus
> >> seems to be to remove query_to_json.
> > The only comment I have here is that query_to_json could have been
> > replaced with json_agg, so that you don't need to do double-buffering
> > for the results of the array(<yourquery>) call in
> >
> > SELECT array_to_json(array(<yourquery>));
> >
> > Or is there some other way to avoid it, except wrapping the row_to_json()
> > calls in one's own aggregate function which adds the enclosing brackets
> > and comma separators (like this: '[' <row1> [, <rowN>] ']')?
> >
> >
> 
> The way I usually write this is:
> 
>      select array_to_json(array_agg(q))
>      from (<yourquery>) q;
> 
> It's a pity you didn't make this comment back in January when we were 
> talking about this. I think it's too late now in this release cycle to 
> be talking about adding the aggregate function.

My comment is not meant to propose changing anything in 9.2.

I think what we have here is absolutely fantastic :)
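
To illustrate, the hand-rolled aggregate I allude to above can already be
spelled on 9.2; a rough sketch (json_array_agg and the helper function
names are mine, not built-ins):

    CREATE FUNCTION json_array_append(state text, elem json)
    RETURNS text LANGUAGE sql IMMUTABLE AS $$
        -- first row opens the bracket, later rows get a comma separator
        SELECT CASE WHEN $1 IS NULL THEN '[' || $2::text
                    ELSE $1 || ',' || $2::text END
    $$;

    CREATE FUNCTION json_array_close(state text)
    RETURNS json LANGUAGE sql IMMUTABLE AS $$
        -- empty input yields '[]'
        SELECT (coalesce($1, '[') || ']')::json
    $$;

    CREATE AGGREGATE json_array_agg(json) (
        SFUNC = json_array_append,
        STYPE = text,
        FINALFUNC = json_array_close
    );

    -- builds the json array incrementally, row by row, instead of
    -- materializing an array of rows first:
    SELECT json_array_agg(row_to_json(q)) FROM (SELECT * FROM test) q;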

If we do something in 9.3, what I would like is some way to express
multiple queries, basically a variant of

query_to_json(query text[])

where the queries would be evaluated in order and all the results then
aggregated into one json object.
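
Until then, something close can be put together from what 9.2 already has;
a sketch (test and test2 are just my example tables from below, and each
subquery becomes one key of the enclosing json object):

    SELECT row_to_json(q)
    FROM (SELECT
            (SELECT array_to_json(array_agg(t)) FROM test  t) AS test,
            (SELECT array_to_json(array_agg(t)) FROM test2 t) AS test2
         ) q;

The downside is that the whole result is still materialized on the server
before anything is sent to the client.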

But "aggregation on client" as suggested by Merlin may be a better way
to do it for larger result(set)s. 

Especially as it could enable streaming of the resultsets without having
to first buffer everything on the server.
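
For example, the server could emit one self-contained json document per
row and leave the framing ('[', separating commas, ']') to the client:

    -- one json value per row; the client appends them to its output as
    -- they arrive, so nothing is accumulated server-side
    SELECT row_to_json(t) FROM test t;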


If we can add something, then perhaps a "deeper" pretty_print feature.

samples:

hannu=# \d test
                                 Table "public.test"
 Column |            Type             |                     Modifiers
--------+-----------------------------+---------------------------------------------------
 id     | integer                     | not null default nextval('test_id_seq'::regclass)
 data   | text                        |
 tstamp | timestamp without time zone | default now()
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)

hannu=# select array_to_json(array(select test from test),true);
-[ RECORD 1 ]-+----------------------------------------------------------------------------
array_to_json | [{"id":1,"data":"0.262814193032682","tstamp":"2012-04-05 13:21:03.235204"},
              |  {"id":2,"data":"0.157406373415142","tstamp":"2012-04-05 13:21:05.2033"}]

This is OK


hannu=# \d test2
                                 Table "public.test2"
 Column |            Type             |                      Modifiers
--------+-----------------------------+----------------------------------------------------
 id     | integer                     | not null default nextval('test2_id_seq'::regclass)
 data2  | test                        |
 tstamp | timestamp without time zone | default now()
Indexes:
    "test2_pkey" PRIMARY KEY, btree (id)

hannu=# select array_to_json(array(select test2 from test2),true);
-[ RECORD 1 ]-+-----------------------------------------------------------------------------------------------------------------------------------
array_to_json | [{"id":1,"data2":{"id":1,"data":"0.262814193032682","tstamp":"2012-04-05 13:21:03.235204"},"tstamp":"2012-04-05 13:25:03.644497"},
              |  {"id":2,"data2":{"id":2,"data":"0.157406373415142","tstamp":"2012-04-05 13:21:05.2033"},"tstamp":"2012-04-05 13:25:03.644497"}]

This is "kind of OK"

hannu=# \d test3
                                 Table "public.test3"
 Column |            Type             |                      Modifiers
--------+-----------------------------+----------------------------------------------------
 id     | integer                     | not null default nextval('test3_id_seq'::regclass)
 data3  | test2[]                     |
 tstamp | timestamp without time zone | default now()
Indexes:
    "test3_pkey" PRIMARY KEY, btree (id)


hannu=# select array_to_json(array(select test3 from test3),true);
-[ RECORD 1 ]-+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
array_to_json | [{"id":1,"data3":[{"id":1,"data2":{"id":1,"data":"0.262814193032682","tstamp":"2012-04-05 13:21:03.235204"},"tstamp":"2012-04-05 13:25:03.644497"},{"id":2,"data2":{"id":2,"data":"0.157406373415142","tstamp":"2012-04-05 13:21:05.2033"},"tstamp":"2012-04-05 13:25:03.644497"}],"tstamp":"2012-04-16 14:40:15.795947"}]

but this would be nicer if printed like Python's pprint:

>>> pprint.pprint(d)
[{'data3': [{'data2': {'data': '0.262814193032682',
                       'id': 1,
                       'tstamp': '2012-04-05 13:21:03.235204'},
             'id': 1,
             'tstamp': '2012-04-05 13:25:03.644497'},
            {'data2': {'data': '0.157406373415142',
                       'id': 2,
                       'tstamp': '2012-04-05 13:21:05.2033'},
             'id': 2,
             'tstamp': '2012-04-05 13:25:03.644497'}],
  'id': 1,
  'tstamp': '2012-04-16 14:40:15.795947'}]

:D

Again, I don't expect it anytime soon.

What we will get in 9.2 is wonderful already.

Cheers,
Hannu
-- 
-------
Hannu Krosing
PostgreSQL Unlimited Scalability and Performance Consultant
2ndQuadrant Nordic
PG Admin Book: http://www.2ndQuadrant.com/books/


