On Mon, May 14, 2018 at 12:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "David G. Johnston" <david.g.johnston@gmail.com> writes:
> > On Sunday, May 13, 2018, <007reader@gmail.com> wrote:
> >> My interest is in fast access to data. On a relational table, the query
> >> time is about the same whether I have one or ten fields in a select
> >> statement. I’d love to see the same behavior when getting multiple keys
> >> from a JSON document.
> > I would hazard to say this is a solid desire and one shared by many. It
> > is also seemingly something that today has little or no precedent in
> > PostgreSQL. The lack is not a bug.
> Yeah. This has been discussed from time to time before. The problem
> is how to reconcile it with PG's extensible architecture, in which these
> various -> and ->> operators are independent functions that are black
> boxes so far as the core code is concerned. It's very unclear how to
> set up an arrangement that would let them share processing.
> For the moment, you can work around it to some extent by writing out
> the shared processing manually, along the lines of
> select (x).this, (x).that, (x).the_other from
> (select jsonb_populate_record(null::myrowtype, jsonb_column) as x
> from ...) ss
> where myrowtype defines the fields you want to extract.
This is really the answer. Typically you'd use CROSS JOIN LATERAL to fold
the jsonb_populate_record portion into a broader query and work around some
restrictions. TBH this approach is reasonably fast, expressive, and
flexible; it's not broken. My only gripe (such as it is) is the verbosity
of the JSON API. Question: is there any technical reason why this can't be
worked into a simple cast?
jsonb_populate_record(null::myrowtype, jsonb_column) ->
jsonb_column::myrowtype
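
For what it's worth, something close to that cast can be wired up today with
CREATE CAST over a small wrapper function. A minimal sketch, assuming a
hypothetical composite type myrowtype and a table docs(jsonb_column jsonb)
(the names are illustrative, not from this thread):

```sql
-- Hypothetical composite type naming the fields to extract.
CREATE TYPE myrowtype AS (this int, that text, the_other numeric);

-- Wrapper so the cast has a one-argument function to call.
CREATE FUNCTION jsonb_to_myrowtype(j jsonb) RETURNS myrowtype
    LANGUAGE sql IMMUTABLE AS
$$ SELECT jsonb_populate_record(null::myrowtype, j) $$;

-- Register the cast (explicit only: queries must still write ::myrowtype).
CREATE CAST (jsonb AS myrowtype)
    WITH FUNCTION jsonb_to_myrowtype(jsonb);

-- Now the terse spelling works, including in a LATERAL join:
SELECT x.this, x.that, x.the_other
FROM docs
CROSS JOIN LATERAL (SELECT (docs.jsonb_column::myrowtype).*) AS x;
```

This only shows that the spelling is reachable per-type; it doesn't address
the sharing of processing across -> / ->> operators that Tom describes, which
is the part with no obvious mechanism in the extensible-operator design.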
merlin