Thread: [PATCH] Generalized JSON output functions
Hi, Hackers!

Attached is a patch against master to generalize the JSON-producing functions in utils/adt/json.c and to provide a set of callbacks that can be overridden, in the same way as is already possible for *parsing* JSON.
The motivation behind this is to be able to produce specially-crafted JSON in a logical replication output plugin, such that numeric (and bigint) values are quoted. This requirement, in turn, arises from the fact that the ECMAScript specification, whose implementations are the natural consumers of this JSON data, allows significant digits to be silently dropped when a string is converted to a Number object (an ECMAScript Number is an IEEE 754 double, so integers above 2^53 cannot be represented exactly).
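(A minimal standalone illustration of the loss, in C rather than JavaScript, since an ECMAScript Number is an IEEE 754 double; the example value is arbitrary, but any integer above 2^53 behaves this way:)

    /*
     * Illustration of the precision loss: an ECMAScript Number is an
     * IEEE 754 double, so integers above 2^53 cannot round-trip.
     */
    #include <stdio.h>

    int
    main(void)
    {
        long long  big = 9007199254740993LL;    /* 2^53 + 1 */
        double     asnum = (double) big;        /* what an ECMAScript
                                                 * JSON.parse() would yield */

        /* prints: 9007199254740993 became 9007199254740992 */
        printf("%lld became %.0f\n", big, asnum);
        return 0;
    }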
I believe this is a well-known problem, and I'm aware of a number of tricks that might be used to avoid it, but none of them seems optimal from my standpoint.

I can also imagine this being used to convert date/time values to string differently, or to add indentation depending on the depth in the object hierarchy, etc.
What this patch does, apart from providing callbacks, is abstract most of the code for producing the correct JSON structure, which was previously scattered and repeated across a number of functions with slight differences. In the current code there are five styles of producing a JSON object string, differing in whitespace only:
a) no spaces
select to_json(row(1,2));
to_json
-----------------
{"f1":1,"f2":2}
b) some spaces (hstore_to_json)
select hstore(row(1,2))::json;
hstore
------------------------
{"f1": "1", "f2": "2"}
c) spaces around colon
select json_build_object('f1',1,'f2',2);
json_build_object
----------------------
{"f1" : 1, "f2" : 2}
d) spaces around colon *and* curly braces
select json_object_agg(x,x) from unnest('{1,2}'::int[]) x;
json_object_agg
----------------------
{ "1" : 1, "2" : 2 }
e) line feeds (row_to_json_pretty)
select row_to_json(row(1,2), true) as row;
row
----------
{"f1":1,+
"f2":2}
Personally, I think we should stick to (b); however, that would break a lot of test cases that already depend on (a). I've tried hard to minimize the amount of changes in expected/json.out, but it quickly becomes cumbersome trying to support all of the above formats. So I've altered (c) and (d) to look like (b); naturally, only whitespace was affected.
There's one corner case I don't see a sensible way to support:
select json_agg(x) from generate_series(1,5) x;
json_agg
-----------------
[1, 2, 3, 4, 5]
With the patch applied it puts line feeds between the array elements instead of spaces.
What also bothers me is that I've hard-coded the output function OIDs for cstring_out and textout, on the assumption that they never change, but I would like to know that for sure.
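(If hard-coding turns out to be unacceptable, a run-time lookup would be an alternative; a minimal sketch, assuming it's the type OID rather than the function OID that is at hand:)

    /*
     * Sketch: resolve a type's output function at run time instead of
     * hard-coding its OID; getTypeOutputInfo() does the syscache lookup.
     */
    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "utils/lsyscache.h"

    static Oid
    lookup_output_function(Oid typoid)
    {
        Oid     typoutput;
        bool    typisvarlena;

        getTypeOutputInfo(typoid, &typoutput, &typisvarlena);
        return typoutput;
    }

    /* e.g. lookup_output_function(TEXTOID) rather than a hard-coded OID */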
Feedback is very much welcome!
--
Alex
PS: using a different email address this time, same Alex Shulgin. ;-)
On Wed, May 20, 2015 at 8:16 AM, Shulgin, Oleksandr
<oleksandr.shulgin@zalando.de> wrote:
> Personally, I think we should stick to (b); however, that would break a
> lot of test cases that already depend on (a). I've tried hard to minimize
> the amount of changes in expected/json.out, but it quickly becomes
> cumbersome trying to support all of the above formats. So I've altered
> (c) and (d) to look like (b); naturally, only whitespace was affected.

Disagree. IMNSHO, the default should be (a), as it's the most compact format and therefore the fastest. Whitespace injection should be reserved for prettification functions.

merlin
On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote:
> I can also imagine this being used to convert date/time values to string
> differently, or to add indentation depending on the depth in the object
> hierarchy, etc.

I'm not necessarily opposed to this, but it sure seems like a lot of changes, and moderately invasive ones, to support something that could be done, at the cost of reparsing, with a simple loadable extension that I could create in a few hours of programming. The parser API was created precisely to make this sort of transformation close to trivial. Other fairly obvious transformations include translating to XML or YAML, and a less obvious one could be something very specialized, like translating certain fields.

Anyway, for this purpose I could imagine a function like:

    json_format(j json (or text),
                indent_spaces smallint default 0,
                space_after_colon boolean default false,
                space_after_comma boolean default false,
                quote_numerics boolean default false)
    returns json

cheers

andrew
On Thu, May 21, 2015 at 2:23 AM, Shulgin, Oleksandr
<oleksandr.shulgin@zalando.de> wrote:
> On Wed, May 20, 2015 at 4:06 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>>
>> On Wed, May 20, 2015 at 8:16 AM, Shulgin, Oleksandr
>> <oleksandr.shulgin@zalando.de> wrote:
>> >
>> > a) no spaces
>> >
>> > select to_json(row(1,2));
>> > to_json
>> > -----------------
>> > {"f1":1,"f2":2}
>> >
>> > b) some spaces (hstore_to_json)
>> >
>> > select hstore(row(1,2))::json;
>> > hstore
>> > ------------------------
>> > {"f1": "1", "f2": "2"}
>> >
>> > c) spaces around colon
>> >
>> > select json_build_object('f1',1,'f2',2);
>> > json_build_object
>> > ----------------------
>> > {"f1" : 1, "f2" : 2}
>> >
>> > d) spaces around colon *and* curly braces
>> >
>> > select json_object_agg(x,x) from unnest('{1,2}'::int[]) x;
>> > json_object_agg
>> > ----------------------
>> > { "1" : 1, "2" : 2 }
>> >
>> > e) line feeds (row_to_json_pretty)
>> >
>> > select row_to_json(row(1,2), true) as row;
>> > row
>> > ----------
>> > {"f1":1,+
>> > "f2":2}
>> >
>> > Personally, I think we should stick to (b), however that would break a
>> > lot
>> > of test cases that already depend on (a). I've tried hard to minimize
>> > the
>> > amount of changes in expected/json.out, but it is quickly becomes
>> > cumbersome
>> > trying to support all of the above formats. So I've altered (c) and (d)
>> > to
>> > look like (b), naturally only whitespace was affected.
>>
>> Disagree. IMNSHO, the default should be (a), as it's the most compact
>> format and therefore the fastest. Whitespace injection should be
>> reserved for prettification functions.
>
>
> I have no strong opinion on choosing (a) over (b), just wanted to make the
> change minimally sensible. If at all, I think we should modify existing
> code to make JSON output consistent: that is, choose one format and stick to
> it.
sure -- did you mean to respond off-list?

anyways, inserting spacing into the serialization function formatting (xx_to_json) will, for me, raise the memory profile of the output json by 10%+ in nearly all cases. I just don't see the benefit of doing that given that the json is still not very 'pretty'.
Andrew Dunstan wrote:
> On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote:
>> Attached is a patch against master to generalize the JSON-producing
>> functions in utils/adt/json.c and to provide a set of callbacks which
>> can be overridden the same way that is already provided for *parsing*
>> JSON.
>
> I'm not necessarily opposed to this, but it sure seems like a lot of
> changes, and moderately invasive ones, to support something that could be
> done, at the cost of reparsing, with a simple loadable extension that I
> could create in a few hours of programming.

But this seems like a pretty reasonable change to make, no? Doesn't the total amount of code decrease after this patch? JSON stuff is pretty new, so some refactoring and generalization of what we have is to be expected.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Fri, May 22, 2015 at 9:43 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
> Andrew Dunstan wrote:
>>
>> On 05/20/2015 09:16 AM, Shulgin, Oleksandr wrote:
>
>> >Attached is a patch against master to generalize the JSON-producing
>> >functions in utils/adt/json.c and to provide a set of callbacks which can
>> >be overridden the same way that is already provided for *parsing* JSON.
>
>> I'm not necessarily opposed to this, but it sure seems like a lot of
>> changes, and moderately invasive ones, to support something that could be
>> done, at the cost of reparsing, with a simple loadable extension that I
>> could create in a few hours of programming.
>
> But this seems like a pretty reasonable change to make, no? Doesn't the
> total amount of code decrease after this patch? JSON stuff is pretty
> new so some refactoring and generalization of what we have is to be
> expected.
Yeah. Also, there have been a few previous gripes about this, for
example, http://www.postgresql.org/message-id/CAHbVmPzS+sVR+y-UgxjRq+XW4dqteVL-cOzc69zFFwmxjcKCxg@mail.gmail.com.
As noted, I definitely prefer 'space free' by default for efficiency
reasons, but standardizing the output has definitely got to be a
reasonable goal.

merlin
On Fri, May 22, 2015 at 10:51 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> As noted, I definitely prefer 'space free' by default for efficiency
> reasons, but standardizing the output has definitely got to be a
> reasonable goal.

Every JSON implementation I have ever used defaults to the minified version of JSON (no whitespace) when printed.
On Tue, May 26, 2015 at 2:50 AM, Shulgin, Oleksandr
<oleksandr.shulgin@zalando.de> wrote:
> Is it reasonable to add this patch to CommitFest now?

It's always reasonable to add a patch to the CommitFest if you would like for it to be reviewed and avoid having it get forgotten about. There seems to be some disagreement about whether we want this, but don't let that stop you from adding it to the next CommitFest.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 05/27/2015 02:37 PM, Robert Haas wrote:
> It's always reasonable to add a patch to the CommitFest if you would
> like for it to be reviewed and avoid having it get forgotten about.
> There seems to be some disagreement about whether we want this, but
> don't let that stop you from adding it to the next CommitFest.

I'm not dead set against it either. When I have time I will take a closer look.

cheers

andrew
On 05/27/2015 09:51 PM, Andrew Dunstan wrote:
> I'm not dead set against it either. When I have time I will take a
> closer look.

Andrew, will you have the time to review this? Please add yourself as reviewer in the commitfest app if you do.

My 2 cents is that I agree with your initial reaction: This is a lot of infrastructure and generalizing things, for little benefit. Let's change the current code where we generate JSON to be consistent with whitespace, and call it a day.

- Heikki
On 07/03/2015 06:27 AM, Heikki Linnakangas wrote:
> Andrew, will you have the time to review this? Please add yourself as
> reviewer in the commitfest app if you do.
>
> My 2 cents is that I agree with your initial reaction: This is a lot of
> infrastructure and generalizing things, for little benefit. Let's change
> the current code where we generate JSON to be consistent with whitespace,
> and call it a day.

I'm somewhat on vacation for the next week or so, so I won't claim it, but I'll try to make time to look at it. Other people (Merlin?) could also provide reviews.

cheers

andrew
2015-07-03 12:27 GMT+02:00 Heikki Linnakangas <hlinnaka@iki.fi>:
> My 2 cents is that I agree with your initial reaction: This is a lot of
> infrastructure and generalizing things, for little benefit. Let's change
> the current code where we generate JSON to be consistent with whitespace,
> and call it a day.

I am thinking it is not a bad idea. This code can enforce a uniform format, and it can check that the produced value is correct. It can be used in our code, and it can be used by extension developers.

This patch is not small, but there are really not too many new lines.

I'll do review today.

Regards

Pavel
2015-07-10 6:31 GMT+02:00 Pavel Stehule <pavel.stehule@gmail.com>:
> I'll do review today.

Hi

I am sending a review of this patch:

1. I reread the previous discussion and almost all are for this patch (me too)

2. I had to fix a typo in an hstore_io.c function (update attached); other things (patching, regress tests) went without problems

My objections:

1. comments - missing comments for some basic API, basic fields like "key_scalar" and similar

2. why did you do an indirect call via JsonOutContext?

    dst.value(&dst, (Datum) 0, JSONTYPE_NULL, InvalidOid, InvalidOid, false);

What is the benefit, instead of

    json_out_value(&dst, ....)

? Is it necessary?

3. if it should be used everywhere, then in the EXPLAIN statement too.

Regards

Pavel
> 1. comments - missing comments for some basic API, basic fields like
> "key_scalar" and similar

I thought it was pretty obvious from the code, because it's sort of the only source of docs on the subject right now. Should we add a proper documentation section, this would have been documented for sure.

> 2. why did you do an indirect call via JsonOutContext?
>
>     dst.value(&dst, (Datum) 0, JSONTYPE_NULL, InvalidOid, InvalidOid, false);
>
> What is the benefit, instead of
>
>     json_out_value(&dst, ....)
>
> ? Is it necessary?

For consistency. Even though we initialize the output context ourselves, there might be some code introduced between the json_out_init_context() and dst.value() calls that replaces some of the callbacks, and then there would be a difference.

> 3. if it should be used everywhere, then in the EXPLAIN statement too.

Ahh.. good catch. I'll have a look at this now.

Thanks for the review!

--
Alex
> For consistency. Even though we initialize the output context ourselves,
> there might be some code introduced between the json_out_init_context()
> and dst.value() calls that replaces some of the callbacks, and then there
> would be a difference.

With this consistency? I didn't see this style anywhere else in Postgres. Isn't it premature optimization?
On Fri, Jul 10, 2015 at 4:04 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> With this consistency? I didn't see this style anywhere else in Postgres.
> Isn't it premature optimization?

Well, one could call it premature pessimization due to dynamic call overhead.

IMO, the fact that json_out_init_context() sets the value callback to json_out_value is an implementation detail that the other parts of the code should not rely on. And for the Explain output, there are definitely going to be *some* code paths between context initialization and the output callbacks: these run in a number of different functions.

--
Alex
> Well, one could call it premature pessimization due to dynamic call
> overhead.
>
> IMO, the fact that json_out_init_context() sets the value callback to
> json_out_value is an implementation detail that the other parts of the
> code should not rely on.

Again - is it necessary? Postgres still uses modular code, not OOP code. I can understand using this technique when I need the possibility to change behavior. But these functions are used for printing JSON, not for printing anything else.
On Fri, Jul 10, 2015 at 5:16 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> Again - is it necessary? Postgres still uses modular code, not OOP code.
> I can understand using this technique when I need the possibility to
> change behavior. But these functions are used for printing JSON, not for
> printing anything else.

No, it's not strictly necessary.

For me it's not about procedural vs. object style, but rather about being able to override/extend the behavior consistently. And for that I would prefer that if I override the value callback in a JSON output context, it is called for every value being printed, not only for some of them.

Thank you for pointing out the case of the Explain format, I'd totally overlooked it in my first version. Trying to apply the proposed approach in the explain printing code led me to reorganize things slightly. I've added explicit functions for printing keys vs. values, so there is no need to expose that key_scalar param anymore. There are now separate before/after key and before/after value functions as well, but I believe it makes for cleaner code. (A rough sketch of the resulting callback shape follows below.)

Most of the complexity is still in the code that decides whether or not to put spaces (between the values, or for indentation) and newlines at certain points. Should we decide to unify the style we emit ourselves, this could be simplified, while still leaving room for great flexibility if overridden by an extension, for example.

Have a nice weekend.
--
Alex
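(For reference: a minimal sketch of the callback shape just described. The field names and signatures here are illustrative assumptions, not necessarily those of the actual patch.)

    /* Illustrative sketch only: hypothetical names, not the patch itself. */
    #include "postgres.h"
    #include "lib/stringinfo.h"

    typedef struct JsonOutContext JsonOutContext;

    struct JsonOutContext
    {
        StringInfo  buf;        /* where output accumulates */

        /* separate key and value callbacks, plus before/after hooks */
        void    (*before_key) (JsonOutContext *dst);
        void    (*key) (JsonOutContext *dst, const char *keyname);
        void    (*after_key) (JsonOutContext *dst);

        void    (*before_value) (JsonOutContext *dst);
        void    (*value) (JsonOutContext *dst, Datum val, Oid typoid);
        void    (*after_value) (JsonOutContext *dst);
    };

    /*
     * An extension overrides a single callback after json_out_init_context()
     * and relies on it being invoked for every key/value, e.g. replacing
     * the value callback to emit quoted numerics.
     */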
<p dir="ltr">On Jul 11, 2015 6:19 PM, "Pavel Stehule" <<a href="mailto:pavel.stehule@gmail.com">pavel.stehule@gmail.com</a>>wrote:<br /> ><br /> ><br /> ><br /> > 2015-07-1118:02 GMT+02:00 Shulgin, Oleksandr <<a href="mailto:oleksandr.shulgin@zalando.de">oleksandr.shulgin@zalando.de</a>>:<br/> >><br /> >> On Fri, Jul10, 2015 at 5:16 PM, Pavel Stehule <<a href="mailto:pavel.stehule@gmail.com">pavel.stehule@gmail.com</a>> wrote:<br/> >>>><br /> >>>> <br /> >>>> Well, one could call it premature pessimizationdue to dynamic call overhead.<br /> >>>><br /> >>>> IMO, the fact that json_out_init_context()sets the value callback to json_out_value is an implementation detail, the other parts of code shouldnot rely on. And for the Explain output, there definitely going to be *some* code between context initialization andoutput callbacks: these are done in a number of different functions.<br /> >>><br /> >>><br /> >>>Again - it is necessary? Postgres still use modular code, not OOP code. I can understand the using of this technique,when I need a possibility to change behave. But these function are used for printing JSON, not printing any others.<br/> >><br /> >><br /> >> No, it's not strictly necessary.<br /> >><br /> >> For meit's not about procedural- vs. object- style, but rather about being able to override/extend the behavior consistently. And for that I would prefer that if I override the value callback in a JSON output context, that it would becalled for every value being printed, not only for some of them.<br /> ><br /> ><br /> > please, can me show anyreal use case? JSON is JSON, not art work.<p dir="ltr">To quote my first mail:<p dir="ltr">The motivation behind thisto be able to produce specially-crafted JSON in a logical replication output plugin, such that numeric (and bigint) valuesare quoted. This requirement, in turn, arises from the fact that JavaScript specification, which is quite naturalto expect as a consumer for this JSON data, allows to silently drop significant digits when converting from stringto number object.<p dir="ltr">I believe this is a well-known problem and I'm aware of a number of tricks that mightbe used to avoid it, but none of them seems to be optimal from my standpoint.<p dir="ltr">I can also imagine this canbe used to convert date/time to string differently, or adding indentation depending on the depth in object hierarchy,etc.<p dir="ltr">> Still I don't see any value of this.<p dir="ltr">Huh? Why then do you spend time on review?<br/>
On Jul 11, 2015 6:19 PM, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
>
>
>
> 2015-07-11 18:02 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
>>
>> On Fri, Jul 10, 2015 at 5:16 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>>>
>>>>
>>>> Well, one could call it premature pessimization due to dynamic call overhead.
>>>>
>>>> IMO, the fact that json_out_init_context() sets the value callback to json_out_value is an implementation detail, the other parts of code should not rely on. And for the Explain output, there definitely going to be *some* code between context initialization and output callbacks: these are done in a number of different functions.
>>>
>>>
>>> Again - it is necessary? Postgres still use modular code, not OOP code. I can understand the using of this technique, when I need a possibility to change behave. But these function are used for printing JSON, not printing any others.
>>
>>
>> No, it's not strictly necessary.
>>
>> For me it's not about procedural- vs. object- style, but rather about being able to override/extend the behavior consistently. And for that I would prefer that if I override the value callback in a JSON output context, that it would be called for every value being printed, not only for some of them.
>
>
> please, can you show me any real use case? JSON is JSON, not artwork.

To quote my first mail:
The motivation behind this is to be able to produce specially-crafted JSON in a logical replication output plugin, such that numeric (and bigint) values are quoted. This requirement, in turn, arises from the fact that the JavaScript specification, which is quite natural to expect as a consumer for this JSON data, allows significant digits to be silently dropped when converting from string to Number object.
I believe this is a well-known problem and I'm aware of a number of tricks that might be used to avoid it, but none of them seems to be optimal from my standpoint.
I can also imagine this can be used to convert date/time to string differently, or adding indentation depending on the depth in object hierarchy, etc.
> Still I don't see any value of this.
Huh? Why then do you spend time on review?
<p dir="ltr">On Jul 11, 2015 8:41 PM, "Pavel Stehule" <<a href="mailto:pavel.stehule@gmail.com">pavel.stehule@gmail.com</a>>wrote:<br /> ><br /> > There is simple rule -be strict on output and tolerant on input. If I understand to sense of this patch - the target is one same format of JSONdocuments - so there are no space for any variability.<p dir="ltr">So, would you prefer explain json format on a singleline - no indentation or whitespace whatsoever?<p dir="ltr">This far it was only about whitespace, but it can be usefulfor tweaking other aspects of output, as I've mentioned before.<p dir="ltr">I can imagine the ability for 3rd-partycode to override certain aspects of the output would be really useful for extensions or background workers, decodingplugins, etc.<p dir="ltr">> I am thinking so general json functions has sense, but I partially disagree with yourimplementation.<p dir="ltr">Then what would you differently exactly?<p dir="ltr">--<br /> Alex
On Jul 11, 2015 8:41 PM, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
>
> There is a simple rule - be strict on output and tolerant on input. If I
> understand the sense of this patch - the target is one and the same
> format of JSON documents - so there is no space for any variability.

So, would you prefer explain json format on a single line - no indentation or whitespace whatsoever?
This far it was only about whitespace, but it can be useful for tweaking other aspects of output, as I've mentioned before.
I can imagine the ability for 3rd-party code to override certain aspects of the output would be really useful for extensions or background workers, decoding plugins, etc.
> I am thinking general json functions make sense, but I partially disagree
> with your implementation.

Then what would you do differently, exactly?
--
Alex
On Sun, Jul 12, 2015 at 4:35 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> 2015-07-12 10:29 GMT+02:00 Shulgin, Oleksandr <oleksandr.shulgin@zalando.de>:
>> So, would you prefer explain json format on a single line - no
>> indentation or whitespace whatsoever?
>
> yes - if you need pretty format, there is the function json_pretty - any
> more styles is wrong on the Postgres side.
>
>> This far it was only about whitespace, but it can be useful for tweaking
>> other aspects of output, as I've mentioned before.
>
> Postgres is a database, not a presentation server - it has to do any
> database operation as quickly as possible - and formatting is part of
> the client side.
>
>> I can imagine the ability for 3rd-party code to override certain aspects
>> of the output would be really useful for extensions or background
>> workers, decoding plugins, etc.
>
> we are talking about output - I can imagine there are only two
> possibilities - plain json, and pretty formatted json (but with only one
> style).

This makes sense. Postgres core really only needs to support the minimum styles necessary for core requirements. This means raw unformatted json for data productions to the client and an appropriate formatting for explain. Fancier stuff like a generic formatter is fine, but those features *belong in an extension*.

merlin
<p dir="ltr">> > we talking about output - I can imagine, so there is only two possibilities<br /> > > - plainjoin, and pretty formatted join (but with only one style).<br /> ><br /> > This makes sense. Postgres core reallyonly needs to support the<br /> > minimum styles necessary for core requirements. This means raw<br /> > unformattedjson for data productions to client and an appropriate<br /> > formatting for explain. Fancier stuff likea generic formatted is<br /> > fine but those features *belong in an extension*.<p dir="ltr">The thing is - it's notonly about whitespace, otherwise I would probably not bother with the generic interface. For my original problem, thereis simply no way to do this correctly in an extension w/o copying over all of the logic from json.c, which I have todo right now, would rather not.<p dir="ltr">Alex<br />
> > we are talking about output - I can imagine there are only two
> > possibilities - plain json, and pretty formatted json (but with only
> > one style).
>
> This makes sense. Postgres core really only needs to support the
> minimum styles necessary for core requirements. This means raw
> unformatted json for data productions to client and an appropriate
> formatting for explain. Fancier stuff like a generic formatted is
> fine but those features *belong in an extension*.

The thing is - it's not only about whitespace, otherwise I would probably not bother with the generic interface. For my original problem, there is simply no way to do this correctly in an extension w/o copying over all of the logic from json.c, which I have to do right now but would rather not.
Alex
> The thing is - it's not only about whitespace, otherwise I would probably
> not bother with the generic interface. For my original problem, there is
> simply no way to do this correctly in an extension w/o copying over all
> of the logic from json.c, which I have to do right now but would rather
> not.

I am sorry - we are talking about JSON, not about any styled document. I disagree - it should be implemented as an extension; the backport of JSON support is an extension.
On Sun, Jul 12, 2015 at 8:39 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> I am sorry - we are talking about JSON, not about any styled document. I
> disagree - it should be implemented as an extension; the backport of JSON
> support is an extension.

Hm... I'm having a hard time making sense of that statement, sorry.

To reiterate: for my problem, that is escaping numerics that can potentially overflow[1] under the ECMAScript standard, I want to be able to override the code that outputs the numeric converted to string. There is no way in the current implementation to do that *at all*, short of copying all the code involved in producing JSON output and changing it at certain points. One could try re-parsing the JSON instead, but that doesn't actually solve the issue, because the type information is lost forever at that point.

The whitespace unification was a mere side-effect of the original effort on this patch.
> To reiterate: for my problem, that is escaping numerics that can
> potentially overflow[1] under the ECMAScript standard, I want to be able
> to override the code that outputs the numeric converted to string. There
> is no way in the current implementation to do that *at all*, short of
> copying all the code involved in producing JSON output and changing it
> at certain points.
>
> The whitespace unification was a mere side-effect of the original effort
> on this patch.

The dynamic type change is something I would not do in a database, really :)

If you are afraid of overflow, then convert numeric to string immediately - in this case, the client has to support both variants - so an immediate cast should not be a problem.

Anyway, this check on the max number should be implemented in our JSON(b) out functions (as a warning?).
On 07/13/2015 05:41 AM, Shulgin, Oleksandr wrote:
> On Mon, Jul 13, 2015 at 9:44 AM, Pavel Stehule
> <pavel.stehule@gmail.com> wrote:
>> If you are afraid of overflow, then convert numeric to string
>> immediately - in this case, the client has to support both variants -
>> so an immediate cast should not be a problem.
>
> Yeah, but how would you do that in the context of a logical replication
> decoding plugin? I've tried a number of tricks for that, including, but
> not limited to, registering phony types to wrap the numeric type and
> replacing the OID of numeric with this custom type OID in the TupleDesc,
> but then again one has to register that as a known record type, etc.
>
>> Anyway, this check on the max number should be implemented in our
>> JSON(b) out functions (as a warning?).
>
> Not really, since this is a problem of the ECMAScript standard, not the
> JSON spec. For example, the Python module for handling JSON doesn't
> suffer from this overflow problem.
>
> The thing is, we cannot know which clients are going to consume the
> stream of decoded events, and if it's some implementation of JavaScript,
> it can suffer silent data corruption if we don't guard against that in
> the logical decoding plugin.
>
> Hope that makes it clear. :-)

Yes, but I think the plugin is the right place to do it. What is more, this won't actually prevent you completely from producing non-ECMAScript-compliant JSON, since json or jsonb values containing offending numerics won't be caught, AIUI. But a fairly simple-to-write function that reparsed and fixed the JSON inside the decoder would work.

cheers

andrew
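(A rough sketch of the reparse-and-fix approach described here, using the existing JSON parser callback API; only the scalar callback is shown, and a complete version would also need the object/array callbacks to re-emit the surrounding structure:)

    /*
     * Incomplete sketch: quote numeric scalars while re-emitting JSON, so
     * ECMAScript consumers don't silently lose precision.  A full version
     * must also supply the structural callbacks (object_start, array_start,
     * object_field_start, etc.) to reproduce braces, brackets, commas and
     * string escaping.
     */
    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "utils/jsonapi.h"

    static void
    quote_numerics_scalar(void *state, char *token, JsonTokenType tokentype)
    {
        StringInfo  out = (StringInfo) state;

        if (tokentype == JSON_TOKEN_NUMBER)
            appendStringInfo(out, "\"%s\"", token);
        else
            appendStringInfoString(out, token);
    }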
On 07/13/2015 10:46 AM, Ryan Pedela wrote:
> I had the exact same problem with Node.js and client-side Javascript.
> That is why I wrote json-bignum [1] for Node.js. There is a bower version
> [2] as well. The only caveat is that it is slower than the native JSON
> functions, but I am happy to receive PRs to improve performance.
>
> 1. https://github.com/datalanche/json-bignum
> 2. https://libraries.io/bower/json-bignum
>
> As far as large numbers in JSON, I think Postgres is doing the right
> thing and should not be changed. It is Javascript that is stupid here,
> and I don't think it is wise to add something to core just because one
> client does stupid things with large numbers. In addition, ES7 is
> introducing "value types" which will hopefully solve the large number
> problem in Javascript.
>
> The random whitespace issue is valid in my opinion and should be fixed.

OK, I think we're getting a consensus here. It's good to know the JS world is acquiring some sanity in this area.

Let's just fix the whitespace and be done, without all the callback stuff. That should be a much smaller patch.

cheers

andrew
<p dir="ltr">> Yes, but I think the plugin is the right place to do it. What is more, this won't actually prevent youcompletely from producing non-ECMAScript compliant JSON, since json or jsonb values containing offending numerics won'tbe caught, AIUI.<p dir="ltr">Ah, that's a good catch indeed.<p dir="ltr">> But a fairly simple to write functionthat reparsed and fixed the JSON inside the decoder would work.<p dir="ltr">Need to rethink this, but reparsing wasnever my favorite option here. :-)<p dir="ltr">--<br /> Alex<br />
On Mon, Jul 13, 2015 at 10:46 AM, Ryan Pedela <rpedela@datalanche.com> wrote:
> As far as large numbers in JSON, I think Postgres is doing the right
> thing and should not be changed. It is Javascript that is stupid here,
> and I don't think it is wise to add something to core just because one
> client does stupid things with large numbers. In addition, ES7 is
> introducing "value types" which will hopefully solve the large number
> problem in Javascript.

FWIW, I don't agree. If it's not easy to read the JSON that PostgreSQL generates using JavaScript, then a lot of people are just going to give up on doing it, and IMO that would be sad. Telling people that they have to parse the JSON using some parser other than the one built into their JavaScript engine, whack it around, and then render it as text and parse it again is not really an acceptable answer.

The reason why the logical decoding stuff allows multiple output formats is because Andres, quite correctly, foresaw that different people would need different output formats. He could have designed that system to output only one output format and just said, everybody's got to read and parse this, but that would have been slow. Instead, he tried to set things up so that you could get the output in the format that was most convenient for your client, whatever that is. On this thread, we're back-pedaling from that idea: sorry, you can get JSON output, but if you want JSON output that will be properly interpreted by your JSON parser, you can't have that. Regardless of the details of this particular patch, I can't endorse that approach. If we want people to use our software, we need to meet them where they are at, especially when we are only (IIUC) talking about inserting a few extra quotation marks.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 07/15/2015 10:52 AM, Robert Haas wrote:
> If we want people to use our software, we need to meet them where they
> are at, especially when we are only (IIUC) talking about inserting a few
> extra quotation marks.

The question for me is where is the best place to transform the data. The approach taken was both invasive and broken. The approach I suggested, reparsing and transforming it in the logical decoder, would be both fairly simple and completely noninvasive.

If someone gives me a test for what is an acceptable number for JS processors, I bet I could write a transformation function in an hour or two, and in a hundred lines or so. I admit that I probably have more experience doing this than anyone else, but the parser API was designed to be fairly simple, and I believe it is.

cheers

andrew
On Wed, Jul 15, 2015 at 8:52 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> FWIW, I don't agree. If it's not easy to read the JSON that PostgreSQL
> generates using JavaScript, then a lot of people are just going to give
> up on doing it, and IMO that would be sad. Telling people that they have
> to parse the JSON using some parser other than the one built into their
> JavaScript engine, whack it around, and then render it as text and parse
> it again is not really an acceptable answer.

The vast majority of Javascript users are going to be using Node.js when they connect to Postgres, if only for security reasons. If they use Node, they will be using node-postgres [1] or something that builds on top of it. For int64 and numerics in a row, the default is to return a string, and there is a flag you can set to round returned numbers if you prefer. There is also a way to override the default parsing of each Postgres type [2]. So in the case of JSON using my json-bignum module [3], the code looks like this:

    var pgTypes = require('pg').types;
    var bignumJSON = require('json-bignum');

    pgTypes.setTypeParser(JSON_TYPE_OID, function (value) {
        return bignumJSON.parse(value);
    });
    pgTypes.setTypeParser(JSONB_TYPE_OID, function (value) {
        return bignumJSON.parse(value);
    });

To me that code is super simple, and not a pain in the ass. In other words, it is not "telling people that they have to parse the JSON using some parser other than the one built into their JavaScript engine, whack it around, and then render it as text and parse it again". Like I said previously, the situation with Javascript will hopefully be remedied in a few years with ES7 anyway.

On Wed, Jul 15, 2015 at 8:52 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> The reason why the logical decoding stuff allows multiple output formats
> is because Andres, quite correctly, foresaw that different people would
> need different output formats. [...] If we want people to use our
> software, we need to meet them where they are at, especially when we are
> only (IIUC) talking about inserting a few extra quotation marks.

I would be okay with a generic way to specify output formats if there are many use cases beyond Javascript and JSON. I vaguely remember someone suggesting a FORMAT clause on CREATE TABLE which would specify how a particular column would output from a SELECT. For example, returning a date with a non-ISO format. I liked that idea.

However, if the only reason for different output formats is Javascript, that is silly. I have a very long list of feature requests that would probably only be beneficial to me or a handful of users. Should we implement them? No, of course not! If we did that, Postgres would cease to be the best open-source database. You can't have the best product and say yes to everything. Feature creep is the enemy of quality. If Javascript is the sole reason for supporting multiple output formats, then that is the definition of feature creep in my opinion. If there are many use cases beyond Javascript and JSON, then that is different and a conversation worth having.
On Wed, Jul 15, 2015 at 12:58 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
> The approach taken was both invasive and broken.

Well, then let's not do it that way.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Jul 15, 2015 at 1:10 PM, Ryan Pedela <rpedela@datalanche.com> wrote:
> Like I said previously, the
> situation with Javascript will hopefully be remedied in a few years with ES7
> anyway.
I don't understand these issues in great technical depth, but if
somebody is arguing that it's OK for PostgreSQL to be difficult to use
for a certain category of user for several years until the next
language rev becomes mainstream, then I disagree. The fact that
somebody wrote a patch to try to solve a problem means that the thing
in question is a problem for at least that one user. If he's the only
one, maybe we don't need to care all that much. If his needs are
representative of a significant user community, we should not turn our
backs on that community, regardless of whether we like the patch he
wrote, and regardless of how well we are meeting the needs of other
communities (like node.js users).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
<p dir="ltr">On Jul 17, 2015 12:23 AM, "Ryan Pedela" <<a href="mailto:rpedela@datalanche.com">rpedela@datalanche.com</a>>wrote:<br /> ><br /> > On Thu, Jul 16, 2015 at 11:51AM, Robert Haas <<a href="mailto:robertmhaas@gmail.com">robertmhaas@gmail.com</a>> wrote:<br /> >><br />>> I don't understand these issues in great technical depth, but if<br /> >> somebody is arguing that it'sOK for PostgreSQL to be difficult to use<br /> >> for a certain category of user for several years until the next<br/> >> language rev becomes mainstream, then I disagree. The fact that<br /> >> somebody wrote a patchto try to solve a problem means that the thing<br /> >> in question is a problem for at least that one user. If he's the only<br /> >> one, maybe we don't need to care all that much. If his needs are<br /> >> representativeof a significant user community, we should not turn our<br /> >> backs on that community, regardlessof whether we like the patch he<br /> >> wrote, and regardless of how well we are meeting the needs of other<br/> >> communities (like node.js users).<br /> ><br /> ><br /> > I completely agree. However we aren'ttalking about a usability problem with Postgres. We are actually talking about a usability problem with Javascript,and trying to implement a band-aid for it with Postgres. Javascript doesn't support large numbers, it just doesn't.There is nothing the Postgres community can do about that. Only the ECMAscript standards committee and implementerscan fix Javascript.<br /> ><br /> > Here is the current user flow of reading numerics from Postgres andthen doing some math with them in Javascript.<br /> ><br /> > 1. SELECT json<br /> > 2. Use json-bignum [1] moduleor custom JSON parser to correctly parse numerics.<br /> > 3. Perform addition, subtraction, etc of numerics usingeither custom numeric math library or an existing library such as bigdecimal.js [2].<br /> ><br /> > Here is theuser flow if this patch is accepted.<br /> ><br /> > 1. SELECT json with quoting flags set<br /> > 2. Customparser to find numeric strings within JSON and convert them into numerics. This is easy if JSON is simple, but maybe difficult with a very complex JSON.<br /> > 3. Perform addition, subtraction, etc of numerics using either customnumeric math library or an existing library such as bigdecimal.js [2].<br /> ><br /> > It is almost the exactsame user flow so what is the point?<p dir="ltr">In my case there's no select: we're running this in the context ofa logical decoding plugin.<p dir="ltr">The all safeguarding idea that is enabled by this patch is that if the client *expects*big numbers *and* it needs to perform arithmetic on them, it'll have the special handling anyway. And IMO, it wouldactually make more sense to use big numbers module only at the point where you have the need for special handling, notto parse the whole input in a nonstandard way.<p dir="ltr">But the clients that are unaware of big numbers or don't careabout them shouldn't be *forced* to use external modules for parsing json.<p dir="ltr">> This patch makes Postgrescore more complex<p dir="ltr">Yes, it does. But, that was not the purpose, obviously. :-)<p dir="ltr">> whilenot really solving the problem in Javascript.<p dir="ltr">It still allows for less risk of silent data corruption onthe js side.<p dir="ltr">--<br /> Alex
On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:
>
> > This patch makes Postgres core more complex
>
> Yes, it does. But that was not the purpose, obviously. :-)
>
> > while not really solving the problem in Javascript.
>
> It still allows for less risk of silent data corruption on the js side.

I have already pointed out how this patch is fundamentally broken. You
can achieve your aims by a fairly small amount of code inside your
logical decoder, and with no core code changes whatsoever. So I'm
puzzled why we are even still debating this broken design.

cheers

andrew
On 07/17/2015 10:11 AM, Andrew Dunstan wrote:
>
> On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:
>
>> > This patch makes Postgres core more complex
>>
>> Yes, it does. But that was not the purpose, obviously. :-)
>>
>> > while not really solving the problem in Javascript.
>>
>> It still allows for less risk of silent data corruption on the js side.
>
> I have already pointed out how this patch is fundamentally broken. You
> can achieve your aims by a fairly small amount of code inside your
> logical decoder, and with no core code changes whatsoever. So I'm
> puzzled why we are even still debating this broken design.

Incidentally, this doesn't look acceptable anyway:

> ! es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
> !                    NUMERICOID, 1702 /* numeric_out */);

We don't hardcode function oids elsewhere. So this is also something
that makes the patch unacceptable.

cheers

andrew
<p dir="ltr">On Jul 17, 2015 4:31 PM, "Andrew Dunstan" <<a href="mailto:andrew@dunslane.net">andrew@dunslane.net</a>>wrote:<br /> ><br /> ><br /> > On 07/17/2015 10:11AM, Andrew Dunstan wrote:<br /> >><br /> >><br /> >> On 07/17/2015 08:20 AM, Shulgin, Oleksandr wrote:<br/> >><br /> >><br /> >>> > This patch makes Postgres core more complex<br /> >>><br/> >>> Yes, it does. But, that was not the purpose, obviously. :-)<br /> >>><br /> >>>> while not really solving the problem in Javascript.<br /> >>><br /> >>> It still allowsfor less risk of silent data corruption on the js side.<br /> >>><br /> >>><br /> >><br />>> I have already pointed out how this patch is fundamentally broken. You can achieve your aims by a fairly smallamount of code inside your logical decoder, and with no core code changes whatsoever. So I'm puzzled why we are evenstill debating this broken design.<br /> ><br /> ><br /> ><br /> > Incidentally, this doesn't look acceptableanyway:<br /> >><br /> >> ! es->json_cxt.value(&es->json_cxt,num, JSONTYPE_NUMERIC,<br /> >> ! NUMERICOID, 1702 /* numeric_out */);<br /> ><br /> ><br /> > We don't hardcodefunction oids elsewhere. So this is also something that makes the patch unacceptable.<p dir="ltr">Well, good to know(I believe I've asked about this in the first mail specifically).<p dir="ltr">Is there any way a built-in function oidwould change/differ on different server versions? What would be the recommended way to do this?<p dir="ltr">--<br /> Alex<br/>
Andrew Dunstan wrote:

> I have already pointed out how this patch is fundamentally broken. You
> can achieve your aims by a fairly small amount of code inside your
> logical decoder, and with no core code changes whatsoever. So I'm
> puzzled why we are even still debating this broken design.

I went through all your responses over the entire thread and I couldn't
find your argument about how this is fundamentally broken. Can you
restate, or maybe give an archive link if I just missed it?

(Saying "but it changes so much of the existing code" is not really a
fundamental problem to me. I mean, it's not like the existing code is
perfect and needs no changes.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Shulgin, Oleksandr wrote:

> On Jul 17, 2015 4:31 PM, "Andrew Dunstan" <andrew@dunslane.net> wrote:
>
> > Incidentally, this doesn't look acceptable anyway:
> >>
> >> ! es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
> >> !                    NUMERICOID, 1702 /* numeric_out */);
> >
> > We don't hardcode function oids elsewhere. So this is also something
> > that makes the patch unacceptable.
>
> Well, good to know (I believe I've asked about this in the first mail
> specifically).
>
> Is there any way a built-in function oid would change/differ on
> different server versions? What would be the recommended way to do
> this?

C'mon, that's a trivial problem.  Just use getTypeOutputInfo();
numeric's OID is hardcoded as NUMERICOID.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
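To sketch what that suggestion looks like in practice (untested; the
helper name numeric_output_fn is illustrative, not from the patch):
getTypeOutputInfo() resolves the output function through the syscache,
so only the type OID, a constant exported by pg_type.h, needs to appear
in the source:

#include "postgres.h"
#include "catalog/pg_type.h"    /* NUMERICOID */
#include "utils/lsyscache.h"    /* getTypeOutputInfo() */

/*
 * Resolve numeric's output function at runtime rather than hardcoding
 * its OID (1702).  Hypothetical helper, for illustration only.
 */
static Oid
numeric_output_fn(void)
{
    Oid         typoutput;
    bool        typisvarlena;

    getTypeOutputInfo(NUMERICOID, &typoutput, &typisvarlena);
    return typoutput;
}

The offending call from the patch would then presumably become
something like:

    es->json_cxt.value(&es->json_cxt, num, JSONTYPE_NUMERIC,
                       NUMERICOID, numeric_output_fn());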
On 07/17/2015 10:30 AM, Alvaro Herrera wrote:
> Andrew Dunstan wrote:
>
>> I have already pointed out how this patch is fundamentally broken. You
>> can achieve your aims by a fairly small amount of code inside your
>> logical decoder, and with no core code changes whatsoever. So I'm
>> puzzled why we are even still debating this broken design.
>
> I went through all your responses over the entire thread and I couldn't
> find your argument about how this is fundamentally broken. Can you
> restate, or maybe give an archive link if I just missed it?
>
> (Saying "but it changes so much of the existing code" is not really a
> fundamental problem to me. I mean, it's not like the existing code is
> perfect and needs no changes.)

On July 13 I wrote:

> Yes, but I think the plugin is the right place to do it. What is more,
> this won't actually prevent you completely from producing
> non-ECMAScript compliant JSON, since json or jsonb values containing
> offending numerics won't be caught, AIUI. But a fairly simple to write
> function that reparsed and fixed the JSON inside the decoder would work.

The OP admitted that this was a serious flaw in his approach. In fact,
given that a json value can contain an offending numeric value, any
approach which doesn't involve reparsing is pretty much bound to fail.

cheers

andrew
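For concreteness, here is a rough, untested sketch of the kind of
reparse-and-fix function Andrew describes, written against the 9.5-era
backend json API (pg_parse_json() and JsonSemAction from
utils/jsonapi.h). It walks an existing json text and re-emits it with
every number quoted; the name json_requote_numerics and the whitespace
choices are illustrative only:

#include "postgres.h"
#include "lib/stringinfo.h"
#include "utils/json.h"         /* escape_json() */
#include "utils/jsonapi.h"      /* pg_parse_json(), JsonSemAction */

typedef struct RequoteState
{
    StringInfo  buf;            /* the rebuilt JSON text */
    bool        need_comma;     /* is a separator pending here? */
} RequoteState;

static void
rq_sep(RequoteState *rs)
{
    if (rs->need_comma)
        appendStringInfoString(rs->buf, ", ");
}

/* common open/close handling for objects and arrays */
static void
rq_open(RequoteState *rs, char c)
{
    rq_sep(rs);
    appendStringInfoChar(rs->buf, c);
    rs->need_comma = false;
}

static void
rq_close(RequoteState *rs, char c)
{
    appendStringInfoChar(rs->buf, c);
    rs->need_comma = true;
}

static void rq_object_start(void *state) { rq_open(state, '{'); }
static void rq_object_end(void *state)   { rq_close(state, '}'); }
static void rq_array_start(void *state)  { rq_open(state, '['); }
static void rq_array_end(void *state)    { rq_close(state, ']'); }

static void
rq_object_field_start(void *state, char *fname, bool isnull)
{
    RequoteState *rs = (RequoteState *) state;

    rq_sep(rs);
    escape_json(rs->buf, fname);        /* emits the quoted, escaped key */
    appendStringInfoString(rs->buf, ": ");
    rs->need_comma = false;             /* the value follows immediately */
}

static void
rq_scalar(void *state, char *token, JsonTokenType tokentype)
{
    RequoteState *rs = (RequoteState *) state;

    rq_sep(rs);
    if (tokentype == JSON_TOKEN_NUMBER)
        appendStringInfo(rs->buf, "\"%s\"", token);     /* the actual fix */
    else if (tokentype == JSON_TOKEN_STRING)
        escape_json(rs->buf, token);    /* token arrives de-escaped */
    else
        appendStringInfoString(rs->buf, token);     /* true/false/null */
    rs->need_comma = true;
}

/* Reparse a json text, quoting every number; returns a palloc'd string. */
static char *
json_requote_numerics(char *json)
{
    RequoteState rs;
    JsonLexContext *lex;
    JsonSemAction sem;

    rs.buf = makeStringInfo();
    rs.need_comma = false;

    lex = makeJsonLexContextCstringLen(json, strlen(json), true);

    memset(&sem, 0, sizeof(sem));       /* unused callbacks stay NULL */
    sem.semstate = &rs;
    sem.object_start = rq_object_start;
    sem.object_end = rq_object_end;
    sem.array_start = rq_array_start;
    sem.array_end = rq_array_end;
    sem.object_field_start = rq_object_field_start;
    sem.scalar = rq_scalar;

    pg_parse_json(lex, &sem);

    return rs.buf->data;
}

A production version would probably quote only values outside
Javascript's exactly-representable integer range (|n| > 2^53) rather
than every number, but the callback structure would be the same.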