On Mon, Jan 25, 2016 at 9:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Pavel Stehule <pavel.stehule@gmail.com> writes:
> > 2016-01-24 11:47 GMT+01:00 <xtracoder@gmail.com>:
> >> When a postgres array has no elements at the beginning, array_to_json()
> >> generates a json array starting at the first initialized value
>
> > It doesn't look like a bug - it's hard to say where the start of a PostgreSQL
> > array is and how PostgreSQL arrays should be serialized to JSON. The first
> > field in a pg array is the first field in the JSON array.
>
> The current behavior looks reasonable to me too. Certainly the proposed
> change wouldn't be any more consistent --- what would happen when you had
> elements with subscripts less than 1?
>
I'd go with "since json arrays do not have index information attached to
them, it is unwise to use them with non-default (i.e., 1-based and present)
PostgreSQL arrays." It's likely too late to actually raise an error in that
situation. If the user really needs to use a non-default array structure,
they'd be advised to create their own conversion functions between
json-compatible arrays (1-based with leading nulls) and the PostgreSQL
arrays.
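
To sketch what such a conversion helper could look like (untested, handles
only one-dimensional int[], and the function name is made up here), padding
with leading nulls so element positions survive the trip to JSON:

    CREATE FUNCTION int_array_to_json_padded(a int[]) RETURNS json AS $$
      -- prepend (array_lower(a, 1) - 1) nulls, so the resulting 1-based
      -- JSON array keeps every element at its original subscript;
      -- subscripts below 1 simply get no padding here (they would be
      -- disallowed under the rule proposed below anyway)
      SELECT array_to_json(
               ARRAY(SELECT NULL::int
                     FROM generate_series(1, array_lower(a, 1) - 1))
               || a
             );
    $$ LANGUAGE sql;

    SELECT int_array_to_json_padded('[3:4]={7,8}'::int[]);
    -- should yield [null,null,7,8]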

I would say that the current behavior is wrong since there is no way to
round-trip. Any proper solution would at least need a user-specified
option for whether to truncate leading null array positions (which would
suppress the error for using a starting index > 1). Positions < 1 should be
outright disallowed; I cannot imagine adding an argument that allows one
to specify a negative starting index.
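
For example (untested, written from memory of the array literal syntax),
the starting subscript is simply discarded today, and nothing in the JSON
output lets you recover it:

    SELECT array_to_json('[5:7]={10,20,30}'::int[]);
    -- presumably gives [10,20,30]; the lower bound 5 is gone

    SELECT array_lower(
             ARRAY(SELECT json_array_elements_text('[10,20,30]'::json))::int[],
             1);
    -- rebuilding from the JSON gives a default 1-based array, not [5:7]
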
David J.