
From Daniel Verite
Subject Re: Emitting JSON to file using COPY TO
Date
Msg-id 47946a5e-f24d-4028-b113-0f45d4493c83@manitou-mail.org
In response to Re: Emitting JSON to file using COPY TO  (Dave Cramer <davecramer@postgres.rocks>)
List pgsql-hackers
    Dave Cramer wrote:

> > This argument for leaving 3 as the column count makes sense to me.  I
> > agree this content is not meant to facilitate interpreting the contents at
> > a protocol level.
> >
>
> I'd disagree. From my POV if the data comes back as a JSON Array this is
> one object and this should be reflected in the column count.

The doc says this:
"Int16
    The number of columns in the data to be copied (denoted N below)."

and this formulation is repeated in PQnfields() for libpq:

"PQnfields
    Returns the number of columns (fields) to be copied."

How should that sentence be interpreted?
"to be copied" from what, into what, and by what means?

A plausible interpretation is "to be copied from the source data
into the COPY stream, by the backend". So the number of columns
to be copied likely refers to the columns of the dataset, not to its
"in-transit" form, which is text, csv, or json.

The interpretation you're proposing also makes sense: there is just
one json column per row, or even a single row with a single column
for the entire dataset in the force_array case. But then the question
is why that column count isn't always 1 for the original "text"
format, since each row is represented in the stream as a single long
piece of text.
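
For reference, here is a minimal libpq sketch (the connection
string and the 3-column query are just assumptions for illustration)
showing what PQnfields() reports today during COPY OUT; a json format
would have to either keep that dataset column count or collapse it
to 1:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection parameters taken from the PG* environment variables. */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Hypothetical 3-column dataset; the question is whether a json
     * format should still report 3 here or collapse it to 1. */
    PGresult *res = PQexec(conn,
        "COPY (SELECT 1 AS a, 2 AS b, 3 AS c) TO STDOUT");
    if (PQresultStatus(res) == PGRES_COPY_OUT)
        printf("PQnfields = %d\n", PQnfields(res));  /* 3 with the text format */

    /* Drain the COPY stream before reusing the connection. */
    char *buf;
    while (PQgetCopyData(conn, &buf, 0) > 0)
        PQfreemem(buf);

    PQclear(res);
    PQfinish(conn);
    return 0;
}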


Best regards,
--
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite


