Re: Decomposing xml into table - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Decomposing xml into table
Msg-id: 2086149.1592853236@sss.pgh.pa.us
In response to: Decomposing xml into table (Surafel Temesgen <surafel3000@gmail.com>)
List: pgsql-hackers

Surafel Temesgen <surafel3000@gmail.com> writes:
>  In PostgreSQL there is a function table_to_xml to map table content
> to an xml value, but there is no functionality to decompose xml back
> into a table

Huh?  XMLTABLE does that, and it's even SQL-standard.
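
For reference, something along these lines (an untested sketch, with
made-up element and column names) already turns an XML document back
into rows:

    SELECT t.id, t.name
    FROM xmltable('/rows/row'
                  PASSING '<rows>
                             <row><id>1</id><name>one</name></row>
                             <row><id>2</id><name>two</name></row>
                           </rows>'::xml
                  COLUMNS id   int  PATH 'id',
                          name text PATH 'name') AS t;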

> I propose to have this by extending COPY to handle xml format as well,
> because the file parsing and tuple formation functions are in there

Big -1 on that.  COPY is not for general-purpose data transformation.
The more unrelated features we load onto it, the slower it will get,
and probably also the more buggy and unmaintainable.  There's also a
really fundamental mismatch, in that COPY is designed to do row-by-row
processing with essentially no cross-row state.  How would you square
that with the inherently nested nature of XML?

> and it also seems to
> me that implementing it without using an xml library is simpler

I'm not in favor of implementing our own XML functionality, at least
not unless we go all the way and remove the dependency on libxml2
altogether.  That wouldn't be a terrible idea --- libxml2 has a long
and sad track record of bugs, including security issues.  But it'd be
quite a big job, and it'd still have nothing to do with COPY.

The big-picture question here, though, is why expend effort on XML at all?
It seems like JSON is where it's at these days for that problem space.
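The JSON-to-rows direction is already well covered, too; for instance
(untested, column definition list made up):

    SELECT *
    FROM json_to_recordset('[{"id":1,"name":"one"},
                             {"id":2,"name":"two"}]')
         AS t(id int, name text);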

            regards, tom lane


