Tom Lane wrote:
>Kris Jurka <books@ejurka.com> writes:
>
>>Endlessly extending the COPY command doesn't seem like a winning
>>proposition to me and I think if we aren't comfortable telling every user
>>to write a script to pre/post-process the data we should instead provide a
>>bulk loader/unloader that transforms things to our limited COPY
>>functionality. There are all kinds of feature requests I've seen
>>along these lines that would make COPY a million option mess if we try to
>>support all of it directly.
>
>I agree completely --- personally I'd not have put CSV into the backend
>either.
>
>IIRC we already have a TODO item for a separate bulk loader, but no
>one's stepped up to the plate yet :-(
>
IIRC, the way it happened was that a proposal was made to do CSV import/export in a fairly radical way. I countered with a much more modest approach, which was generally accepted and which Bruce and I then implemented, not without some angst (as well as a little Sturm und Drang).
The advantage of having it in COPY is that it can be done serverside
direct from the file system. For massive bulk loads that might be a
plus, although I don't know what the protocol+socket overhead is. Maybe
it would just be lost in the noise. Certainly I can see some sense in
having COPY deal with straightforward cases and a bulk-load-unload
program in bin to handle the hairier cases. Multiline fields would come
into that category. The bulk-load-unload facility could possibly handle
things other than CSV format too (XML anyone?). The nice thing about an
external program is that it would not have to handle data embedded in an
SQL stream, so the dangers from shifts in newline style, missing quotes,
and the like would be far lower.
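To make the division of labor concrete, here is a minimal sketch (my illustration, not the proposed loader) of the kind of pre-processing such an external program could do: flatten CSV rows, including quoted multiline fields, into COPY's default tab-delimited text format with its backslash escapes, so the server-side COPY never has to see the hairy cases. The function name `csv_to_copy_text` is hypothetical.

```python
import csv
import io
import sys

def csv_to_copy_text(csv_text):
    """Convert CSV input (possibly with embedded newlines in quoted
    fields) into COPY's default text format: tab-delimited rows with
    backslash-escaped special characters."""
    out = io.StringIO()
    for row in csv.reader(io.StringIO(csv_text)):
        cooked = []
        for field in row:
            # Escape the characters that are special in COPY text format.
            field = (field.replace('\\', '\\\\')
                          .replace('\t', '\\t')
                          .replace('\n', '\\n')
                          .replace('\r', '\\r'))
            cooked.append(field)
        out.write('\t'.join(cooked) + '\n')
    return out.getvalue()

if __name__ == '__main__':
    # A row whose second field spans two lines -- exactly the sort of
    # input that trips up line-oriented loading.
    sample = 'a,"multi\nline",c\n'
    sys.stdout.write(csv_to_copy_text(sample))
```

The output of such a filter could be piped straight into a plain COPY ... FROM STDIN, keeping the server-side code path simple.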
We do need to keep things in perspective a bit. The small wrinkle that
has spawned this whole thread will not affect most users of the facility
- and many users will thank us for having provided it.
cheers
andrew