joshua <jzuellig@arbormetrix.com> writes:
> My apologies, I'm still somewhat new to this. Specifically, I'm dealing with
> COPY FROM CSV. I had assumed that since a CSV is essentially a pile of text,
> and COPY FROM is smart enough to interpret all sorts of CSV entries into
> PostgreSQL data types, that if I wanted to allow a nonstandard conversion,
> I'd have to define some sort of cast to allow COPY FROM to interpret, say,
> ...,green,... as {green}.
COPY is not smart at all. It just looks at the column types of the
target table and assumes that the incoming data is of those types.
(More precisely, it applies the input conversion function of each
column's data type, after having separated and de-escaped the text
according to datatype-independent format rules.)
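
For illustration, here is a minimal sketch of that behavior with an
array column (the table and file names are invented):

    CREATE TABLE widgets (id int, colors text[]);

    -- data.csv contains lines such as:
    --   1,"{green}"
    --   2,"{red,blue}"
    -- COPY strips the CSV quoting, then hands the remaining text,
    -- e.g. {green}, to array_in, the input function for text[].
    COPY widgets FROM '/tmp/data.csv' WITH (FORMAT csv);

A bare field such as 1,green would fail with "malformed array
literal", since array_in never sees valid array syntax.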
> I could set this up to use a staging table, but honestly, given our systems,
> it'd be easier for me to change all of our source CSVs to simply read
> ...,{abc},... instead of ...,abc,... than to change our code base to use a
> series of staging tables.
In that case, adjusting the source data is the way to go. Or you could
look at using an external ETL tool to do that for you. We've resisted
putting much transformational smarts into COPY because the main goal
for it is to be as fast and reliable as possible.
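
For comparison, the staging-table route you mentioned would look
roughly like this (again with invented names):

    CREATE TEMP TABLE staging (id int, color text);

    -- Load the unmodified CSV, e.g. lines like  1,green
    COPY staging FROM '/tmp/data.csv' WITH (FORMAT csv);

    -- Wrap each bare value into a one-element array while moving
    -- the rows into the real table.
    INSERT INTO widgets (id, colors)
    SELECT id, ARRAY[color] FROM staging;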
regards, tom lane