Re: [GENERAL] COPY: row is too big - Mailing list pgsql-general

From Adrian Klaver
Subject Re: [GENERAL] COPY: row is too big
Date
Msg-id 6b0a0f3e-cd0d-1927-6c4e-ba32cc24a0a3@aklaver.com
In response to Re: [GENERAL] COPY: row is too big  (Rob Sargent <robjsargent@gmail.com>)
Responses Re: [GENERAL] COPY: row is too big
List pgsql-general
On 01/05/2017 08:31 AM, Rob Sargent wrote:
>
>
> On 01/05/2017 05:44 AM, vod vos wrote:
>> I finally figured it out as follows:
>>
>> 1. modified the corresponding data types of the columns to match
>> the csv file
>>
>> 2. if null values existed, defined the data type as varchar. The null
>> values cause problems too.
>>
>> so 1100 columns work well now.
>>
>> This problem wasted me three days. I have lots of csv data to COPY.
>>
>>
> Yes, you cost yourself a lot of time by not showing the original table
> definition into which you were trying to insert data.
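
For what it's worth, the fix the OP describes can be sketched roughly as
follows (table, column, and file names here are hypothetical, and only a
few of the 1100 columns are shown). Declaring real types instead of 1100
varchar columns keeps each row small enough to fit on a single heap page
(8160 bytes of usable space with the default 8 kB page size):

```sql
-- Hypothetical wide table; most of the 1100 columns elided.
CREATE TABLE wide_import (
    id          integer,
    measured_on date,
    label       varchar,    -- columns that may hold free text
    value_0001  numeric,
    value_0002  numeric
    -- ... remaining columns ...
);

-- In CSV format an unquoted empty field is read as NULL by default;
-- the NULL option below just makes that explicit.
COPY wide_import
FROM '/path/to/data.csv'
WITH (FORMAT csv, HEADER true, NULL '');
```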

Given that the table had 1100 columns, I am not sure I wanted to see it :)

Still, the OP did give it to us in the description:

https://www.postgresql.org/message-id/15969913dd3.ea2ff58529997.7460368287916683127%40zoho.com
"I create a table with 1100 columns with data type of varchar, and hope
the COPY command will auto transfer the csv data that contains some
character and date, most of which are numeric."

In retrospect, what I should have pressed for was a more complete
description of the data. I underestimated this description:

"And some the values in the csv file contain nulls, do this null values
matter? "
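
As an aside, if the missing values in the file appear as an explicit token
rather than as empty fields, COPY's NULL option can map that token to NULL
directly, so numeric or date columns need not be widened to varchar. A
hedged sketch, assuming a hypothetical table name and an "NA" marker:

```sql
-- Read the literal token "NA" as NULL, so it can load into
-- numeric/date columns without changing their declared types.
COPY wide_import
FROM '/path/to/data.csv'
WITH (FORMAT csv, HEADER true, NULL 'NA');
```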


--
Adrian Klaver
adrian.klaver@aklaver.com

