Re: Bug in copy - Mailing list pgsql-bugs

From me nefcanto
Subject Re: Bug in copy
Date
Msg-id CAEHBEOB2LOaDZycmkjcYDG6JJF0_kFX3gc9H+ZrL=cPNF+WnOg@mail.gmail.com
In response to Re: Bug in copy  (Zhang Mingli <zmlpostgres@gmail.com>)
List pgsql-bugs
Hi, thank you for the response. If we analyze the name semantically, it should have been called on_type_error or something similar. But what matters is the problem at hand: inserting a million records without an all-or-nothing failure is a requirement. What options do we have for that?
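One common workaround is to COPY into an unconstrained staging table and then move the rows over with INSERT ... ON CONFLICT DO NOTHING, which silently skips rows that would violate a unique constraint. A rough sketch (the staging-table name and column types are assumptions; the target table and columns come from the COPY command quoted below):

```sql
-- Hypothetical staging table mirroring "Parts", but with no constraints
create unlogged table "PartsStaging" ("Id" int, "Title" text);

-- Bulk load into staging; on_error ignore still skips rows that
-- fail data type conversion
copy "PartsStaging" ("Id", "Title")
from stdin with (format csv, on_error ignore);

-- Move rows into the real table, skipping constraint violations
insert into "Parts" ("Id", "Title")
select "Id", "Title" from "PartsStaging"
on conflict do nothing;

drop table "PartsStaging";
```

Note that ON CONFLICT DO NOTHING only covers unique/exclusion violations; rows failing other constraints (check, foreign key) would still need filtering in the SELECT.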

On Sat, Feb 8, 2025 at 9:22 AM Zhang Mingli <zmlpostgres@gmail.com> wrote:
On Feb 8, 2025 at 13:28 +0800, me nefcanto <sn.1361@gmail.com>, wrote:
Hello
I run this command:
copy "Parts" ("Id","Title") from stdin with (format csv, delimiter ",", on_error ignore)
But I receive this error:
duplicate key value violates unique constraint "PartsUniqueLocaleTitle"
This means that the on_error setting is not working. When I try to insert a million records, this becomes extremely annoying and counterproductive.
When we specify that on_error should be ignored, any type of error including data type inconsistency, check constraint inconsistency, foreign key inconsistency, etc. should be ignored and Postgres should move to the next record and not fail the entire bulk operation.
Regards
Saeed Nemati

Hi,

As I understand it, on_error is designed to handle errors during data type conversion in PostgreSQL, similar to what we do in Greenplum or Cloudberry.
Since these rows convert successfully, on_error does not apply; the unique-constraint violation happens later, at insertion time.
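For illustration, the distinction can be sketched like this (assuming PostgreSQL 17, where the ON_ERROR option was introduced; table and data are made up):

```sql
create temp table t (id int, title text);

-- A row whose field fails integer conversion is skipped:
copy t (id, title) from stdin with (format csv, on_error ignore);
-- input:  1,foo
--         abc,bar    <- "abc" is not an int: row skipped, COPY continues
--         2,baz

-- A constraint violation, by contrast, still aborts the whole COPY:
create unique index on t (title);
copy t (id, title) from stdin with (format csv, on_error ignore);
-- input:  3,foo      <- duplicate title: ERROR, entire COPY fails
```

In other words, on_error ignore operates at the input-parsing stage, before constraints are ever checked.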

--
Zhang Mingli
HashData
