Re: Bug in copy - Mailing list pgsql-bugs

From: me nefcanto
Subject: Re: Bug in copy
Date:
Msg-id: CAEHBEOAS4X4UipH2or=5qJzYw8J+PLh03Lq7uTSEfz9Gr7031Q@mail.gmail.com
In response to: Re: Bug in copy (me nefcanto <sn.1361@gmail.com>)
Responses: Re: Bug in copy
List: pgsql-bugs
@David, I looked at pg_bulkload. Amazing performance. But it's a command-line tool, and I need to insert bulk data from my Node.js app, via code.
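
For what it's worth, COPY itself can be driven from application code. Below is a minimal Node.js sketch using the node-postgres (`pg`) and `pg-copy-streams` packages; the `items_staging` table and the CSV file name are hypothetical placeholders:

const fs = require('node:fs');
const { pipeline } = require('node:stream/promises');
const { Client } = require('pg');
const { from: copyFrom } = require('pg-copy-streams');

async function loadCsvIntoStaging() {
  const client = new Client(); // connection settings taken from the usual PG* environment variables
  await client.connect();
  try {
    // client.query() with a copy-from stream returns a writable stream;
    // piping the file into it runs COPY ... FROM STDIN from application code.
    const copyStream = client.query(
      copyFrom('COPY items_staging FROM STDIN WITH (FORMAT csv, HEADER true)')
    );
    await pipeline(fs.createReadStream('items.csv'), copyStream);
  } finally {
    await client.end();
  }
}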

On Sun, Feb 9, 2025 at 4:00 PM me nefcanto <sn.1361@gmail.com> wrote:
@Laurenz, if I use `insert into` or `merge`, would I be able to bypass records with errors, or would it fail there too? I mean, there are lots of ways a record can be rejected: unique indexes, check constraints, foreign key constraints, etc. What happens in those cases?

And why not fix "on_error ignore" in the first place? Maybe that would be a simpler way. I don't know the internals of bulk insertion, but if it has a loop in it at some point, it should be much simpler to catch errors in that loop.

Regards
Saeed

On Sun, Feb 9, 2025 at 9:32 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Sat, 2025-02-08 at 09:31 +0330, me nefcanto wrote:
> Inserting a million records in a way that is not all-or-fail is a requirement. What options do we have for that?

Use COPY to load the data into a new (temporary?) table.
Then use INSERT INTO ... SELECT ... ON CONFLICT ... or MERGE to merge
the data from that table to the actual destination.

COPY is not a full-fledged ETL tool.

Yours,
Laurenz Albe
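
For reference, here is a minimal sketch of the staging-table approach Laurenz describes, driven from Node.js with `pg`; the `items` table, its `id` primary key, and the column layout are hypothetical placeholders, and the conflict target has to match your own unique constraint:

const { Client } = require('pg');

async function mergeStagedRows() {
  const client = new Client();
  await client.connect();
  try {
    // Hypothetical schema: items(id integer primary key, name text not null)
    await client.query(
      'CREATE TEMP TABLE items_staging (LIKE items INCLUDING DEFAULTS)'
    );

    // Load the raw rows into items_staging in this same session
    // (for example with COPY FROM STDIN, as sketched earlier in the thread).

    // Rows that would collide on the primary key are skipped; rows violating other
    // constraints (NOT NULL, CHECK, foreign keys) still abort the statement, so they
    // should be cleaned up in the staging table before this step.
    await client.query(
      `INSERT INTO items
         SELECT * FROM items_staging
         ON CONFLICT (id) DO NOTHING`
    );
  } finally {
    await client.end();
  }
}

MERGE can be used instead of ON CONFLICT when existing rows also need to be updated rather than skipped.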
