@laurenz if I use `INSERT INTO` or `MERGE`, would I be able to skip records with errors, or would it fail there too? I mean, there are many ways a record can be rejected: unique indexes, check constraints, foreign key constraints, etc. What happens in those cases?
And why not fix `ON_ERROR ignore` in the first place? Maybe that would be a simpler way. I don't know the internals of bulk insertion, but if it has a loop somewhere, it would be much easier to catch errors in that loop.
On Sat, 2025-02-08 at 09:31 +0330, me nefcanto wrote:
> Inserting a million records in a way that is not all-or-fail is a requirement. What options do we have for that?
Use COPY to load the data into a new (possibly temporary) staging table. Then use INSERT INTO ... SELECT ... ON CONFLICT ... or MERGE to move the data from that table to the actual destination.
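To make that concrete, here is a minimal sketch. All table names, columns, constraints, and the file path are invented for illustration, and `COPY ... ON_ERROR` requires PostgreSQL 17 or later:

```sql
-- 1. Stage the raw data in a table with no constraints,
--    so no row can be rejected at load time.
CREATE TEMPORARY TABLE staging (id bigint, email text, age int);

-- ON_ERROR ignore (PostgreSQL 17+) skips malformed input lines.
COPY staging FROM '/tmp/data.csv' (FORMAT csv, ON_ERROR ignore);

-- 2. Move only the rows that would pass the destination's constraints.
--    ON CONFLICT DO NOTHING handles unique-index violations only;
--    check and foreign-key violations must be filtered out in the
--    WHERE clause, because ON CONFLICT does not catch them.
INSERT INTO customers (id, email, age)
SELECT s.id, s.email, s.age
FROM staging AS s
WHERE s.age BETWEEN 0 AND 150               -- mirrors a CHECK constraint
  AND EXISTS (SELECT 1 FROM accounts AS a   -- mirrors an FK constraint
              WHERE a.id = s.id)
ON CONFLICT (id) DO NOTHING;
```

Note that this also answers the question above: ON CONFLICT only suppresses unique and exclusion constraint violations, so rows that would fail a check or foreign key constraint have to be filtered out explicitly before the insert.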