Thread: Can COPY update or skip existing records?
Hey all,

I've got a table with a unique constraint across a few fields which I need to regularly import a batch of data into. Is there a way to do it with COPY without getting conflicts on the unique constraint? I have no way of being certain that some of the data I'm trying to load isn't in the table already.

Ideally I'd like it to operate like MySQL's on_duplicate_key_update option, but for now I'd settle for just ignoring existing rows and proceeding with everything else.

Thanks,
--
Glenn
Hi there,
I hope this helps.

An "on_duplicate_key_update"-style operation is being worked on for a new version of Postgres :)

The only way (or at least the best way, I think) to do what you want is to use a temporary table... let's see:
/* Create a table with the same structure as the target table (table_01)... you can do it however you like */
create table temp_01 as
  (select * from table_01 limit 1);

/* Empty the row copied by the LIMIT 1 */
TRUNCATE TABLE temp_01;
/* COPY the batch into the temp table */
COPY temp_01 FROM '/tmp/table';
/* Insert only the rows that are not already in the target table */
insert into table_01
select a.*
from temp_01 a
where not exists
  (select 1 from table_01 b
   where a.cod_serial = b.cod_serial);
/* Or you could do it like this: first delete from temp_01 the rows that already exist in table_01 */
delete from temp_01 a
where exists
  (select 1 from table_01 b
   where a.cod_serial = b.cod_serial);
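If you go with the DELETE, everything left in temp_01 is new, so a plain INSERT should finish the job (just a sketch, following the same example):

insert into table_01
select * from temp_01;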
I hope this helps.
Best Regards,
Rafael Domiciano
Postgres DBA
2008/9/30 Glenn Gillen <glenn.gillen@gmail.com>
Hey all,
I've got a table with a unique constraint across a few fields which I
need to regularly import a batch of data into. Is there a way to do it
with COPY without getting conflicts on the unique constraint? I have no
way of being certain that some of the data I'm trying to load isn't in
the table already.
Ideally I'd like it to operate like MySQL's on_duplicate_key_update
option, but for now I'll suffice with just ignoring existing rows and
proceeding with everything else.
Thanks,
--
Glenn
On Wed, Oct 1, 2008 at 6:37 AM, Rafael Domiciano <rafael.domiciano@gmail.com> wrote:
CREATE TABLE temp_01( LIKE table_01 );
OR
CREATE TABLE temp_01 as select * from table_01 where 1 = 2;
These statements would avoid the need for the TRUNCATE in your example.
For your current requirement, you can create a trigger on the table that silently rejects duplicate rows. To implement UPDATE ON DUPLICATE, you can add logic to the same trigger to update the row that is already present.
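Something along these lines should do it (a rough sketch, with made-up function and trigger names, assuming cod_serial is the unique key as in Rafael's example):

-- Skip rows whose key already exists in the target table
CREATE OR REPLACE FUNCTION table_01_skip_dupes() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM table_01 WHERE cod_serial = NEW.cod_serial) THEN
        -- To emulate UPDATE ON DUPLICATE, issue an UPDATE here instead.
        RETURN NULL;  -- returning NULL from a BEFORE trigger silently drops the row
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER skip_dupes
    BEFORE INSERT ON table_01
    FOR EACH ROW EXECUTE PROCEDURE table_01_skip_dupes();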
Scour these archives, you'll find this topic discussed in detail earlier.
Best regards,
--
gurjeet[.singh]@EnterpriseDB.com
singh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com
EnterpriseDB http://www.enterprisedb.com
Mail sent from my BlackLaptop device
On Tue, Sep 30, 2008 at 6:16 AM, Glenn Gillen <glenn.gillen@gmail.com> wrote:
> Hey all,
>
> I've got a table with a unique constraint across a few fields which I
> need to regularly import a batch of data into. Is there a way to do it
> with COPY without getting conflicts on the unique constraint? I have no
> way of being certain that some of the data I'm trying to load isn't in
> the table already.
>
> Ideally I'd like it to operate like MySQL's on_duplicate_key_update
> option, but for now I'll suffice with just ignoring existing rows and
> proceeding with everything else.

Look into pg_loader, I believe it has that capability.
On Tue, Sep 30, 2008 at 5:16 AM, Glenn Gillen <glenn.gillen@gmail.com> wrote:
> Hey all,
>
> I've got a table with a unique constraint across a few fields which I
> need to regularly import a batch of data into. Is there a way to do it
> with COPY without getting conflicts on the unique constraint? I have no
> way of being certain that some of the data I'm trying to load isn't in
> the table already.
>
> Ideally I'd like it to operate like MySQL's on_duplicate_key_update
> option, but for now I'll suffice with just ignoring existing rows and
> proceeding with everything else.

I ran into a similar problem. I'm using these merge_by_key functions:

http://pgfoundry.org/projects/mbk

Here's a quick example...

CREATE TEMP TABLE foo (LIKE dst INCLUDING DEFAULTS);

COPY foo (c1, c2) FROM STDIN;
(your copy data here)
\.

SELECT * FROM merge_by_key(
  'public',              -- table schema
  'dst',                 -- table name
  'mnew.c2 < mold.c2',   -- merge condition
  'select c1,c2 FROM foo'
);

Disclaimer: The author is a friend of mine. :-)