Archiving data to another server using copy, psql with pipe - Mailing list pgsql-general

From pinker
Subject Archiving data to another server using copy, psql with pipe
Date
Msg-id 1491427563491-5954469.post@n3.nabble.com
List pgsql-general
Hi,
I'm trying to write an archive manager which will first copy data from
tables with a WHERE clause and then, after a successful load into the second
server, delete those rows.
The simplest (and probably fastest) solution I came up with is to use COPY:

psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) to stdout" | psql -h localhost postgres -c "copy b from stdin"
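Since the DELETE should only ever run when the whole pipeline succeeded, I plan to check the exit codes of both pipe stages, not just the last one. A minimal sketch of that check, with stand-in commands in place of the two psql calls (the producer/consumer functions here are placeholders, not real psql invocations):

```shell
#!/usr/bin/env bash
# Check BOTH stages of the COPY pipe before deleting anything.
# Stand-ins: in the real script the producer would be
#   psql -h src postgres -c "copy (...) to stdout"
# and the consumer
#   psql -h dst postgres -c "copy b from stdin"
producer() { printf 'row1\nrow2\n'; }   # stands in for COPY ... TO STDOUT
consumer() { cat > /dev/null; }         # stands in for COPY ... FROM STDIN

producer | consumer
status=("${PIPESTATUS[@]}")             # bash: exit codes of all pipe stages

if [ "${status[0]}" -eq 0 ] && [ "${status[1]}" -eq 0 ]; then
    echo "both sides succeeded: safe to run the DELETE step"
else
    echo "pipe failed (producer=${status[0]}, consumer=${status[1]}): do not delete" >&2
    exit 1
fi
```

Alternatively, `set -o pipefail` makes the pipeline's overall exit status non-zero if any stage fails, which is enough when the individual codes aren't needed.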

I have made a very simple test to check whether I can rely on
"transactional" safety. It's not two-phase commit, of course, but it seems
to throw an error if something goes wrong, and the load appears to be atomic
(I assume). The test was:

CREATE TABLE public.a
(
  id integer,
  k01 numeric (3)
);

CREATE TABLE public.b
(
  id integer,
  k01 numeric (1)
);

insert into a select n,n from generate_series(1,100) n;

and then:
psql -h localhost postgres -c "copy a to stdout" | psql -h localhost postgres -c "copy b from stdin"

psql then threw an error (values 10..100 overflow the numeric(1) column)
and no rows were inserted into table b, so it seems to be OK.

Is there maybe something I'm missing?
Some specific condition under which something could go wrong and make the
process non-atomic? (I don't care about data consistency in this particular
case.)
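For the delete step itself, one detail I intend to handle is fixing the time boundary once, since evaluating now() in the COPY and again in a later DELETE would give two different cutoffs and could delete rows that were never copied. A sketch of that (the cutoff literal and host names are placeholders; in a real run the cutoff could be fetched first with something like `cutoff=$(psql -h localhost postgres -Atc "select now()")`):

```shell
#!/usr/bin/env bash
# Capture the time boundary ONCE, then reuse the same literal in both
# the COPY (archive) and DELETE statements so they select identical rows.
cutoff='2017-04-05 00:00:00'   # placeholder value

copy_sql="copy (select * from a where time < '$cutoff') to stdout"
delete_sql="delete from a where time < '$cutoff'"

# The archive step would then be:
#   psql -h src postgres -c "$copy_sql" | psql -h dst postgres -c "copy b from stdin"
# and, only if that pipeline succeeded:
#   psql -h src postgres -c "$delete_sql"
echo "$copy_sql"
echo "$delete_sql"
```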





