Thread: pg_dump to a remote server
We're upgrading from v8.4 to 9.6 on a new VM in a different DC. The dump file will be more than 1TB, and there's not enough disk space on the current system for the dump file.

Thus, how can I send the pg_dump file directly to the new server while the pg_dump command is running? NFS is one method, but are there others (netcat, rsync)? Since it's within the same company, encryption is not required.

Or would it be better to install both 8.4 and 9.6 on the new server (can I even install 8.4 on RHEL 6.9?), rsync the live database across, set up log shipping, and then, when it's time to cut over, do an in-place pg_upgrade?

(Because this is a batch system, we can apply the data input files to bring the new database up to "equality" with the 8.4 production system.)

Thanks

--
Angular momentum makes the world go 'round.
On 04/16/2018 04:58 PM, Ron wrote:
> We're upgrading from v8.4 to 9.6 on a new VM in a different DC. The
> dump file will be more than 1TB, and there's not enough disk space on
> the current system for the dump file.
>
> Thus, how can I send the pg_dump file directly to the new server while
> the pg_dump command is running? NFS is one method, but are there others
> (netcat, rsync)? Since it's within the same company, encryption is not
> required.

Maybe?:

pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'

--
Adrian Klaver
adrian.klaver@aklaver.com
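[Filled out with placeholder names, an untested sketch of the approach above:

pg_dump -Fc -d proddb -U postgres | ssh postgres@newhost 'cat > /data/proddb.dump'

Custom format (-Fc) compresses by default, so the bytes crossing the wire are already compressed, and the resulting file can be restored on the new server with pg_restore.]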
On Mon, Apr 16, 2018 at 6:58 PM, Ron <ronljohnsonjr@gmail.com> wrote:
> We're upgrading from v8.4 to 9.6 on a new VM in a different DC. The dump
> file will be more than 1TB, and there's not enough disk space on the
> current system for the dump file.
>
> Thus, how can I send the pg_dump file directly to the new server while
> the pg_dump command is running? NFS is one method, but are there others
> (netcat, rsync)? Since it's within the same company, encryption is not
> required.

Can you run pg_dump on the new server, connecting remotely to the current one?

--
Mike Nolan
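[That is, run the dump from the 9.6 box and let the output land on its local disk; names here are placeholders:

pg_dump -h oldhost -U postgres -Fc -f /data/proddb.dump proddb

Using the newer 9.6 pg_dump against the 8.4 server is also the recommended direction: pg_dump can dump from servers older than itself, and the newer binary produces output the newer server restores cleanly.]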
> -----Original Message-----
> From: Ron <ronljohnsonjr@gmail.com>
> Sent: Tuesday, April 17, 2018 7:59 AM
> To: pgsql-general <pgsql-general@postgresql.org>
> Subject: pg_dump to a remote server
>
> Thus, how can I send the pg_dump file directly to the new server while the
> pg_dump command is running? NFS is one method, but are there others
> (netcat, rsync)? Since it's within the same company, encryption is not
> required.

Hi

https://www.postgresql.org/docs/current/static/backup-dump.html#BACKUP-DUMP-RESTORE

...

The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example:

pg_dump -h host1 dbname | psql -h host2 dbname

--
Jack Gao
jackgo73@outlook.com
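[In full, assuming the target database doesn't exist yet (names are placeholders):

createdb -h host2 dbname
pg_dump -h host1 dbname | psql -h host2 dbname]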
On 04/16/2018 07:47 PM, Gao Jack wrote:
> The ability of pg_dump and psql to write to or read from pipes makes it
> possible to dump a database directly from one server to another, for example:
>
> pg_dump -h host1 dbname | psql -h host2 dbname

But that assumes --format=plain, which will send a whole lot of uncompressed text across the wire.

--
Angular momentum makes the world go 'round.
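[If plain format were used anyway, the text could at least be compressed in flight — an untested sketch with placeholder names:

pg_dump proddb | gzip | ssh postgres@newhost 'cat > /data/proddb.sql.gz']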
On 04/16/2018 07:18 PM, Adrian Klaver wrote:
> Maybe?:
>
> pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'

That looks promising. I could then "pg_restore -jX".

--
Angular momentum makes the world go 'round.
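[For example, once a custom-format dump file is on the new server (the job count is illustrative):

pg_restore -j 8 -d proddb /data/proddb.dump]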
On 04/16/2018 06:43 PM, Ron wrote:
> That looks promising. I could then "pg_restore -jX".

More promising would be the suggestion from Michael Nolan:

https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com

"Can you run pg_dump on the new server, connecting remotely to the current one?"

It eliminates two programs (ssh and cat) and a pipe.

--
Adrian Klaver
adrian.klaver@aklaver.com
## Ron (ronljohnsonjr@gmail.com):
> > pg_dump -h host1 dbname | psql -h host2 dbname
>
> But that assumes --format=plain which will send a whole lot of
> uncompressed text across the wire.

You can also use pg_restore with standard input, i.e. pg_dump | pg_restore.

Regards,
Christoph

--
Spare Space.
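[For example, with placeholder hostnames:

pg_dump -Fc -h host1 dbname | pg_restore -h host2 -d dbname

One caveat: parallel restore (-j) needs a seekable archive file, so it can't be combined with reading the dump from a pipe.]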
from the pg_dump docs...
...
-Z 0..9
--compress=0..9
Specify the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress.
...
so perhaps running (on host2):
pg_dump -h host1 -Z 9 dbname | zcat | psql -h host2 dbname
will generate a compressed text output on host1, which is sent over the wire to host2 where it is locally uncompressed & fed into psql...
Brent Wood
Programme leader: Environmental Information Delivery
NIWA
DDI: +64 (4) 3860529
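[One wrinkle with the command above: pg_dump is running on host2, so the -Z 9 compression happens on host2, after the data has already crossed the wire uncompressed. To compress on host1 itself, the dump would have to run there — an untested sketch, assuming ssh access to host1, run from host2:

ssh user@host1 'pg_dump -Z 9 dbname' | zcat | psql dbname

so that only compressed text travels between the hosts.]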
On 04/16/2018 11:07 PM, Adrian Klaver wrote:
> More promising would be the suggestion from Michael Nolan:
>
> https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com
>
> "Can you run pg_dump on the new server, connecting remotely to the
> current one?"
>
> It eliminates two programs (ssh and cat) and a pipe.

Is that supported?

--
Angular momentum makes the world go 'round.
Hi Ron,

I have some pg_dump test results, for reference only 😊

[ENV]
Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz | SSD 120GB | 8GB memory
PostgreSQL 9.6.8

[DATA]
My database has 7.2GB of random data:

postgres=# select pg_size_pretty(pg_database_size('postgres'));
 pg_size_pretty
----------------
 7201 MB
(1 row)

[Test Results]

 command                                   | export_time | output_size
-------------------------------------------+-------------+-------------
 pg_dump postgres > outfile.sql            | 16m23s      | 6.3 GB
 pg_dump postgres | gzip > outfile.gz      | 5m27s       | 2.4 GB
 pg_dump -Fc postgres > outfile.dump       | 5m33s       | 2.4 GB
 pg_dump -Fc -Z 9 postgres > outfile.dump  | 11m59s      | 2.4 GB
 pg_dump -Ft postgres > outfile.dump       | 2m43s       | 6.3 GB
 pg_dump -Fd postgres -f dumpdir           | 5m17s       | 2.4 GB
 pg_dump -Fd -j 4 postgres -f dumpdir      | 2m50s       | 2.4 GB
(7 rows)

The smaller the amount of data transmitted over the network, the better. You could try a compressed export method like gzip, -Fc, or -Fd (with -j 4 for speed).

--
Jack Gao
jackgo73@outlook.com
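[The directory format can be combined with Michael Nolan's suggestion — run on the new server so the output lands on its disk (names are placeholders):

pg_dump -h oldhost -Fd -j 4 -f /data/dumpdir proddb

Caveat: parallel dump relies on synchronized snapshots, which need a 9.2+ server; against the 8.4 source it would require --no-synchronized-snapshots, which is only safe if nothing is writing to the database during the dump.]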
On 04/17/2018 12:35 AM, Ron wrote:
> On 04/16/2018 11:07 PM, Adrian Klaver wrote:
>> "Can you run pg_dump on the new server, connecting remotely to the
>> current one?"
>>
>> It eliminates two programs (ssh and cat) and a pipe.
>
> Is that supported?

Sure, as long as pg_dump on the new server can reach the host given with -h, and the pg_hba.conf on the current server is set up to allow connections from the remote client. To test, do something like:

new_server> pg_dump -t some_table -s -h current_server -f test_file.sql

--
Adrian Klaver
adrian.klaver@aklaver.com
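[The pg_hba.conf entry on the current server would look something like this — the address and auth method are placeholders for your environment:

host    all    all    10.0.0.5/32    md5

along with listen_addresses set in postgresql.conf so the 8.4 server accepts TCP connections from other hosts (a change to listen_addresses needs a restart; pg_hba.conf only needs a reload).]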