pg_base_backup limit bandwidth possible?

From: Edson Carlos Ericksson Richter
Hi!

I could not find it in the docs: is there any way to limit
pg_base_backup's bandwidth usage?

Thanks,

Edson


Re: pg_base_backup limit bandwidth possible?

From: Andy Colson
On 12/29/2014 10:08 AM, Edson Carlos Ericksson Richter wrote:
> Hi!
>
> I could not find in docs, is there any way to limit pg_base_backup
> bandwidth usage?
>
> Thanks,
>
> Edson
>
>

There is not.  You can, however, run the base backup on the server side
and use ssh/rsync/etc. with rate limits to copy it to the slave.

With a big database and a spotty connection, I find that's the best
option anyway.  I'm assured the base backup succeeds, and I can just
keep re-running rsync until the copy completes.
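A minimal sketch of that server-side approach (the hostname and paths below are placeholders, not from the thread): take the base backup into a local directory on the master first, then pull it from the standby using rsync's own --bwlimit throttle, re-running until the transfer finishes:

```shell
# On the master: write a tar-format base backup to local disk first,
# so the backup itself cannot be interrupted by a flaky network link.
pg_basebackup -U postgres -D /var/backups/base -F t -v -P

# On the standby: fetch it throttled to ~5000 KB/s (rsync's --bwlimit
# is in kilobytes per second).  --partial keeps partially transferred
# files, so simply re-running this command resumes the copy.
rsync -az --partial --progress --bwlimit=5000 \
    postgres@master.example.com:/var/backups/base/ \
    /var/lib/pgsql/base-backup/
```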

-Andy


Re: pg_base_backup limit bandwidth possible?

From: Alvaro Herrera
Andy Colson wrote:
> On 12/29/2014 10:08 AM, Edson Carlos Ericksson Richter wrote:
> >Hi!
> >
> >I could not find in docs, is there any way to limit pg_base_backup
> >bandwidth usage?
>
> There is not.  You can however run the base backup on the server side and
> use ssh/rsync/etc to copy w/limits to the slave.

FWIW in 9.4 you can use pg_basebackup --max-rate.
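For reference, a 9.4 invocation might look like this (host, user, and data directory are placeholders); --max-rate takes a value in kilobytes per second, or with an M suffix for megabytes per second:

```shell
# Stream the base backup directly to the standby, capped at ~10 MB/s.
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/pgsql/9.4/data --max-rate=10M -v -P
```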

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: pg_base_backup limit bandwidth possible?

From: Matthew Kelly
The way I’ve solved the problem before 9.4 is with a command called 'pv' (pipe viewer).  Normally this command is used to watch the rate of data flowing through a pipe, but it also has a rate-limiting capability.  The trick for me was running the output of pg_basebackup through pv (which emulates having a slow disk) without needing double the storage when building a new slave.

First, run 'pg_basebackup' writing tar-format output to standard out.  Then pipe that to 'pv' to quietly do the rate limiting.  Then pipe that to 'tar' to lay it out in a directory format.  tar would dump everything into the current directory, but --transform gives the effect of having selected a target directory in the initial command.

The finished product looks something like:
pg_basebackup -U postgres -D - -F t -x -vP | pv -q --rate-limit 100m | tar -xf - --transform='s`^`./pgsql-data-backup/`'



Re: pg_base_backup limit bandwidth possible?

From: Edson Carlos Ericksson Richter
In the end, I've chosen to use the following:

trickle -u 500 -d 500 rsync --progress --partial -az \
    --exclude postmaster.pid --exclude postgresql.conf \
    --exclude pg_hba.conf --exclude pg_log \
    ${PGDATA}/* root@xxx.bbbbbb.com:/var/lib/pgsql/repl-9.3/data/

and it worked really well.  This way I've limited bandwidth consumption to 10 Mbps.


Kind regards,

Edson Richter


On 02-01-2015 19:28, Matthew Kelly wrote:
> [snip]