Thread: PG_Base Backup takes 8 to 9 Hrs
Hi Team,
We are using PostgreSQL 11 and the current database size is 450 GB.

WAL archiving is enabled on the database, and a full pg_basebackup takes 8 to 9 hours to complete.

Please suggest how to reduce the full DB backup time.

OS version: RHEL 6
DB size: 450 GB
PostgreSQL version: 11
Regards,
Ram Pratap.
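For reference, a full base backup of this kind is typically taken with a command along these lines; the host, user, and target directory here are illustrative, not taken from the thread:

    pg_basebackup -h db-host -U backup_user -D /backup/base \
        -F tar -z -X stream -P

Here -F tar with -z produces gzip-compressed tar output, -X stream includes the WAL needed for a consistent restore, and -P reports progress.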
Hi,

Please don't try to recall messages sent on a mailing list. That won't work and will spam everyone.

On Tue, Feb 08, 2022 at 11:36:19AM +0000, Ram Pratap Maurya wrote:
> A full pg_basebackup takes 8 to 9 hours to complete.
> Please suggest how to reduce the full DB backup time.

That's ~16 MB/s, many times slower than a single old 7200 RPM drive. The problem is likely not with Postgres; you should get better storage or a better network.
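For reference, the ~16 MB/s figure follows directly from the numbers given:

    450 GB ≈ 450 * 1024 MB = 460,800 MB
    460,800 MB / (8 * 3600 s) ≈ 16 MB/s
    460,800 MB / (9 * 3600 s) ≈ 14 MB/s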
Use pgbackrest instead.

Sent from my iPhone

> On Feb 8, 2022, at 8:02 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:
> [...]
On 2/8/22 06:36, Ram Pratap Maurya wrote:
> A full pg_basebackup takes 8 to 9 hours to complete.
> Please suggest how to reduce the full DB backup time.
Use storage snapshots, faster disks, faster Ethernet adapters, and backup software that supports deduplication.

You are asking the list to architect your backup solution. For that, some more information is needed: what kind of machine you are using, what OS, how much memory, what kind of storage your cluster is running on, what kind of storage you are backing it up to, how the backup storage is connected (FC-AL, Ethernet, local SATA drives), whether there is any degree of parallelism, and things of that nature.

In the absence of that information, I can only recommend attempting the --run-faster switch to pg_basebackup. You may also try using the Force.
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
Please use a third-party backup solution that provides faster and more flexible backup alternatives than the built-in pg_basebackup, which is not useful for large databases.

pgbackrest (full, differential, and incremental; see the sketch below this message)
barman (full and incremental)
Regards,
Michael Vitale
Mladen Gogala wrote on 2/9/2022 7:42 PM:
> Use storage snapshots, faster disks, faster Ethernet adapters, and backup
> software that supports deduplication. [...]
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
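For context, a minimal pgbackrest setup might look like the sketch below; the stanza name, paths, retention, and degree of parallelism are illustrative and must be adapted:

    # /etc/pgbackrest/pgbackrest.conf
    [global]
    # where backup sets are stored
    repo1-path=/var/lib/pgbackrest
    # keep two full backup sets
    repo1-retention-full=2
    # parallel compression/transfer workers
    process-max=8

    [main]
    # the cluster data directory
    pg1-path=/var/lib/pgsql/11/data

    # one-time stanza initialization
    pgbackrest --stanza=main stanza-create
    # full, then incremental backups
    pgbackrest --stanza=main --type=full backup
    pgbackrest --stanza=main --type=incr backup

process-max is usually what cuts the wall-clock time the most, since pg_basebackup runs as a single stream.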
Just keep in mind that logical dumps and loads do not support Point In Time Recovery (PITR).

Wells Oliver wrote on 2/10/2022 9:36 AM:
> Can you just try good ole pg_dump with --format=directory and --jobs=X
> where X is some decent number, e.g. 16? We just packed up a 700GB DB
> using 16 jobs and it took maybe 80 minutes.
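In concrete terms, the suggestion above would be something like the following; the database name and dump path are illustrative:

    # parallel directory-format dump with 16 worker jobs
    pg_dump --format=directory --jobs=16 --file=/backup/mydb.dir mydb

    # parallel restore from the same directory
    pg_restore --jobs=16 --dbname=mydb /backup/mydb.dir

As noted, this is a logical dump: it restores only to the moment the dump started, with no PITR in between.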
I can wholeheartedly recommend the Commvault backup suite. It can seamlessly utilize storage snapshots and has built-in parallelism, deduplication, and compression.

Regards,

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
Hi All,

Lately I'm getting this error when opening base.tar. Any idea why pg_basebackup created a bad tar file, and how to fix it?

Thanks!
Avihai

tar xOf base.tar > /dev/null
tar: Unexpected EOF in archive
tar: rmtlseek not stopped at a record boundary
tar: Error is not recoverable: exiting now

On Thu, Feb 10, 2022 at 5:16 PM MichaelDBA <MichaelDBA@sqlexec.com> wrote:
> Just keep in mind that logical dumps and loads do not support Point In
> Time Recovery (PITR).
> [...]
From: Avihai Shoham <avihai.shoham@gmail.com>
Sent: Thursday, February 10, 2022 18:08
To: MichaelDBA <MichaelDBA@sqlexec.com>
Cc: Wells Oliver <wells.oliver@gmail.com>; Mladen Gogala <gogala.mladen@gmail.com>; pgsql-admin@lists.postgresql.org
Subject: Re: PG_Base Backup takes 8 to 9 Hrs
> Hi All,
>
> Lately I'm getting this error when opening base.tar. Any idea why
> pg_basebackup created a bad tar file, and how to fix it?
>
> tar xOf base.tar > /dev/null
> tar: Unexpected EOF in archive
> [...]
Hi Avihai,

try

--ignore-failed-read

and/or

--ignore-zeros

best,
Anton
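Applied to the failing command above, --ignore-zeros would be used like this (note that --ignore-failed-read takes effect when creating an archive, not when extracting one):

    # treat zeroed blocks as padding instead of end-of-archive
    tar --ignore-zeros -xOf base.tar > /dev/null

Whether this recovers anything depends on how the archive was damaged; a truncated tar usually means the backup itself is incomplete and should be retaken.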
> On Feb 11, 2022, at 12:22 AM, Dischner, Anton <Anton.Dischner@med.uni-muenchen.de> wrote:
>
> lately i'm getting this error when open base.tar.
> Any idea why pg_basebackup created bad tar file? and how to fix it?

So... when asked not to hijack threads, your response is to apologize, wait a few hours, then do it again???

Please send your own question with a subject that matches.