Thread: Postgres Point in time Recovery (PITR),
Hello All,
Can you please share some ideas and scenarios for how we can do PITR in case of a disaster?
Thanks,
On 18 October 2019 07:59:21 CEST, Daulat Ram <Daulat.Ram@exponential.com> wrote:
> Can you please share some ideas and scenarios how we can do the PITR in case of disaster.

Consider Barman.

--
2ndQuadrant - The PostgreSQL Support Company
On Fri, Oct 18, 2019 at 7:59 AM Daulat Ram <Daulat.Ram@exponential.com> wrote:
> Can you please share some ideas and scenarios how we can do the PITR in case of disaster.
In order to be able to do PITR you need:
- a base backup of your database
- a continuous archive of the WAL segments generated since that backup
See <https://www.postgresql.org/docs/12/continuous-archiving.html>.
Then, to keep it simple, if the disaster happens in the time window
between the backup and the last archived WAL, you can restore an
instance to any point in time prior to the disaster itself.
Luca
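Putting Luca's two requirements in concrete terms, a minimal sketch of the archiving side might look like this (the archive path is an illustrative assumption, not taken from the thread):

```ini
# postgresql.conf -- ship every completed WAL segment to an archive
# (the /mnt/wal_archive path is an illustrative assumption)
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```

A base backup can then be taken with, for example, `pg_basebackup -D /mnt/base_backup -Fp -Xs -P`; with that backup plus the archived WALs, recovery can target any moment between the backup and the last archived segment.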
We use barman (https://www.pgbarman.org/) for continuous streaming backup and I had to restore from it once, and it went like this:

Processing xlog segments from streaming for db01_11
00000001000001740000002E
00000001000001740000002F
000000010000017400000030
Starting remote restore for server db01_11 using backup 20181130T190002
Destination directory: /home/andreak/barman-restore
Doing PITR. Recovery target time: '2018-12-06 12:20:00+01:00'
17445, dbname1, /storage/fast_ssd/11/tablespaces/dbname1
29218, dbname2, /storage/fast_ssd/11/tablespaces/dbname2
...
Copying the base backup.
Copying required WAL segments.
Generating recovery.conf
Identify dangerous settings in destination directory.
WARNING
The following configuration files have not been saved during backup, hence they have not been restored.
You need to manually restore them in order to start the recovered PostgreSQL instance:
postgresql.conf
pg_hba.conf
pg_ident.conf
Recovery completed (start time: 2018-12-06 13:14:53.220043, elapsed time: 4 hours, 52 minutes, 47 seconds)
Your PostgreSQL server has been successfully prepared for recovery!
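The barman invocation that produced the log above was not included in the message; based on the server name, backup ID, destination and target time shown in it, the command would have been along these lines (the `--remote-ssh-command` value is a guess, since the log says "remote restore"):

```shell
# Hypothetical reconstruction -- server name, backup ID, destination and
# target time come from the log above; the ssh target is an assumption.
barman recover \
    --remote-ssh-command "ssh postgres@target-host" \
    --target-time "2018-12-06 12:20:00+01:00" \
    db01_11 20181130T190002 /home/andreak/barman-restore
```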
On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh <andreas@visena.com> wrote:
> We use barman (https://www.pgbarman.org/) for continuous streaming backup and I had to restore from it once, and it went like this:

Just for the record, here's an example of a restore with pgbackrest:

% sudo -u postgres pgbackrest --stanza=miguel \
    --log-level-console=info --delta restore
...
INFO: restore backup set 20190916-125652F
INFO: remove invalid files/paths/links from /postgres/pgdata/11
INFO: cleanup removed 148 files, 3 paths
...
INFO: write /postgres/pgdata/11/recovery.conf
INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
INFO: restore command end: completed successfully (5113ms)
On 10/18/19 11:29 AM, Luca Ferrari wrote:
> On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> <andreas@visena.com> wrote:
>> We use barman (https://www.pgbarman.org/) for continuous streaming backup and I had to restore from it once, and it went like this:
>
> Just for the records, here's an example of restore with pgbackrest:
>
> % sudo -u postgres pgbackrest --stanza=miguel \
> --log-level-console=info --delta restore
> ...
> INFO: restore backup set 20190916-125652F
> INFO: remove invalid files/paths/links from /postgres/pgdata/11
> INFO: cleanup removed 148 files, 3 paths
> ...
> INFO: write /postgres/pgdata/11/recovery.conf
> INFO: restore global/pg_control (performed last
> to ensure aborted restores cannot be started)
> INFO: restore command end: completed successfully (5113ms)
pgBackRest also has a tutorial on PITR:
https://pgbackrest.org/user-guide.html#pitr
--
-David
david@pgmasters.net
Hi All,
Thanks for your suggestions.
One more question is: how are backups useful if we have streaming replication? As I understand it, we can promote the standby to primary in case of a disaster on the primary side. Do we still need to schedule backups if we have streaming replication?
Thanks
From: Avinash Kumar <avinash.vallarapu@gmail.com>
Sent: Friday, October 18, 2019 5:28 PM
To: David Steele <david@pgmasters.net>
Cc: Luca Ferrari <fluca1978@gmail.com>; Andreas Joseph Krogh <andreas@visena.com>; Daulat Ram <Daulat.Ram@exponential.com>; pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),
Hi Daulat,
PITR entirely depends on what type of backups you choose.
Sometimes, to reduce the amount of downtime involved in restoring and recovering a backup, you may also use an additional delayed standby.
You could use the built-in PostgreSQL feature to delay replication and fast-forward it to the safest point to achieve PITR, but this requires you to have an additional standby.
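The built-in delay feature referred to here is `recovery_min_apply_delay`. On a PostgreSQL 11 standby (the version discussed in this thread), a sketch holding replay one day behind the primary would be (connection details are illustrative):

```ini
# recovery.conf on the delayed standby (PostgreSQL <= 11; from v12 these
# settings move to postgresql.conf plus a standby.signal file)
standby_mode = 'on'
primary_conninfo = 'host=primary-host user=replicator'
recovery_min_apply_delay = '1d'
```

After an accident on the primary, replay on this standby can be stopped or fast-forwarded to just before the bad change, then the standby promoted.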
If you have several TBs of database, pgBackRest is of course a good way to go for backups (there are a few more open-source solutions), but also consider the amount of time it takes for recovery. Keeping all of this in mind, your approach to PITR changes.
So I would ask you this question: what backup tool do you use, and what is your backup strategy? Are you taking a physical backup and performing continuous archiving of WALs? The answer to your question depends entirely on this. :)
Regards,
Avinash Vallarapu.
--
9000799060
On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram <Daulat.Ram@exponential.com> wrote:
> One more questions is, how backups are useful if we have streaming replication. As I know, we can promote the standby as primary in case of disaster at primary side. Do we need to schedule backups if we have streaming replication?

Let's speculate a little on that: do you need backups if you have a
RAID-1 configuration?
Replication helps you reduce almost to zero the time needed to handle a
disaster; backups allow you to recover over a much larger time window.

Luca
On 21/10/2019 09:52, Luca Ferrari wrote:
> On Sat, Oct 19, 2019 at 7:46 PM Daulat Ram <Daulat.Ram@exponential.com> wrote:
>> One more questions is, how backups are useful if we have streaming replication . As I know, we can promote the standby as primary in case of disaster at primary side. Do we need to schedule backups if we have streaming replication?
>
> Let's speculate a little on that: do you need backups if you have a
> RAID-1 configuration?
> Replication helps you reduce almost to zero the time to handle a
> disaster, backups allow you to recover in a more large time window.
>
> Luca
>
>
TBH I hear this argument more often than I wish.
Offline backups and data replication are nearly entirely orthogonal.
Any form of instantaneous redundancy (RAID, instantaneous replication
and so on) is primarily a mitigation measure to protect data and
availability against loss of infrastructure.
Backups (preferably with PITR) also do that, but that's not their
primary purpose unless you can't afford live redundancy on top of them.
Offline backups address many failure scenarios that any form of live
replication is defenseless against (e.g. logical damage to the data as a
result of human error/bugs/vandalism would hit all your replicas, but
you can always perform DR from a backup).
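The logical-damage scenario is exactly the case where a recovery target helps: restore the base backup, then replay WAL only up to a point just before the damaging statement. On the PostgreSQL 11 versions discussed in this thread, the recovery.conf fragment would look roughly like this (timestamp and archive path are illustrative):

```ini
# recovery.conf in the restored data directory (PostgreSQL <= 11; from
# v12 these settings move to postgresql.conf plus a recovery.signal file)
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2019-10-18 09:00:00+00'  # illustrative: just before the bad change
recovery_target_action = 'promote'
```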
Delayed replicas, or any online rollback capability (such as pg_rewind
off the server's own pg_wal or Oracle's flashback), somewhat live in a
grey area in between, and their effectiveness varies depending on which
level was compromised.
--
Regards
Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology
--
Notice:
This email is confidential and may contain copyright material of
members of the Ocado Group. Opinions and views expressed in this message
may not necessarily reflect the opinions and views of the members of the
Ocado Group.
If you are not the intended recipient, please notify us
immediately and delete all copies of this message. Please note that it is
your responsibility to scan this message for viruses.
References to the
"Ocado Group" are to Ocado Group plc (registered in England and Wales with
number 7098618) and its subsidiary undertakings (as that expression is
defined in the Companies Act 2006) from time to time. The registered office
of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way,
Hatfield, Hertfordshire, AL10 9UL.
On 21/10/2019 12:10, Avinash Kumar wrote:
> On Mon, Oct 21, 2019 at 4:19 PM Fabio Ugo Venchiarutti <f.venchiarutti@ocado.com> wrote:
>> Offline backups address many failure scenarios that any form of live
>> replication is defenseless against (eg: logical damage to the data as a
>> result of human errors/bugs/vandalism would hit all your replicas, but
>> you always can perform DR from a backup).
>
> I think we are way ahead of the days where we need to live on Offline
> backups involving downtimes.
> Today, we have several Online Consistent Backup solutions such as
> pg_basebackup (built-in with PG), pgBackRest, BARMAN, WAL-G etc.
> Online Consistent Backups + Archiving of WALs are perfectly reliable.
> We need to ensure that we have safe backup locations, for example, push
> them to AWS S3 and forget about redundancy.
> Why do you think only Offline Backups are reliable today?
>
>> Delayed replicas, or any online rollback capability (such as pg_rewind
>> off the server's own pg_wal or Oracle's flashback), somewhat live in a
>> grey area in between, and their effectiveness varies depending on which
>> level was compromised.
>
> What if you delay your replica by a day before making a change
> tomorrow (that may need a rollback)? A delayed replica is for emergencies
> IMO, and of course does not satisfy every possible scenario.

Trade-offs come down to individual implementers, so fair enough.

Our biases must come from different use cases/experience: a good fraction
of the recovery/sanitisation operations I had to carry out were very
selective/application-specific and involved a buildup of anomalies over
time, sometimes months. They wouldn't have been possible without an old
frozen reference for users to compare with, and that had to come from
very cold storage.

--
Regards
Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology
On Mon, 2019-10-21 at 16:40 +0530, Avinash Kumar wrote:
> We need to ensure that we have safe backup locations, for example, push
> them to AWS S3 and forget about redundancy.
> Why do you think only Offline Backups are reliable today?

There have been examples of hackers gaining control of an organization's
servers or cloud accounts and not only destroying their online systems
but also methodically deleting all their backups.

There are fewer things that can go catastrophically wrong if one has
actual offline backups. You have to be a lot more careful about
protecting anything attached to the Internet.
On 10/21/19 8:10 AM, Avinash Kumar wrote:
> I do not agree with this. If a hacker is gaining control of your
> organizational servers to destroy your Online backups, can't he destroy
> the offline backups and your database?

Well, to me off-line means you have introduced an air gap between your
on-line presence and your off-line backups. This would prevent an
intruder from accessing the off-line backups.

> This is not a right justification to encouraging Offline Backups over
> Online Backups.
> If you are worried about storing your online backups through internet on
> cloud (i do not agree as you can still secure your data on cloud), store
> it on a server in your Organizational network and do not push them
> through internet.
> Taking Offline Backups is not the only right way to ensure Reliable
> Backups.
> We are way ahead of the days where you need to face downtime to take
> backups.
> Online Backups are reliable in PostgreSQL.

--
Adrian Klaver
adrian.klaver@aklaver.com
On Mon, 2019-10-21 at 20:40 +0530, Avinash Kumar wrote:
> Taking Offline Backups is not the only right way to ensure Reliable Backups.
> We are way ahead of the days where you need to face downtime to take backups.
> Online Backups are reliable in PostgreSQL.

I apologize, I think we have a difference in terminology here. Obviously
you don't need to take PostgreSQL out of service to take a backup. I don't
know that you ever did; even pg_dump has always worked fine while the
database is available.

When I say offline backup I mean a backup that is stored in a way that it
cannot be accessed via the Internet.
Hi

> On 22/10/2019, at 4:14 AM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>
> Well to me off-line means you have introduced an air gap between your
> on-line presence and your off-line backups. This would prevent an
> intruder from accessing the off-line backups.

The only way to ensure you have a secondary protocol is to use some type
of pull approach, where the backup system pulls from the online system.

The only way to be safe is having an external company, or passwords
isolated from your organisation. My personal approach is having public
certs installed from the secondary backup system to pull the backups from
the online platforms, and keeping generated passwords in a KeePass
encrypted database isolated from the organisation.

Otherwise the only way is not having the access, or perhaps requiring 2FA
to log in to AWS platforms so you know when someone is trying to log in
to your AWS accounts; Linux servers support 2FA too.

I think offline backups are useful as a complement to the online backups.
My current policy for offline backups is weekly, monthly and yearly, with
a 4-backup retention; only once was a yearly backup used to recover from
a human mistake.

PS: If you are really worried about hackers, perhaps you need to chat
with your security officer to ensure alerts, accountability and 2FA plus
other techniques are implemented in your company. Nothing will stop a
good hacker, and it is probably someone working inside your company;
80% of attacks come from people who work inside the company (Orion
Security Chile shared this figure in 2001), no clue what today's stats
are.

PS2: Don't use passwords like "secret" or "s3cr3t", etc., and don't
forget: security is just a feeling.
Hello,

I am trying to set up barman for my test environment with the following
steps, but I'm not able to start the backup. Please advise!

Postgres 11.2, streaming replication on Docker containers, CentOS.
Server A: pg1 (primary)
Server B: pg2 (standby)

I am using pg2 as the Barman server. Barman 2.8 is installed on pg2.
I also installed barman-cli-2.8-1.el6.noarch.rpm on pg1 (the Postgres
server) for barman-wal-archive.

@pg1
1. Connect to the server and log into the postgres account: sudo -i -u postgres
2. createuser --interactive -P barman
3. createuser barman
4. Edit postgresql.conf to set listen_addresses = '*' plus:
   wal_level = replica
   archive_mode = on
   archive_command = 'barman-wal-archive pg2 pgsql %p'
   and sudo service postgresql restart
5. In pg_hba.conf: host all all pg2/32 trust

@pg2 (Barman server)
1. Switch to user barman and generate the keys: ssh-keygen -t rsa
2. Copy the key to the postgres account on pg1: ssh-copy-id postgres@pg1
3. Barman also requires access to the postgres account on pg2. Copy the
   key into the directory of the postgres user and test the connection:
   ssh-copy-id postgres@localhost
   ssh postgres@localhost -C true
4. Once this is done, log in as the postgres user on pg1 and generate an
   SSH key: ssh-keygen -t rsa
5. Copy the generated key to the list of authorized keys of the user
   barman on pg2: ssh-copy-id barman@pg2
6. Test the connection to the server.

On the Barman server, /etc/barman.d/pgsql.conf:

[test]
; Human readable description
description = "Example of PostgreSQL Database (via SSH)"
ssh_command = ssh postgres@pg1
conninfo = host=pg1 user=barman dbname=postgres
backup_method = rsync
;reuse_backup = link
; Identify the standard behavior for backup operations: possible values are
; exclusive_backup (default), concurrent_backup
; concurrent_backup is the preferred method with PostgreSQL >= 9.6
backup_options = exclusive_backup
; Number of parallel workers to perform file copy during backup and recover
;parallel_jobs = 1
archiver = on
;archiver_batch_size = 50

barman@vipostgres-db-test:/etc/barman.d$ barman check pgsql
WARNING: No backup strategy set for server 'pgsql' (using default 'exclusive_backup').
WARNING: The default backup strategy will change to 'concurrent_backup' in the future. Explicitly set 'backup_options' to silence this warning.
Server pgsql:
        WAL archive: FAILED (please make sure WAL shipping is setup)
        PostgreSQL: FAILED
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 0 backups, expected at least 0)
        ssh: FAILED (Connection failed using 'ssh postgres@vipostgres-cn-test -o BatchMode=yes -o StrictHostKeyChecking=no' return code 255)
        not in recovery: OK
        pg_receivexlog: FAILED
        pg_receivexlog compatible: FAILED (PostgreSQL version: None, pg_receivexlog version: None)
        receive-wal running: FAILED (See the Barman log file for more details)
        archiver errors: OK

Note: if I try to log in manually to postgres from barman, it asks for a
password:

ssh postgres@pg1
Failed to add the host to the list of known hosts (/tmp/.ssh/known_hosts).
Password:

Thanks,
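The `ssh: FAILED ... return code 255` check is the root cause here: Barman requires passwordless key-based SSH, and the manual test is still prompting for a password (the `/tmp/.ssh/known_hosts` message also suggests the test was run with an unexpected HOME). A quick way to verify each hop non-interactively would be something like the following, with hostnames and server name taken from the message above:

```shell
# Run on pg2 as the barman user; BatchMode forbids password prompts,
# so each command only succeeds if key-based auth actually works.
sudo -u barman ssh -o BatchMode=yes postgres@pg1 true && echo "pg1 OK"
sudo -u barman ssh -o BatchMode=yes postgres@localhost true && echo "localhost OK"

# Once SSH works, re-run the checks and force a WAL segment to be
# produced and archived so the "WAL archive" check can pass:
barman check pgsql
barman switch-wal --force --archive pgsql
```

This is a sketch, not a verified fix; the key point is that every Barman SSH check must succeed with `-o BatchMode=yes` before backups can start.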