pg_probackup

pg_probackup — manage backup and recovery of Postgres Pro Enterprise database clusters

Synopsis

pg_probackup version

pg_probackup help [command]

pg_probackup init -B backup_dir [--skip-if-exists]

pg_probackup add-instance -B backup_dir -D data_dir --instance instance_name [--skip-if-exists]

pg_probackup del-instance -B backup_dir --instance instance_name

pg_probackup set-config -B backup_dir --instance instance_name [option...]

pg_probackup set-backup -B backup_dir --instance instance_name -i backup_id [option...]

pg_probackup show-config -B backup_dir --instance instance_name [option...]

pg_probackup show -B backup_dir [option...]

pg_probackup backup -B backup_dir --instance instance_name -b backup_mode [option...]

pg_probackup restore -B backup_dir --instance instance_name [option...]

pg_probackup checkdb -B backup_dir --instance instance_name -D data_dir [option...]

pg_probackup validate -B backup_dir [option...]

pg_probackup merge -B backup_dir --instance instance_name -i backup_id [option...]

pg_probackup delete -B backup_dir --instance instance_name { -i backup_id | --delete-wal | --delete-expired | --merge-expired } [option...]

pg_probackup archive-push -B backup_dir --instance instance_name --wal-file-path wal_file_path --wal-file-name wal_file_name [option...]

pg_probackup archive-get -B backup_dir --instance instance_name --wal-file-path wal_file_path --wal-file-name wal_file_name [option...]

pg_probackup catchup -b catchup_mode --source-pgdata=path_to_pgdata_on_remote_server --destination-pgdata=path_to_local_dir [option...]

Description #

pg_probackup is a utility to manage backup and recovery of Postgres Pro database clusters. It is designed to perform periodic backups of the Postgres Pro instance that enable you to restore the server in case of a failure. pg_probackup supports PostgreSQL 11 or higher. In Postgres Pro Enterprise, pg_probackup provides S3 (Simple Storage Service) support for storing data in private clouds.

Note

pg_probackup provides complete logging of S3 interface operations.

Overview #

As compared to other backup solutions, pg_probackup offers the following benefits that can help you implement different backup strategies and deal with large amounts of data:

  • S3 support for storing data in private clouds using MinIO object storage, Amazon S3 storage, and VK Cloud storage: provided in Postgres Pro Enterprise. Backup data is transferred to and from S3 without saving it in intermediate locations, thus eliminating the need for large temporary storage.

  • Incremental backup: with three different incremental modes, you can plan the backup strategy in accordance with your data flow. Incremental backups allow you to save disk space and speed up backup as compared to taking full backups. It is also faster to restore the cluster by applying incremental backups than by replaying WAL files.

  • Incremental restore: speed up restore from a backup by reusing valid unchanged pages available in PGDATA.

  • CFS (Compressed File System) support for incremental backups in DELTA, PAGE, and PTRACK (the fastest) modes: provided in Postgres Pro Enterprise.

  • Validation: automatic data consistency checks and on-demand backup validation without actual data recovery.

  • Verification: on-demand verification of Postgres Pro instance with the checkdb command.

  • Retention: managing WAL archive and backups in accordance with retention policy. You can configure retention policy based on recovery time or the number of backups to keep, as well as specify time to live (TTL) for a particular backup. Expired backups can be merged or deleted.

  • Parallelization: running backup, restore, merge, delete, validate, and checkdb processes on multiple parallel threads.

  • Compression: storing backup data in a compressed state to save disk space.

  • Deduplication: saving disk space by excluding non-data files (such as _vm or _fsm) from incremental backups if these files have not changed since they were copied into one of the previous backups in this incremental chain.

  • Remote operations: backing up Postgres Pro instance located on a remote system or restoring a backup remotely.

  • Backups from a standby: avoiding extra load on the primary by taking backups from a standby server.

  • External directories: backing up files and directories located outside of the Postgres Pro data directory (PGDATA), such as scripts, configuration files, logs, or SQL dump files.

  • Backup catalog: getting the list of backups and the corresponding meta information in plain text or JSON formats.

  • Archive catalog: getting the list of all WAL timelines and the corresponding meta information in plain text or JSON formats.

  • Partial restore: restoring only the specified databases.

  • Catchup: cloning a Postgres Pro instance for a fallen-behind standby server to catch up with the primary.

To manage backup data, pg_probackup creates a backup catalog. This is a directory that stores all backup files with additional meta information, as well as WAL archives required for point-in-time recovery. You can store backups for different instances in separate subdirectories of a single backup catalog.

Using pg_probackup, you can take full or incremental backups:

  • FULL backups contain all the data files required to restore the database cluster.

  • Incremental backups operate at the page level, storing only the data that has changed since the previous backup. This allows you to save disk space and speed up the backup process as compared to taking full backups. It is also faster to restore the cluster by applying incremental backups than by replaying WAL files. pg_probackup supports the following modes of incremental backups:

    • DELTA backup. In this mode, pg_probackup reads all data files in the data directory and copies only those pages that have changed since the previous backup. This mode can impose read-only I/O pressure equal to that of a full backup.

    • PAGE backup. In this mode, pg_probackup scans all WAL files in the archive from the moment the previous full or incremental backup was taken. Newly created backups contain only the pages that were mentioned in WAL records. This requires all the WAL files since the previous backup to be present in the WAL archive. If the size of these files is comparable to the total size of the database cluster files, speedup is smaller, but the backup still takes less space. You have to configure WAL archiving as explained in Setting up continuous WAL archiving to make PAGE backups.

    • PTRACK backup. In this mode, Postgres Pro tracks page changes on the fly. Continuous archiving is not necessary for it to operate. Each time a relation page is updated, this page is marked in a special PTRACK bitmap. Tracking implies some minor overhead on the database server operation, but speeds up incremental backups significantly.

pg_probackup can take only physical online backups, and online backups require WAL for consistent recovery. So regardless of the chosen backup mode (FULL, PAGE, or DELTA), any backup taken with pg_probackup must use one of the following WAL delivery modes (example commands follow the list):

  • ARCHIVE. Such backups rely on continuous archiving to ensure consistent recovery. This is the default WAL delivery mode.

  • STREAM. Such backups include all the files required to restore the cluster to a consistent state at the time the backup was taken. Regardless of continuous archiving having been set up or not, the WAL segments required for consistent recovery are streamed via replication protocol during backup and included into the backup files. That's why such backups are called autonomous, or standalone.
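
For example, the following commands take the same FULL backup in the two modes (backup_dir and instance_name are placeholders, as in the Synopsis); the first command uses the default ARCHIVE mode, while adding --stream switches to STREAM:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL
pg_probackup backup -B backup_dir --instance=instance_name -b FULL --stream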

Limitations #

pg_probackup currently has the following limitations:

  • The remote mode is not supported on Windows systems.

  • On Unix systems, for Postgres Pro 11, a backup can be made only by the same OS user that has started the Postgres Pro server. For example, if the Postgres Pro server is started by the user postgres, the backup command must also be run by the user postgres. To satisfy this requirement when taking backups in the remote mode using SSH, you must set the --remote-user option to postgres.

  • The Postgres Pro server from which the backup was taken and the restored server must have compatible block_size and wal_block_size parameters and the same major release number. Depending on cluster configuration, Postgres Pro itself may apply additional restrictions, such as CPU architecture or libc/icu versions.

  • Special limitations of pg_probackup in Postgres Pro Enterprise:

Quick Start #

To quickly get started with pg_probackup, complete the steps below. This will set up FULL and DELTA backups in the remote mode and demonstrate some basic pg_probackup operations. In the following, these terms are used:

  • backup — Postgres Pro role used to connect to the Postgres Pro cluster.

  • backupdb — database used to connect to the Postgres Pro cluster.

  • backup_host — host with the backup catalog.

  • backup_user — user on backup_host running all pg_probackup operations.

  • /mnt/backups — directory on backup_host where the backup catalog is stored.

  • postgres_host — host with the Postgres Pro cluster.

  • postgres — user on postgres_host under which Postgres Pro cluster processes are running.

  • /var/lib/pgpro/std-16/data — Postgres Pro data directory on postgres_host.

Steps to perform: #

  1. Install pg_probackup on both backup_host and postgres_host.

  2. Set up an SSH connection from backup_host to postgres_host.

  3. Configure your database cluster for STREAM backups.

  4. Initialize the backup catalog:

    backup_user@backup_host:~$ pg_probackup init -B /mnt/backups
    INFO: Backup catalog '/mnt/backups' successfully initialized
    
  5. Add a backup instance called node to the backup catalog:

    backup_user@backup_host:~$ pg_probackup add-instance \
        -B /mnt/backups \
        -D /var/lib/pgpro/std-16/data \
        --instance=node \
        --remote-host=postgres_host \
        --remote-user=postgres
    INFO: Instance 'node' successfully initialized
    
  6. Make a FULL backup:

    backup_user@backup_host:~$ pg_probackup backup \
        -B /mnt/backups \
        -b FULL \
        --instance=node \
        --stream \
        --compress-algorithm=zstd \
        --remote-host=postgres_host \
        --remote-user=postgres \
        -U backup \
        -d backupdb
    INFO: Backup start, pg_probackup version: 2.7.3, instance: node, backup ID: SBOL6J, backup mode: FULL, wal mode: STREAM, remote: true, compress-algorithm: zstd, compress-level: 1
    WARNING:  pgpro_edition() function is old-style and will be removed in future major release, use pgpro_edition GUC variable instead.
    INFO: This PostgreSQL instance was initialized with data block checksums. Data block corruption will be detected
    INFO: Database backup start
    INFO: wait for pg_backup_start()
    INFO: PGDATA size: 96MB
    INFO: Current Start LSN: 0/8000028, TLI: 1
    INFO: Start transferring data files
    INFO: Data files are transferred, time elapsed: 2s
    INFO: wait for pg_stop_backup()
    INFO: pg_stop_backup() successfully executed
    INFO: stop_stream_lsn 0/9000000 currentpos 0/9000000
    INFO: backup->stop_lsn 0/8004E40
    INFO: Getting the Recovery Time from WAL
    INFO: Syncing backup files to disk
    INFO: Backup files are synced, time elapsed: 1s
    INFO: Validating backup SBOL6J
    INFO: Backup SBOL6J data files are valid
    INFO: Backup SBOL6J resident size: 53MB
    INFO: Backup SBOL6J completed
    
  7. List the backups of the instance:

    backup_user@backup_host:~$ pg_probackup show \
            -B /mnt/backups \
            --instance=node
    =============================================================================================================================================
     Instance  Version  ID      Recovery Time                  Mode  WAL Mode  TLI  Time  Data   WAL  Zalg  Zratio  Start LSN  Stop LSN   Status 
    =============================================================================================================================================
     node      17       SBOL6J  2024-04-09 18:18:21.970314+03  FULL  STREAM    1/0    4s  37MB  16MB  zstd    2.57  0/8000028  0/8004E40  OK     
    
  8. Make an incremental backup in the DELTA mode:

    backup_user@backup_host:~$ pg_probackup backup \
        -B /mnt/backups \
        -b DELTA \
        --instance=node \
        --stream \
        --compress-algorithm=zstd \
        --remote-host=postgres_host \
        --remote-user=postgres \
        -U backup \
        -d backupdb
    INFO: Backup start, pg_probackup version: 2.7.3, instance: node, backup ID: SBOL6N, backup mode: DELTA, wal mode: STREAM, remote: true, compress-algorithm: zstd, compress-level: 1
    WARNING:  pgpro_edition() function is old-style and will be removed in future major release, use pgpro_edition GUC variable instead.
    INFO: This PostgreSQL instance was initialized with data block checksums. Data block corruption will be detected
    INFO: Database backup start
    INFO: wait for pg_backup_start()
    INFO: Parent backup: SBOL6J
    INFO: PGDATA size: 96MB
    INFO: Current Start LSN: 0/9000028, TLI: 1
    INFO: Parent Start LSN: 0/8000028, TLI: 1
    INFO: Start transferring data files
    INFO: Data files are transferred, time elapsed: 1s
    INFO: wait for pg_stop_backup()
    INFO: pg_stop_backup() successfully executed
    INFO: stop_stream_lsn 0/A000000 currentpos 0/A000000
    INFO: backup->stop_lsn 0/9000190
    INFO: Getting the Recovery Time from WAL
    INFO: Syncing backup files to disk
    INFO: Backup files are synced, time elapsed: 0
    INFO: Validating backup SBOL6N
    INFO: Backup SBOL6N data files are valid
    INFO: Backup SBOL6N resident size: 17MB
    INFO: Backup SBOL6N completed
    
  9. Add or modify some parameters in the pg_probackup configuration file, so that you do not have to specify them each time on the command line:

    backup_user@backup_host:~$ pg_probackup set-config \
        -B /mnt/backups \
        --instance=node \
        --remote-host=postgres_host \
        --remote-user=postgres \
        -U backup \
        -d backupdb
    
  10. Check the configuration of the instance:

    backup_user@backup_host:~$ pg_probackup show-config \
        -B /mnt/backups \
        --instance=node
    # Backup instance information
    pgdata = /var/lib/pgpro/std-16/data
    system-identifier = 7355886958826772732
    xlog-seg-size = 16777216
    # Connection parameters
    pgdatabase = backupdb
    pghost = postgres_host
    pguser = backup
    # Archive parameters
    archive-timeout = 5min
    # Logging parameters
    log-level-console = INFO
    log-level-file = OFF
    log-format-console = PLAIN
    log-format-file = PLAIN
    log-filename = pg_probackup.log
    log-rotation-size = 0TB
    log-rotation-age = 0d
    # Retention parameters
    retention-redundancy = 0
    retention-window = 0
    wal-depth = 0
    # Compression parameters
    compress-algorithm = none
    compress-level = 1
    # Remote access parameters
    remote-proto = ssh
    remote-host = postgres_host
    remote-user = postgres
    

    Note that the parameters not modified via set-config retain their default values.

  11. Make another incremental backup in the DELTA mode, omitting the parameters stored in the configuration file earlier:

    backup_user@backup_host:~$ pg_probackup backup \
        -B /mnt/backups \
        -b DELTA \
        --instance=node \
        --stream \
        --compress-algorithm=zstd
    INFO: Backup start, pg_probackup version: 2.7.3, instance: node, backup ID: SBOL6P, backup mode: DELTA, wal mode: STREAM, remote: true, compress-algorithm: zstd, compress-level: 1
    WARNING:  pgpro_edition() function is old-style and will be removed in future major release, use pgpro_edition GUC variable instead.
    INFO: This PostgreSQL instance was initialized with data block checksums. Data block corruption will be detected
    INFO: Database backup start
    INFO: wait for pg_backup_start()
    INFO: Parent backup: SBOL6N
    INFO: PGDATA size: 96MB
    INFO: Current Start LSN: 0/A000028, TLI: 1
    INFO: Parent Start LSN: 0/9000028, TLI: 1
    INFO: Start transferring data files
    INFO: Data files are transferred, time elapsed: 1s
    INFO: wait for pg_stop_backup()
    INFO: pg_stop_backup() successfully executed
    INFO: stop_stream_lsn 0/B000000 currentpos 0/B000000
    INFO: backup->stop_lsn 0/A000190
    INFO: Getting the Recovery Time from WAL
    INFO: Syncing backup files to disk
    INFO: Backup files are synced, time elapsed: 0
    INFO: Validating backup SBOL6P
    INFO: Backup SBOL6P data files are valid
    INFO: Backup SBOL6P resident size: 17MB
    INFO: Backup SBOL6P completed
    
  12. List the backups of the instance again:

    backup_user@backup_host:~$ pg_probackup show \
        -B /mnt/backups \
        --instance=node
    ================================================================================================================================================
     Instance  Version  ID      Recovery Time                  Mode   WAL Mode  TLI  Time    Data   WAL  Zalg  Zratio  Start LSN  Stop LSN   Status 
    ================================================================================================================================================
     node      17       SBOL6P  2024-04-09 18:18:26.630175+03  DELTA  STREAM    1/1    1s  1147kB  16MB  zstd    1.00  0/A000028  0/A000190  OK     
     node      17       SBOL6N  2024-04-09 18:18:25.015713+03  DELTA  STREAM    1/1    2s  1160kB  16MB  zstd    1.04  0/9000028  0/9000190  OK     
     node      17       SBOL6J  2024-04-09 18:18:21.970314+03  FULL   STREAM    1/0    4s    37MB  16MB  zstd    2.57  0/8000028  0/8004E40  OK     
    
  13. Restore the data from the latest available backup to an arbitrary location:

    backup_user@backup_host:~$ pg_probackup restore \
        -B /mnt/backups \
        -D /var/lib/pgpro/std-16/staging-data \
        --instance=node
    INFO: Validating parents for backup SBOL6P
    INFO: Validating backup SBOL6J
    INFO: Backup SBOL6J data files are valid
    INFO: Validating backup SBOL6N
    INFO: Backup SBOL6N data files are valid
    INFO: Validating backup SBOL6P
    INFO: Backup SBOL6P data files are valid
    INFO: Backup SBOL6P WAL segments are valid
    INFO: Backup SBOL6P is valid.
    INFO: Restoring the database from the backup starting at 2024-04-09 18:18:25+03 on localhost
    INFO: Start restoring backup files. PGDATA size: 112MB
    INFO: Backup files are restored. Transferred bytes: 112MB, time elapsed: 2s
    INFO: Restore incremental ratio (less is better): 100% (112MB/112MB)
    INFO: Syncing restored files to disk
    INFO: Restored backup files are synced, time elapsed: 1s
    INFO: Restore of backup SBOL6P completed.
    

Installation and Setup #

Once you have pg_probackup installed, complete the following setup:

  • Initialize the backup catalog.

  • Add a new backup instance to the backup catalog.

  • Configure the database cluster to enable pg_probackup backups.

  • Optionally, configure SSH for running pg_probackup operations in the remote mode.

  • Optionally, configure S3 for running pg_probackup connected to the S3 storage.

Initializing the Backup Catalog #

pg_probackup stores all WAL and backup files in the corresponding subdirectories of the backup catalog.

To initialize the backup catalog, run the following command:

pg_probackup init -B backup_dir

where backup_dir is the path to the backup catalog. If backup_dir already exists, it must be empty; otherwise, pg_probackup returns an error.

The user launching pg_probackup must have full access to the backup_dir directory.

pg_probackup creates the backup_dir backup catalog, with the following subdirectories:

  • wal/ — directory for WAL files.

  • backups/ — directory for backup files.
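
For illustration, assuming the catalog was initialized at /mnt/backups as in the Quick Start, the resulting layout is:

backup_user@backup_host:~$ ls /mnt/backups
backups  wal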

Once the backup catalog is initialized, you can add a new backup instance.

Adding a New Backup Instance #

pg_probackup can store backups for multiple database clusters in a single backup catalog. To set up the required subdirectories, you must add a backup instance to the backup catalog for each database cluster you are going to back up.

To add a new backup instance, run the following command:

pg_probackup add-instance -B backup_dir -D data_dir --instance=instance_name [remote_options]

Where:

  • data_dir is the data directory of the cluster you are going to back up. To set up and use pg_probackup, write access to this directory is required.

  • instance_name is the name of the subdirectories that will store WAL and backup files for this cluster.

  • remote_options are optional parameters that need to be specified only if data_dir is located on a remote system.

pg_probackup creates the instance_name subdirectories under the backups/ and wal/ directories of the backup catalog. The backups/instance_name directory contains the pg_probackup.conf configuration file that controls pg_probackup settings for this backup instance. If you run this command with the remote_options, the specified parameters will be added to pg_probackup.conf.

For details on how to fine-tune pg_probackup configuration, see the section called “Configuring pg_probackup”.

The user launching pg_probackup must have full access to the backup_dir directory and at least read-only access to the data_dir directory. If you specify the path to the backup catalog in the BACKUP_PATH environment variable, you can omit the corresponding option when running pg_probackup commands.
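
For example, assuming the backup catalog from the Quick Start:

export BACKUP_PATH=/mnt/backups
pg_probackup show --instance=node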

Note

For Postgres Pro 11 or higher, it is recommended to use the group access feature, so that backups can be done by any OS user in the same group as the cluster owner. In this case, the user should have read permissions for the cluster directory.
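
A hypothetical sequence for granting such access (it assumes a backup_user account exists on postgres_host, the cluster owner group is postgres, and the data directory from the Quick Start is used; adjust names and paths to your environment):

root@postgres_host:~# usermod -a -G postgres backup_user
root@postgres_host:~# chmod -R g+rX /var/lib/pgpro/std-16/data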

Configuring the Database Cluster #

Although pg_probackup can be used by a superuser, it is recommended to create a separate role with the minimum permissions required for the chosen backup strategy. In these configuration instructions, the backup role is used as an example.

For security reasons, it is recommended to run the configuration SQL queries below in a separate database.

postgres=# CREATE DATABASE backupdb;
postgres=# \c backupdb

To perform a backup, the following permissions for role backup are required only in the database used for connection to the Postgres Pro server.

For Postgres Pro versions 11 — 14:

BEGIN;
CREATE ROLE backup WITH LOGIN;
GRANT USAGE ON SCHEMA pg_catalog TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.current_setting(text) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.set_config(text, text, boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_is_in_recovery() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_start_backup(text, boolean, boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_stop_backup(boolean, boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_create_restore_point(text) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_switch_wal() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_last_wal_replay_lsn() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_current() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_current_snapshot() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_snapshot_xmax(txid_snapshot) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_control_checkpoint() TO backup;
COMMIT;

For Postgres Pro 15 or higher:

BEGIN;
CREATE ROLE backup WITH LOGIN;
GRANT USAGE ON SCHEMA pg_catalog TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.current_setting(text) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.set_config(text, text, boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_is_in_recovery() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_backup_start(text, boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_backup_stop(boolean) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_create_restore_point(text) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_switch_wal() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_last_wal_replay_lsn() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_current() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_current_snapshot() TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.txid_snapshot_xmax(txid_snapshot) TO backup;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_control_checkpoint() TO backup;
COMMIT;

In the pg_hba.conf file, allow connection to the database cluster on behalf of the backup role.
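
For example, a hypothetical pg_hba.conf entry (the client address and authentication method are assumptions; adjust them to your environment):

host    backupdb    backup    backup_host_address/32    scram-sha-256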

Since pg_probackup needs to read cluster files directly, pg_probackup must be started by (or connected to, if used in the remote mode) the OS user that has read access to all files and directories inside the data directory (PGDATA) you are going to back up.

Depending on whether you plan to take standalone or archive backups, Postgres Pro cluster configuration will differ, as specified in the sections below. To back up the database cluster from a standby server, run pg_probackup in the remote mode, or create PTRACK backups, additional setup is required.

For details, see the sections Setting up STREAM Backups, Setting up continuous WAL archiving, Setting up Backup from Standby, Configuring the Remote Mode, Setting up Partial Restore, and Setting up PTRACK Backups.

Setting up STREAM Backups #

To set up the cluster for STREAM backups, complete the following steps:

  • If the backup role does not exist, create it with the REPLICATION privilege when Configuring the Database Cluster:

    CREATE ROLE backup WITH LOGIN REPLICATION;
    
  • If the backup role already exists, grant it the REPLICATION privilege:

    ALTER ROLE backup WITH REPLICATION;
    
  • In the pg_hba.conf file, allow replication on behalf of the backup role.

  • Make sure the parameter max_wal_senders is set high enough to leave at least one session available for the backup process.

  • Set the wal_level parameter to a value higher than minimal (a combined configuration sketch follows this list).
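
The following combined sketch illustrates the last three steps (the values are illustrative; tune max_wal_senders for your workload and adjust the pg_hba.conf address and authentication method):

# postgresql.conf
wal_level = replica
max_wal_senders = 10

# pg_hba.conf
host    replication    backup    backup_host_address/32    scram-sha-256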

If you are planning to take PAGE backups in the STREAM mode or perform PITR with STREAM backups, you still have to configure WAL archiving, as explained in the section Setting up continuous WAL archiving.

Once these steps are complete, you can start taking FULL, PAGE, DELTA, and PTRACK backups in the STREAM WAL mode.

Note

If you are planning to rely on .pgpass for authentication when running backups in STREAM mode, .pgpass must contain credentials for the replication pseudo-database, which is used to establish connections via the replication protocol. Example: pghost:5432:replication:backup_user:my_strong_password

Setting up Continuous WAL Archiving #

Making backups in the PAGE backup mode, performing PITR and making backups with the ARCHIVE WAL delivery mode require continuous WAL archiving to be enabled. To set up continuous archiving in the cluster, complete the following steps:

  • Make sure the wal_level parameter is higher than minimal.

  • If you are configuring archiving on the primary, archive_mode must be set to on or always. To perform archiving on a standby, set this parameter to always.

  • Set the archive_command parameter, as follows:

    archive_command = '"install_dir/pg_probackup" archive-push -B "backup_dir" --instance=instance_name --wal-file-name=%f [remote_options]'
    

where install_dir is the installation directory of the pg_probackup version you are going to use, backup_dir and instance_name refer to the already initialized backup catalog instance for this database cluster, and remote_options only need to be specified to archive WAL on a remote host. For details about all possible archive-push parameters, see the section archive-push.
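
For example, a filled-in setting for the Quick Start environment might look as follows (install_dir is still a placeholder; the remote options are included because the WAL archive resides on backup_host):

archive_command = '"install_dir/pg_probackup" archive-push -B "/mnt/backups" --instance=node --wal-file-name=%f --remote-host=backup_host --remote-user=backup_user'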

Once these steps are complete, you can start making backups in the ARCHIVE WAL mode, backups in the PAGE backup mode, as well as perform PITR.

You can view the current state of the WAL archive using the show command. For details, see the section called “Viewing WAL Archive Information”.

If you are planning to make PAGE backups and/or backups in the ARCHIVE WAL mode from a standby server that generates a small amount of WAL traffic, without waiting long for a WAL segment to fill up, consider setting the archive_timeout Postgres Pro parameter on the primary. The value of this parameter should be slightly lower than the --archive-timeout setting (5 minutes by default), so that there is enough time for the rotated segment to be streamed to the standby and sent to the WAL archive before the backup is aborted because of --archive-timeout.

Note

Instead of using the archive-push command provided by pg_probackup, you can use any other tool to set up continuous archiving as long as it delivers WAL segments into the backup_dir/wal/instance_name directory. If compression is used, it must be gzip, and the .gz suffix in the file name is mandatory.

Note

Instead of configuring continuous archiving by setting the archive_mode and archive_command parameters, you can opt for using the pg_receivewal utility. In this case, the pg_receivewal -D directory option should point to the backup_dir/wal/instance_name directory. pg_probackup supports WAL compression performed by pg_receivewal. A zero data loss archive strategy can be achieved only by using pg_receivewal.
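
For example, a minimal invocation for the Quick Start environment (the connection options are assumptions; see the pg_receivewal documentation for further options such as replication slots and compression):

backup_user@backup_host:~$ pg_receivewal -h postgres_host -U backup -D /mnt/backups/wal/node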

Setting up Backups from a Standby #

pg_probackup can take backups from a standby server. This requires the following additional setup:

Once these steps are complete, you can start taking FULL, PAGE, DELTA, or PTRACK backups with appropriate WAL delivery mode: ARCHIVE or STREAM, from the standby server.

Backups from a standby server have the following limitations:

  • If a standby is promoted to a primary during the backup process, the backup fails.

  • All WAL records required for the backup must contain sufficient full-page writes. This requires you to enable full_page_writes on the primary, and not to use tools like pg_compresslog as archive_command to remove full-page writes from WAL files.

Setting up Cluster Verification #

Logical verification of a database cluster requires the following additional setup. Role backup is used as an example:

  • Install the amcheck or amcheck_next extension in every database of the cluster:

    CREATE EXTENSION amcheck;
    
  • Grant the following permissions to the backup role in every database of the cluster:

GRANT SELECT ON TABLE pg_catalog.pg_am TO backup;
GRANT SELECT ON TABLE pg_catalog.pg_class TO backup;
GRANT SELECT ON TABLE pg_catalog.pg_database TO backup;
GRANT SELECT ON TABLE pg_catalog.pg_namespace TO backup;
GRANT SELECT ON TABLE pg_catalog.pg_extension TO backup;
GRANT EXECUTE ON FUNCTION bt_index_check(regclass) TO backup;
GRANT EXECUTE ON FUNCTION bt_index_check(regclass, bool) TO backup;
GRANT EXECUTE ON FUNCTION bt_index_check(regclass, bool, bool) TO backup;

Setting up Partial Restore #

If you are planning to use partial restore, complete the following additional step:

  • Grant read-only access to pg_catalog.pg_database to the backup role only in the database used for connection to the Postgres Pro server:

    GRANT SELECT ON TABLE pg_catalog.pg_database TO backup;
    

Configuring the Remote Mode #

pg_probackup supports the remote mode that allows you to perform backup, restore, and WAL archiving operations remotely. In this mode, the backup catalog is stored on a local system, while the Postgres Pro instance to back up and/or restore is located on a remote system. Currently, the only supported remote protocol is SSH.

Set up SSH #

If you are going to use pg_probackup in remote mode via SSH, complete the following steps:

  1. Install pg_probackup on both systems: backup_host and postgres_host.

  2. For communication between the hosts, set up a passwordless SSH connection from the backup_user user on backup_host to the postgres user on postgres_host:

    backup_user@backup_host:~$ ssh-copy-id postgres@postgres_host
    

    Where:

    • backup_host is the system with the backup catalog.

    • postgres_host is the system with the Postgres Pro cluster.

    • backup_user is the OS user on backup_host used to run pg_probackup.

    • postgres is the user on postgres_host under which Postgres Pro cluster processes are running. For Postgres Pro 11 or higher, a more secure approach can be used thanks to the group access feature.

  3. If you are going to rely on continuous WAL archiving, set up a passwordless SSH connection between the postgres user on postgres_host and the backup_user user on backup_host:

    postgres@postgres_host:~$ ssh-copy-id backup_user@backup_host
    
  4. Make sure pg_probackup on postgres_host can be located when a connection via SSH is made. For example, for Bash, you can modify PATH in the ~/.bashrc file of the postgres user (above the line that exits the script for non-interactive shells), as sketched below. Alternatively, specify the path to the directory containing the pg_probackup binary on postgres_host via the --remote-path option of pg_probackup commands.
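
    A minimal sketch of such a ~/.bashrc addition (install_dir stands for the directory containing the pg_probackup binary, as in the archive_command example):

    export PATH="install_dir:$PATH"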

pg_probackup in the remote mode via SSH works as follows:

  • Only the following commands can be launched in the remote mode: add-instance, backup, restore, delete, catchup, archive-push, and archive-get.

  • Operating in the remote mode requires the pg_probackup binary to be installed on both the local and remote systems. The versions of the local and remote binaries must be the same.

  • When started in the remote mode, the main pg_probackup process on the local system connects to the remote system via SSH and launches one or more agent processes on the remote system, which are called remote agents. The number of remote agents is equal to the -j/--threads setting.

  • The main pg_probackup process uses remote agents to access remote files and transfer data between local and remote systems.

  • Remote agents try to minimize the network traffic and the number of round-trips between hosts.

  • The main process is usually started on backup_host and connects to postgres_host, but in the case of the archive-push and archive-get commands, the main process is started on postgres_host and connects to backup_host.

  • Once data transfer is complete, remote agents are terminated and SSH connections are closed.

  • If an error condition is encountered by a remote agent, then all agents are terminated and error details are reported by the main pg_probackup process, which exits with an error.

  • Compression is always done on postgres_host, while decompression is always done on backup_host.

Note

You can impose additional restrictions on SSH settings to protect the system in the event of account compromise.

Note

Setting the number of threads (the -j/--threads option) to a value greater than 10 for pg_probackup working in the remote mode via SSH may result in the actual number of SSH connections exceeding the maximum allowed number of simultaneous SSH connections on the remote server and, consequently, lead to the error ERROR: Agent error: kex_exchange_identification: Connection closed by remote host. To correct the error, either reduce the number of pg_probackup threads or adjust the value of the MaxStartups configuration parameter of the remote SSH server. If SSH is set up as an xinetd service on the remote server, adjust the value of the xinetd per_source configuration parameter rather than MaxStartups.

Configuring S3 Connectivity #

pg_probackup supports the S3 interface for storing backups. Backup data is transferred to and from S3 without saving it in intermediate locations, thus eliminating the need for large temporary storage.

An example configuration with a remote agent and a cloud storage (S3) is shown in Figure I.1.

Figure I.1. pg_probackup setup with a remote agent and S3

In this figure, the following logical components are shown:

Backup server

A server where the main process of pg_probackup runs and where local backups are stored.

Database server

A server with a database instance that needs to be backed up or restored.

Remote agent

A secondary pg_probackup process running on the database server. Only applicable to the remote mode.

Cloud storage

A cloud storage for backups.

Set up Access to S3 Storage #

If you are going to use pg_probackup with S3 interface, complete the following steps:

  • Create a bucket with a unique and meaningful name in the S3 storage for your future backups.

  • Create ACCESS_KEY and SECRET_ACCESS_KEY tokens to be used for secure connectivity instead of your username and password.

  • For communication between pg_probackup and S3 server, set values of environment variables corresponding to your S3 server. For example:

    export PG_PROBACKUP_S3_HOST=127.0.0.1
    export PG_PROBACKUP_S3_PORT=9000
    export PG_PROBACKUP_S3_REGION=ru-msk
    export PG_PROBACKUP_S3_BUCKET_NAME=test1
    export PG_PROBACKUP_S3_ACCESS_KEY=admin
    export PG_PROBACKUP_S3_SECRET_ACCESS_KEY=password
    export PG_PROBACKUP_S3_HTTPS=ON
    

    Alternatively, you can provide S3 server settings in the S3 configuration file (see the --s3-config-file option in the section S3 Options for details).

    It makes sense to specify S3 server settings if --s3=minio is used, as described in the section S3 Options.

    The following environment variables can be specified:

    PG_PROBACKUP_S3_HOST

    Address or list of addresses of the S3 server. A list of one or several semicolon-delimited addresses. Do not add a semicolon after the last address in the list. Each address can include the port number, separated by a colon. If the port number is not specified, the value of PG_PROBACKUP_S3_PORT is assumed. Do not add a colon if the port number is not specified.

    For example:

    export PG_PROBACKUP_S3_PORT=80
    export PG_PROBACKUP_S3_HOST="127.0.0.1:9000;10.4.13.56:443;172.17.0.1"
    

    In this example, for the 127.0.0.1 address, the port 9000 is explicitly specified, for 10.4.13.56, the port 443 is specified, while for the 172.17.0.1 address, port 80, specified through PG_PROBACKUP_S3_PORT, will be used.

    If any of the specified addresses becomes unavailable while pg_probackup is in operation, requests to the S3 storage are distributed among the rest of the specified addresses. That is, when several addresses are specified, pg_probackup performs load balancing of S3 requests.

    PG_PROBACKUP_S3_PORT

    The port of the S3 server.

    PG_PROBACKUP_S3_REGION

    The region of the S3 server.

    PG_PROBACKUP_S3_BUCKET_NAME

    The name of the bucket on the S3 server.

    PG_PROBACKUP_S3_ACCESS_KEY
    PG_PROBACKUP_S3_SECRET_ACCESS_KEY

    Secure tokens on the S3 server.

    PG_PROBACKUP_S3_HTTPS

    The protocol to be used. Possible values:

    • ON or HTTPS — use HTTPS

    • Other than ON or HTTPS — use HTTP

    PG_PROBACKUP_S3_BUFFER_SIZE

    The size of the read/write buffer for communicating with S3, in MiB. The default is 16.

    PG_PROBACKUP_S3_RETRIES

    The maximum number of attempts to execute an S3 request in case of failures. The default is 3.

    PG_PROBACKUP_S3_TIMEOUT

    The maximum amount of time to execute an HTTP request to the S3 server, in seconds. The default is 300.

    PG_PROBACKUP_S3_IGNORE_CERT_VER

    Don't verify the certificate host and peer. The default is ON.

    PG_PROBACKUP_S3_CA_CERTIFICATE

    The path to the file with the trusted Certificate Authority (CA) bundle.

    PG_PROBACKUP_S3_CA_PATH

    The directory with trusted CA certificates.

    PG_PROBACKUP_S3_CLIENT_CERT

    The SSL client certificate.

    PG_PROBACKUP_S3_CLIENT_KEY

    The private key file for the TLS/SSL client certificate.

Setting up PTRACK Backups #

Note

PTRACK versions lower than 2.0 are deprecated and not supported. Postgres Pro Standard and Postgres Pro Enterprise versions starting with 11.9.1 contain PTRACK 2.0. Upgrade your server to avoid issues in future backups, and be sure to take fresh backups of your clusters with the upgraded PTRACK, since backups taken with PTRACK 1.x might be corrupt.

If you are going to use PTRACK backups, complete the following additional steps. The role that will perform PTRACK backups (the backup role in the examples below) must have access to all the databases of the cluster.

For Postgres Pro 11 or higher:

  1. Create PTRACK extension:

    CREATE EXTENSION ptrack;
    

  2. To enable tracking page updates, set ptrack.map_size parameter to a positive integer and restart the server.

    For optimal performance, it is recommended to set ptrack.map_size to N / 1024, where N is the size of the Postgres Pro cluster, in MB. If you set this parameter to a lower value, PTRACK is more likely to map several blocks together, which leads to false-positive results when tracking changed blocks and increases the incremental backup size as unchanged blocks can also be copied into the incremental backup. Setting ptrack.map_size to a higher value does not affect PTRACK operation, but it is not recommended to set this parameter to a value higher than 1024.
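
    For example, a hypothetical setting for a cluster of about 64GB (N = 65536MB, so N / 1024 = 64), followed by a server restart:

    ALTER SYSTEM SET ptrack.map_size = 64;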

Note

If you change the ptrack.map_size parameter value, the previously created PTRACK map file is cleared, and tracking newly changed blocks starts from scratch. Thus, you have to retake a full backup before taking incremental PTRACK backups after changing ptrack.map_size.

Usage #

Creating a Backup #

To create a backup, run the following command:

pg_probackup backup -B backup_dir --instance=instance_name -b backup_mode

Where backup_mode can take one of the following values: FULL, DELTA, PAGE, and PTRACK.

When restoring a cluster from an incremental backup, pg_probackup relies on the parent full backup and all the incremental backups between the full backup and the one being restored; together these are called the backup chain. Thus, to perform an incremental backup, the backup catalog must contain the last parent full backup with the OK or DONE status. If the parent full backup has the MERGING or MERGED status, an incremental backup cannot be performed.

For example, if a merge has already been launched for the only available full backup, an attempt to perform an incremental backup ends with the following messages:

WARNING: Valid full backup on current timeline 1 is not found, trying to look up on previous timelines
WARNING: Cannot find valid backup on previous timelines
ERROR: Create new full backup before an incremental one

ARCHIVE Mode #

ARCHIVE is the default WAL delivery mode.

For example, to make a FULL backup in ARCHIVE mode, run:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL

ARCHIVE backups rely on continuous archiving to get WAL segments required to restore the cluster to a consistent state at the time the backup was taken.

When a backup is taken, pg_probackup ensures that WAL files containing WAL records between Start LSN and Stop LSN actually exist in backup_dir/wal/instance_name directory. pg_probackup also ensures that WAL records between Start LSN and Stop LSN can be parsed. This precaution eliminates the risk of silent WAL corruption.

STREAM Mode #

STREAM is an optional WAL delivery mode.

For example, to make a FULL backup in the STREAM mode, add the --stream flag to the command from the previous example:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL --stream --temp-slot

The optional --temp-slot flag ensures that the required segments remain available if the WAL is rotated before the backup is complete.

Unlike backups in ARCHIVE mode, STREAM backups include all the WAL segments required to restore the cluster to a consistent state at the time the backup was taken.

During backup pg_probackup streams WAL files containing WAL records between Start LSN and Stop LSN to backup_dir/backups/instance_name/backup_id/database/pg_wal directory. To eliminate the risk of silent WAL corruption, pg_probackup also checks that WAL records between Start LSN and Stop LSN can be parsed.

Even if you are using continuous archiving, STREAM backups can still be useful in the following cases:

  • STREAM backups can be restored on the server that has no file access to WAL archive.

  • STREAM backups enable you to restore the cluster state at the point in time for which WAL files in archive are no longer available.

  • A backup in STREAM mode can be taken from a standby of a server that generates a small amount of WAL traffic, without waiting long for a WAL segment to fill up.

Page Validation #

If data_checksums are enabled in the database cluster, pg_probackup uses this information to check correctness of data files during backup. While reading each page, pg_probackup checks whether the calculated checksum coincides with the checksum stored in the page header. This guarantees that the Postgres Pro instance and the backup itself have no corrupt pages. Note that pg_probackup reads database files directly from the filesystem, so under heavy write load during backup it can show false-positive checksum mismatches because of partial writes. If a page checksum mismatch occurs, the page is re-read and checksum comparison is repeated.

A page is considered corrupt if checksum comparison has failed more than 300 times. In this case, the backup is aborted.

Even if data checksums are not enabled, pg_probackup always performs sanity checks for page headers.

External Directories #

To back up a directory located outside of the data directory, use the optional --external-dirs parameter that specifies the path to this directory. If you would like to add more than one external directory, you can provide several paths separated by colons on Linux systems or semicolons on Windows systems.

For example, to include /etc/dir1 and /etc/dir2 directories into the full backup of your instance_name instance that will be stored under the backup_dir directory on Linux, run:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL --external-dirs=/etc/dir1:/etc/dir2

Similarly, to include C:\dir1 and C:\dir2 directories into the full backup on Windows, run:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL --external-dirs=C:\dir1;C:\dir2

pg_probackup recursively copies the contents of each external directory into a separate subdirectory in the backup catalog. Since external directories included into different backups do not have to be the same, when you are restoring the cluster from an incremental backup, only those directories that belong to this particular backup will be restored. Any external directories stored in the previous backups will be ignored.

To include the same directories into each backup of your instance, you can specify them in the pg_probackup.conf configuration file using the set-config command with the --external-dirs option.
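
For example, to store the two directories from the Linux example above in the instance configuration:

pg_probackup set-config -B backup_dir --instance=instance_name --external-dirs=/etc/dir1:/etc/dir2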

Performing Cluster Verification #

To verify that Postgres Pro database cluster is not corrupt, run the following command:

pg_probackup checkdb [-B backup_dir [--instance=instance_name]] [-D data_dir] [connection_options]

This command performs physical verification of all data files located in the specified data directory by running page header sanity checks, as well as block-level checksum verification if checksums are enabled. If a corrupt page is detected, checkdb continues cluster verification until all pages in the cluster are validated.

By default, similar page validation is performed automatically while a backup is taken by pg_probackup. The checkdb command enables you to perform such page validation on demand, without taking any backup copies, even if the cluster is not backed up using pg_probackup at all.

To perform cluster verification, pg_probackup needs to connect to the cluster to be verified. In general, it is enough to specify the backup instance of this cluster for pg_probackup to determine the required connection options. However, if -B and --instance options are omitted, you have to provide connection options and data_dir via environment variables or command-line options.

Physical verification cannot detect logical inconsistencies, missing or nullified blocks and entire files, or similar anomalies. Extensions amcheck and amcheck_next provide a partial solution to these problems.

If you would like, in addition to physical verification, to verify all indexes in all databases using these extensions, you can specify the --amcheck flag when running the checkdb command:

pg_probackup checkdb -D data_dir --amcheck [connection_options]

You can skip physical verification by specifying the --skip-block-validation flag. In this case, you can omit backup_dir and data_dir options, only connection options are mandatory:

pg_probackup checkdb --amcheck --skip-block-validation [connection_options]

Logical verification can be done more thoroughly with the --heapallindexed flag by checking that all heap tuples that should be indexed are actually indexed, but at a higher cost in CPU, memory, and I/O consumption.
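
For example:

pg_probackup checkdb -D data_dir --amcheck --heapallindexed [connection_options]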

Validating a Backup #

pg_probackup calculates checksums for each file in a backup during the backup process. The process of checking checksums of backup data files is called backup validation. By default, validation is run immediately after the backup is taken and right before the restore, to detect possible backup corruption.

Note

The backup validation includes checking checksums for CFS files.

If you would like to skip backup validation, you can specify the --no-validate flag when running backup and restore commands.

To ensure that all the required backup files are present and can be used to restore the database cluster, you can run the validate command with the exact recovery target options you are going to use for recovery.

For example, to check that you can restore the database cluster from a backup copy up to transaction ID 4242, run this command:

pg_probackup validate -B backup_dir --instance=instance_name --recovery-target-xid=4242

If validation completes successfully, pg_probackup displays the corresponding message. If validation fails, you will receive an error message with the exact time, transaction ID, and LSN up to which the recovery is possible.

If you specify backup_id via the -i/--backup-id option, only the backup copy with the specified backup ID is validated. If backup_id is specified together with recovery target options, the validate command checks whether it is possible to restore the specified backup to the specified recovery target.

For example, to check that you can restore the database cluster from a backup copy with the SBOL6P backup ID up to the specified timestamp, run this command:

pg_probackup validate -B backup_dir --instance=instance_name -i SBOL6P --recovery-target-time="2024-04-10 18:18:26+03"

If you specify the backup_id of an incremental backup, all its parents starting from FULL backup will be validated.

If you omit all the parameters, all backups are validated.

Restoring a Cluster #

To restore the database cluster from a backup, run the restore command with at least the following options:

pg_probackup restore -B backup_dir --instance=instance_name -i backup_id

Where:

  • backup_dir is the backup catalog that stores all backup files and meta information.

  • instance_name is the backup instance for the cluster to be restored.

  • backup_id specifies the backup to restore the cluster from. If you omit this option, pg_probackup uses the latest valid backup available for the specified instance. If you specify an incremental backup to restore, pg_probackup automatically restores the underlying full backup and then sequentially applies all the necessary increments.

Once the restore command is complete, start the database service.

If you restore ARCHIVE backups, perform PITR, or specify the --restore-as-replica flag with the restore command to set up a standby server, pg_probackup creates a recovery configuration file once all data files are copied into the target directory. This file includes the minimal settings required for recovery, except for the password in the primary_conninfo parameter; you have to add the password manually or use the --primary-conninfo option, if required. For Postgres Pro 11, recovery settings are written into the recovery.conf file. Starting from Postgres Pro 12, pg_probackup writes these settings into the probackup_recovery.conf file and then includes it into postgresql.auto.conf.
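
For example, an illustrative sketch of restoring a standby with an explicit connection string (the host, port, and user are assumptions; the password is omitted and can be supplied via .pgpass or added manually):

pg_probackup restore -B backup_dir --instance=instance_name -D data_dir --restore-as-replica --primary-conninfo='host=primary_host port=5432 user=backup'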

If you are restoring a STREAM backup, the restore is complete at once, with the cluster returned to a self-consistent state at the point when the backup was taken. For ARCHIVE backups, Postgres Pro replays all available archived WAL segments, so the cluster is restored to the latest state possible within the current timeline. You can change this behavior by using the recovery target options with the restore command, as explained in the section called “Performing Point-in-Time (PITR) Recovery”.

If the cluster to restore contains tablespaces, pg_probackup restores them to their original location by default. To restore tablespaces to a different location, use the --tablespace-mapping/-T option. Otherwise, restoring the cluster on the same host will fail if tablespaces are in use, because the backup would have to be written to the same directories.

When using the --tablespace-mapping/-T option, you must provide absolute paths to the old and new tablespace directories. If a path happens to contain an equals sign (=), escape it with a backslash. This option can be specified multiple times for multiple tablespaces. For example:

pg_probackup restore -B backup_dir --instance=instance_name -D data_dir -j 4 -i backup_id -T tablespace1_dir=tablespace1_newdir -T tablespace2_dir=tablespace2_newdir

To restore the cluster on a remote host, follow the instructions in the section called “Using pg_probackup in the Remote Mode”.

Note

By default, the restore command validates the specified backup before restoring the cluster. If you run regular backup validations and would like to save time when restoring the cluster, you can specify the --no-validate flag to skip validation and speed up the recovery.

Incremental Restore #

The speed of restoring from a backup can be significantly improved by replacing only invalid and changed pages in an already existing Postgres Pro data directory using the incremental restore options of the restore command.

To restore the database cluster from a backup in incremental mode, run the restore command with the following options:

pg_probackup restore -B backup_dir --instance=instance_name -D data_dir -I incremental_mode
      

Where incremental_mode can take one of the following values:

  • CHECKSUM — read all data files in the data directory, validate the header and checksum of every page, and replace only invalid pages and those whose checksum and LSN do not match the corresponding page in the backup. This is the simplest and most fool-proof incremental mode; it is recommended by default.

  • LSN — read pg_control in the data directory to obtain the redo LSN and redo TLI, which allow determining the point in history (shift point) where the data directory state diverged from the history of the target backup chain. If the shift point is not within reach of the backup chain history, the restore is aborted. If it is, read all data files in the data directory, validate the header and checksum of every page, and replace only invalid pages and those with an LSN greater than the shift point. This mode offers a greater speedup compared to CHECKSUM, but relies on two conditions. First, the data_checksums parameter must be enabled in the data directory (to avoid corruption due to hint bits). This condition is checked at the start of the incremental restore, and the operation is aborted if checksums are disabled. Second, the pg_control file must be in sync with the state of the data directory. This condition cannot be checked at the start of the restore, so it is the user's responsibility to ensure that pg_control contains valid information. Therefore, it is not recommended to use the LSN mode in any situation where pg_control cannot be trusted or has been tampered with: after pg_resetxlog execution, after a restore from backup without recovery having been run, and so on.

  • NONE — regular restore without any incremental optimizations.

Regardless of the chosen incremental mode, pg_probackup checks that the postmaster in the destination directory is not running and that the system identifier is the same as in the backup.

Suppose you want to return an old primary as a replica after switchover using incremental restore in LSN mode:

==================================================================================================================================================
 Instance  Version  ID      Recovery Time                  Mode    WAL Mode  TLI  Time   Data   WAL  Zalg  Zratio  Start LSN   Stop LSN    Status 
==================================================================================================================================================
 node      17       SBOL8S  2024-04-09 18:19:43.707720+03  DELTA   STREAM    16/15    3s  114MB  64MB   lz4    1.42  0/3C003020  0/3E8D4930  OK     
 node      17       SBOL8G  2024-04-09 18:19:32.594670+03  PTRACK  STREAM    15/15    4s   30MB  16MB  zlib    2.23  0/31000028  0/310029E0  OK     
 node      17       SBOL83  2024-04-09 18:19:22.269595+03  PAGE    STREAM    15/15    7s   46MB  32MB  pglz    1.44  0/29000028  0/2A0000F8  OK     
 node      17       SBOL7P  2024-04-09 18:19:06.557301+03  FULL    STREAM    15/0     6s  144MB  16MB  zstd    2.47  0/22000028  0/220001C8  OK     

backup_user@backup_host:~$ pg_probackup restore -B /mnt/backups --instance=node -R -I lsn
INFO: Destination directory and tablespace directories are empty, disable incremental restore
INFO: Validating parents for backup SBOL8S
INFO: Validating backup SBOL7P
INFO: Backup SBOL7P data files are valid
INFO: Validating backup SBOL83
INFO: Backup SBOL83 data files are valid
INFO: Validating backup SBOL8G
INFO: Backup SBOL8G data files are valid
INFO: Validating backup SBOL8S
INFO: Backup SBOL8S data files are valid
INFO: Backup SBOL8S WAL segments are valid
INFO: Backup SBOL8S is valid.
INFO: Restoring the database from the backup starting at 2024-04-09 18:19:40+03
INFO: Start restoring backup files. PGDATA size: 616MB
INFO: Backup files are restored. Transferred bytes: 616MB, time elapsed: 2s
INFO: Restore incremental ratio (less is better): 100% (616MB/616MB)
INFO: Syncing restored files to disk
INFO: Restored backup files are synced, time elapsed: 2s
INFO: Restore of backup SBOL8S completed.

Note

Incremental restore is possible only for backups with program_version equal to or greater than 2.4.0.

Partial Restore #

If you have enabled partial restore before taking backups, you can restore only some of the databases using partial restore options with the restore command.

To restore the specified databases only, run the restore command with the following options:

pg_probackup restore -B backup_dir --instance=instance_name --db-include=database_name
      

The --db-include option can be specified multiple times. For example, to restore only databases db1 and db2, run the following command:

pg_probackup restore -B backup_dir --instance=instance_name --db-include=db1 --db-include=db2

To exclude one or more databases from restore, use the --db-exclude option:

pg_probackup restore -B backup_dir --instance=instance_name --db-exclude=database_name

The --db-exclude option can be specified multiple times. For example, to exclude the databases db1 and db2 from restore, run the following command:

pg_probackup restore -B backup_dir --instance=instance_name --db-exclude=db1 --db-exclude=db2

Partial restore relies on the lax behavior of the Postgres Pro recovery process toward truncated files. For recovery to work properly, files of the excluded databases are restored as zero-sized files. After the Postgres Pro cluster is successfully started, you must drop the excluded databases using the DROP DATABASE command.

To decouple a single cluster containing multiple databases into separate clusters with minimal downtime, you can do partial restore of the cluster as a standby using the --restore-as-replica option for specific databases.
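
As a sketch, restoring only a hypothetical database db1 into a new data directory as a standby could look like this (db1 and new_data_dir are placeholders for illustration):

pg_probackup restore -B backup_dir --instance=instance_name -D new_data_dir --db-include=db1 --restore-as-replica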

Note

The template0 and template1 databases are always restored.

Note

Due to recovery specifics of Postgres Pro versions earlier than 12, it is advisable to set the hot_standby parameter to off when running partial restore of a Postgres Pro cluster of such a version. Otherwise the recovery may fail.

Performing Point-in-Time (PITR) Recovery #

If you have enabled continuous WAL archiving before taking backups, you can restore the cluster to its state at an arbitrary point in time (recovery target) using recovery target options with the restore command.

You can use both STREAM and ARCHIVE backups for point-in-time recovery as long as the WAL archive is available at least starting from the time the backup was taken. If the -i/--backup-id option is omitted, pg_probackup automatically chooses the backup that is closest to the specified recovery target and starts the restore process; otherwise, pg_probackup tries to restore the specified backup to the specified recovery target.

  • To restore the cluster state at a specific time, specify the --recovery-target-time option in the timestamp format. For example:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target-time="2024-04-10 18:18:26+03"
    
  • To restore the cluster state up to a specific transaction ID, use the --recovery-target-xid option:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target-xid=687
    
  • To restore the cluster state up to a specific LSN, use the --recovery-target-lsn option:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target-lsn=16/B374D848
    
  • To restore the cluster state up to a specific named restore point, use the --recovery-target-name option:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target-name="before_app_upgrade"
    
  • To restore the backup to the latest state available in the WAL archive, use the --recovery-target option with the latest value:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target="latest"
    
  • To restore the cluster to the earliest point of consistency, use the --recovery-target option with the immediate value:

    pg_probackup restore -B backup_dir --instance=instance_name --recovery-target='immediate'
    

Using pg_probackup in the Remote Mode #

pg_probackup supports the remote mode that allows you to perform backup and restore operations remotely via SSH. In this mode, the backup catalog is stored on a local system, while the Postgres Pro instance to be backed up is located on a remote system. You must have pg_probackup installed on both systems.

Note

pg_probackup relies on passwordless SSH connection for communication between the hosts.

Note

In addition to the SSH connection, pg_probackup uses a regular connection to the database to manage the remote operation. See the section Configuring the Database Cluster for details of how to set up a database connection.

For example, to create an archive full backup of a Postgres Pro cluster located on a remote system with host address 192.168.0.2 on behalf of the postgres user via SSH connection through port 2302, run:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL --remote-user=postgres --remote-host=192.168.0.2 --remote-port=2302

To restore the latest available backup on a remote system with host address 192.168.0.2 on behalf of the postgres user via SSH connection through port 2302, run:

pg_probackup restore -B backup_dir --instance=instance_name --remote-user=postgres --remote-host=192.168.0.2 --remote-port=2302

Restoring an ARCHIVE backup or performing PITR in the remote mode requires additional information: the destination address, port, and user name for establishing an SSH connection from the host with the database to the host with the backup catalog. This information will be used by the restore_command to copy WAL segments from the archive to the Postgres Pro pg_wal directory.

To solve this problem, you can use Remote WAL Archive Options.

For example, to restore the latest backup on a remote system in the remote mode, connecting via SSH as user postgres to the database host 192.168.0.2 through port 2302 and as user backup to the backup catalog host 192.168.0.3 through port 2303, run:

pg_probackup restore -B backup_dir --instance=instance_name --remote-user=postgres --remote-host=192.168.0.2 --remote-port=2302 --archive-host=192.168.0.3 --archive-port=2303 --archive-user=backup

The provided arguments will be used to construct the restore_command:

restore_command = '"install_dir/pg_probackup" archive-get -B "backup_dir" --instance=instance_name --wal-file-path=%p --wal-file-name=%f --remote-host=192.168.0.3 --remote-port=2303 --remote-user=backup'

Alternatively, you can use the --restore-command option to provide the entire restore_command:

pg_probackup restore -B backup_dir --instance=instance_name --remote-user=postgres --remote-host=192.168.0.2 --remote-port=2302 --restore-command='"install_dir/pg_probackup" archive-get -B "backup_dir" --instance=instance_name --wal-file-path=%p --wal-file-name=%f --remote-host=192.168.0.3 --remote-port=2303 --remote-user=backup'

Note

The remote mode is currently unavailable for Windows systems.

Running pg_probackup on Parallel Threads #

backup, restore, merge, delete, catchup, checkdb, and validate processes can be executed on several parallel threads. This can significantly speed up pg_probackup operation given enough resources (CPU cores, disk, and network bandwidth).

Parallel execution is controlled by the -j/--threads command-line option. For example, to create a backup using four parallel threads, run:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL -j 4
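
The same option applies to the other commands listed above. For instance, a checkdb run on four parallel threads could look like this:

pg_probackup checkdb -B backup_dir --instance=instance_name -D data_dir -j 4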

Note

Parallel restore applies only to copying data from the backup catalog to the data directory of the cluster. When the Postgres Pro server is started, WAL records need to be replayed, and this cannot be done in parallel.

Configuring pg_probackup #

Once the backup catalog is initialized and a new backup instance is added, you can use the pg_probackup.conf configuration file located in the backup_dir/backups/instance_name directory to fine-tune pg_probackup configuration.

For example, backup and checkdb commands use a regular Postgres Pro connection. To avoid specifying connection options each time on the command line, you can set them in the pg_probackup.conf configuration file using the set-config command.
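
For example, a sketch of saving connection settings with set-config, assuming a hypothetical backupdb database and backup role:

pg_probackup set-config -B backup_dir --instance=instance_name -d backupdb -U backup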

Note

It is not recommended to edit pg_probackup.conf manually.

Initially, pg_probackup.conf contains the following settings:

  • PGDATA — the path to the data directory of the cluster to back up.

  • system-identifier — the unique identifier of the Postgres Pro instance.

Additionally, you can define remote, retention, logging, and compression settings using the set-config command:

pg_probackup set-config -B backup_dir --instance=instance_name
[--external-dirs=external_directory_path] [remote_options] [connection_options] [retention_options] [logging_options]

To view the current settings, run the following command:

pg_probackup show-config -B backup_dir --instance=instance_name

You can override the settings defined in pg_probackup.conf when running pg_probackup commands via the corresponding environment variables and/or command line options.

Specifying Connection Settings #

If you define connection settings in the pg_probackup.conf configuration file, you can omit connection options in all the subsequent pg_probackup commands. However, if the corresponding environment variables are set, they get higher priority. The options provided on the command line overwrite both environment variables and configuration file settings.

If nothing is given, the default values are taken. By default, pg_probackup tries to use a local connection via a Unix domain socket (localhost on Windows) and to get the database name and the user name from the PGUSER environment variable or the current OS user name.
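
For instance, a sketch illustrating this priority with a hypothetical backup role:

export PGUSER=backup
pg_probackup backup -B backup_dir --instance=instance_name -b DELTA
pg_probackup backup -B backup_dir --instance=instance_name -b DELTA -U postgres

Here, the first backup command would connect as backup (taken from PGUSER), while the -U option in the second command overrides the environment variable.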

Managing the Backup Catalog #

With pg_probackup, you can manage backups from the command line:

Viewing Backup Information #

To view the list of existing backups for every instance, run the command:

pg_probackup show -B backup_dir

pg_probackup displays the list of all the available backups. For example:

BACKUP INSTANCE 'node'
==================================================================================================================================================
 Instance  Version  ID      Recovery Time                  Mode    WAL Mode  TLI  Time   Data   WAL  Zalg  Zratio  Start LSN   Stop LSN    Status
==================================================================================================================================================
 node      17       SBOL94  2024-04-09 18:19:56.603355+03  FULL    ARCHIVE   1/0    6s  377MB  16MB   lz4    1.46  0/41000028  0/420000C0  OK
 node      17       SBOL8S  2024-04-09 18:19:43.707720+03  DELTA   STREAM    1/1    3s  114MB  64MB   lz4    1.42  0/3C003020  0/3E8D4930  OK
 node      17       SBOL8G  2024-04-09 18:19:32.594670+03  PTRACK  STREAM    1/1    4s   30MB  16MB  zlib    2.23  0/31000028  0/310029E0  OK
 node      17       SBOL83  2024-04-09 18:19:22.269595+03  PAGE    STREAM    1/1    7s   46MB  32MB  pglz    1.44  0/29000028  0/2A0000F8  OK
 node      17       SBOL7P  2024-04-09 18:19:06.557301+03  FULL    STREAM    1/0    6s  144MB  16MB  zstd    2.47  0/22000028  0/220001C8  OK

For each backup, the following information is provided:

  • Instance — the instance name.

  • Version — Postgres Pro major version.

  • ID — the backup identifier.

  • Recovery time — the earliest moment for which you can restore the state of the database cluster.

  • Mode — the method used to take this backup. Possible values: FULL, PAGE, DELTA, PTRACK.

  • WAL Mode — WAL delivery mode. Possible values: STREAM and ARCHIVE.

  • TLI — timeline identifiers of the current backup and its parent.

  • Time — the time it took to perform the backup.

  • Data — the size of the data files in this backup. This value does not include the size of WAL files. For STREAM backups, the total size of the backup can be calculated as Data + WAL.

  • WAL — the uncompressed size of WAL files that need to be applied during recovery for the backup to reach a consistent state.

  • Zalg — compression algorithm used during backup. Possible values: zlib, pglz, lz4, zstd, none.

  • Zratio — compression ratio calculated as uncompressed-bytes / data-bytes.

  • Start LSN — WAL log sequence number corresponding to the start of the backup process. REDO point for Postgres Pro recovery process to start from.

  • Stop LSN — WAL log sequence number corresponding to the end of the backup process. Consistency point for Postgres Pro recovery process.

  • Status — backup status. Possible values:

    • OK — the backup is complete and valid.

    • DONE — the backup is complete, but was not validated.

    • RUNNING — the backup is in progress.

    • MERGING — the backup is being merged.

    • MERGED — the backup data files were successfully merged, but its metadata is in the process of being updated. Only full backups can have this status.

    • DELETING — the backup files are being deleted.

    • CORRUPT — some of the backup files are corrupt.

    • ERROR — the backup was aborted because of an unexpected error.

    • ORPHAN — the backup is invalid because one of its parent backups is corrupt or missing.

You can restore the cluster from the backup only if the backup status is OK or DONE.

To get more detailed information about the backup, run the show command with the backup ID:

pg_probackup show -B backup_dir --instance=instance_name -i backup_id

The sample output is as follows:

#Configuration
backup-mode = FULL
stream = false
compress-alg = lz4
compress-level = 1
from-replica = false

#Compatibility
block-size = 8192
xlog-block-size = 8192
checksum-version = 1
program-version = 2.7.3
server-version = 17

#Result backup info
timelineid = 1
start-lsn = 0/41000028
stop-lsn = 0/420000C0
start-time = '2024-04-09 18:19:52+03'
end-time = '2024-04-09 18:19:58+03'
end-validation-time = '2024-04-09 18:19:59+03'
recovery-xid = 757
recovery-time = '2024-04-09 18:19:56.603355+03'
data-bytes = 395651278
wal-bytes = 16777216
uncompressed-bytes = 578552566
pgdata-bytes = 578552248
status = OK
primary_conninfo = 'user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable'
content-crc = 3862224379

Detailed output has additional attributes:

  • compress-alg — compression algorithm used during backup. Possible values: zlib, pglz, lz4, zstd, none.

  • compress-level — compression level used during backup.

  • from-replica — was this backup taken on a standby? Possible values: 1, 0.

  • block-size — the block_size setting of Postgres Pro cluster at the backup start.

  • checksum-version — are data_checksums enabled in the backed up Postgres Pro cluster? Possible values: 1, 0.

  • program-version — full version of pg_probackup binary used to create the backup.

  • start-time — the backup start time.

  • end-time — the backup end time.

  • end-validation-time — the backup validation end time.

  • expire-time — the point in time when a pinned backup can be removed in accordance with retention policy. This attribute is only available for pinned backups.

  • uncompressed-bytes — the size of data files before adding page headers and applying compression. You can evaluate the effectiveness of compression by comparing uncompressed-bytes to data-bytes if compression is used.

  • pgdata-bytes — the size of Postgres Pro cluster data files at the time of backup. You can evaluate the effectiveness of an incremental backup by comparing pgdata-bytes to uncompressed-bytes.

  • recovery-xid — transaction ID at the backup end time.

  • parent-backup-id — ID of the parent backup. Available only for incremental backups.

  • primary_conninfo — libpq connection parameters used to connect to the Postgres Pro cluster to take this backup. The password is not included.

  • note — text note attached to backup.

  • content-crc — CRC32 checksum of backup_content.control file. It is used to detect corruption of backup metainformation.

You can also get the detailed information about the backup in the JSON format:

pg_probackup show -B backup_dir --instance=instance_name --format=json -i backup_id

The sample output is as follows:

[
    {
        "instance": "node",
        "backups": [
            {
                "id": "SBOL94",
                "status": "OK",
                "start-time": "2024-04-09 18:19:52+03",
                "backup-mode": "FULL",
                "wal": "ARCHIVE",
                "compress-alg": "lz4",
                "compress-level": 1,
                "from-replica": "false",
                "block-size": 8192,
                "xlog-block-size": 8192,
                "checksum-version": 1,
                "program-version": "2.7.3",
                "server-version": "17",
                "current-tli": 16,
                "parent-tli": 2,
                "start-lsn": "0/41000028",
                "stop-lsn": "0/420000C0",
                "end-time": "2024-04-09 18:19:58+03",
                "end-validation-time": "2024-04-09 18:19:59+03",
                "recovery-xid": 757,
                "recovery-time": "2024-04-09 18:19:56.603355+03",
                "data-bytes": 395651278,
                "wal-bytes": 16777216,
                "uncompressed-bytes": 578552566,
                "pgdata-bytes": 578552248,
                "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                "content-crc": 3862224379
            }
        ]
    }
]

Viewing WAL Archive Information #

To view the information about WAL archive for every instance, run the command:

pg_probackup show -B backup_dir [--instance=instance_name] --archive

pg_probackup displays the list of all the available WAL files grouped by timelines. For example:

INFO: checking WAL file name "00000001000000000000001B"
INFO: checking WAL file name "00000001000000000000001C"
INFO: checking WAL file name "00000001000000000000001D"
INFO: checking WAL file name "00000001000000000000001E"
INFO: checking WAL file name "00000001000000000000001F"
INFO: checking WAL file name "000000010000000000000020"
INFO: checking WAL file name "000000010000000000000021"
INFO: checking WAL file name "000000010000000000000022"
INFO: checking WAL file name "000000010000000000000022.00000028.backup"
INFO: checking WAL file name "000000010000000000000023"
INFO: checking WAL file name "000000010000000000000024"
INFO: checking WAL file name "000000010000000000000025"
INFO: checking WAL file name "000000010000000000000026"
INFO: checking WAL file name "000000010000000000000027"
INFO: checking WAL file name "000000010000000000000028"
INFO: checking WAL file name "000000010000000000000029"
INFO: checking WAL file name "000000010000000000000029.00000028.backup"
INFO: checking WAL file name "00000001000000000000002A"
INFO: checking WAL file name "00000001000000000000002B"
INFO: checking WAL file name "00000001000000000000002C"
INFO: checking WAL file name "00000001000000000000002D"
INFO: checking WAL file name "00000001000000000000002E"
INFO: checking WAL file name "00000001000000000000002F"
INFO: checking WAL file name "000000010000000000000030"
INFO: checking WAL file name "000000010000000000000031"
INFO: checking WAL file name "000000010000000000000031.00000028.backup"
INFO: checking WAL file name "000000010000000000000032"
INFO: checking WAL file name "000000010000000000000033"
INFO: checking WAL file name "000000010000000000000034"
INFO: checking WAL file name "000000010000000000000035"
INFO: checking WAL file name "000000010000000000000036"
INFO: checking WAL file name "000000010000000000000037"
INFO: checking WAL file name "000000010000000000000038"
INFO: checking WAL file name "000000010000000000000039"
INFO: checking WAL file name "00000001000000000000003A"
INFO: checking WAL file name "00000001000000000000003B"
INFO: checking WAL file name "00000001000000000000003C"
INFO: checking WAL file name "00000001000000000000003C.00003020.backup"
INFO: checking WAL file name "00000001000000000000003D"
INFO: checking WAL file name "00000001000000000000003E"
INFO: checking WAL file name "00000001000000000000003F"
INFO: checking WAL file name "000000010000000000000040"
INFO: checking WAL file name "000000010000000000000041"
INFO: checking WAL file name "000000010000000000000041.00000028.backup"
INFO: checking WAL file name "000000010000000000000042"

ARCHIVE INSTANCE 'node'
================================================================================================================================
 TLI  Parent TLI  Switchpoint  Min Segno                 Max Segno                 N segments  Size   Zratio  N backups  Status 
================================================================================================================================
 1    0           0/0          00000001000000000000001B  000000010000000000000042  40          640MB  1.00    5          OK     

For each timeline, the following information is provided:

  • TLI — timeline identifier.

  • Parent TLI — identifier of the timeline from which this timeline branched off.

  • Switchpoint — LSN of the moment when the timeline branched off from its parent timeline.

  • Min Segno — the first WAL segment belonging to the timeline.

  • Max Segno — the last WAL segment belonging to the timeline.

  • N segments — number of WAL segments belonging to the timeline.

  • Size — the size that files take on disk.

  • Zratio — compression ratio calculated as N segments * wal_segment_size / Size.

  • N backups — number of backups belonging to the timeline. To get the details about backups, use the JSON format.

  • Status — status of the WAL archive for this timeline. Possible values:

    • OK — all WAL segments between Min Segno and Max Segno are present.

    • DEGRADED — some WAL segments between Min Segno and Max Segno are missing. To find out which files are lost, view this report in the JSON format. This status may appear if several WAL files (in the middle of the sequence) were deleted by the delete command with the --delete-wal option according to the retention policy. This status does not affect the correctness of the restore, but it may be impossible to perform PITR of the cluster to some recovery targets.

To get more detailed information about the WAL archive in the JSON format, run the command:

pg_probackup show -B backup_dir [--instance=instance_name] --archive --format=json

The sample output is as follows:

INFO: checking WAL file name "00000001000000000000001B"
INFO: checking WAL file name "00000001000000000000001C"
INFO: checking WAL file name "00000001000000000000001D"
INFO: checking WAL file name "00000001000000000000001E"
INFO: checking WAL file name "00000001000000000000001F"
INFO: checking WAL file name "000000010000000000000020"
INFO: checking WAL file name "000000010000000000000021"
INFO: checking WAL file name "000000010000000000000022"
INFO: checking WAL file name "000000010000000000000022.00000028.backup"
INFO: checking WAL file name "000000010000000000000023"
INFO: checking WAL file name "000000010000000000000024"
INFO: checking WAL file name "000000010000000000000025"
INFO: checking WAL file name "000000010000000000000026"
INFO: checking WAL file name "000000010000000000000027"
INFO: checking WAL file name "000000010000000000000028"
INFO: checking WAL file name "000000010000000000000029"
INFO: checking WAL file name "000000010000000000000029.00000028.backup"
INFO: checking WAL file name "00000001000000000000002A"
INFO: checking WAL file name "00000001000000000000002B"
INFO: checking WAL file name "00000001000000000000002C"
INFO: checking WAL file name "00000001000000000000002D"
INFO: checking WAL file name "00000001000000000000002E"
INFO: checking WAL file name "00000001000000000000002F"
INFO: checking WAL file name "000000010000000000000030"
INFO: checking WAL file name "000000010000000000000031"
INFO: checking WAL file name "000000010000000000000031.00000028.backup"
INFO: checking WAL file name "000000010000000000000032"
INFO: checking WAL file name "000000010000000000000033"
INFO: checking WAL file name "000000010000000000000034"
INFO: checking WAL file name "000000010000000000000035"
INFO: checking WAL file name "000000010000000000000036"
INFO: checking WAL file name "000000010000000000000037"
INFO: checking WAL file name "000000010000000000000038"
INFO: checking WAL file name "000000010000000000000039"
INFO: checking WAL file name "00000001000000000000003A"
INFO: checking WAL file name "00000001000000000000003B"
INFO: checking WAL file name "00000001000000000000003C"
INFO: checking WAL file name "00000001000000000000003C.00003020.backup"
INFO: checking WAL file name "00000001000000000000003D"
INFO: checking WAL file name "00000001000000000000003E"
INFO: checking WAL file name "00000001000000000000003F"
INFO: checking WAL file name "000000010000000000000040"
INFO: checking WAL file name "000000010000000000000041"
INFO: checking WAL file name "000000010000000000000041.00000028.backup"
INFO: checking WAL file name "000000010000000000000042"
[
    {
        "instance": "node",
        "timelines": [
            {
                "tli": 1,
                "parent-tli": 0,
                "switchpoint": "0/0",
                "min-segno": "00000001000000000000001B",
                "max-segno": "000000010000000000000042",
                "n-segments": 40,
                "size": 671088640,
                "zratio": 1.00,
                "closest-backup-id": "",
                "status": "OK",
                "lost-segments": [],
                "backups": [
                    {
                        "id": "SBOL94",
                        "status": "OK",
                        "start-time": "2024-04-09 18:19:52+03",
                        "backup-mode": "FULL",
                        "wal": "ARCHIVE",
                        "compress-alg": "lz4",
                        "compress-level": 1,
                        "from-replica": "false",
                        "block-size": 8192,
                        "xlog-block-size": 8192,
                        "checksum-version": 1,
                        "program-version": "2.7.3",
                        "server-version": "17",
                        "current-tli": 2,
                        "parent-tli": 0,
                        "start-lsn": "0/41000028",
                        "stop-lsn": "0/420000C0",
                        "end-time": "2024-04-09 18:19:58+03",
                        "end-validation-time": "2024-04-09 18:19:59+03",
                        "recovery-xid": 757,
                        "recovery-time": "2024-04-09 18:19:56.603355+03",
                        "data-bytes": 395651278,
                        "wal-bytes": 16777216,
                        "uncompressed-bytes": 578552566,
                        "pgdata-bytes": 578552248,
                        "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                        "content-crc": 3862224379
                    },
                    {
                        "id": "SBOL8S",
                        "status": "OK",
                        "start-time": "2024-04-09 18:19:40+03",
                        "parent-backup-id": "SBOL8G",
                        "backup-mode": "DELTA",
                        "wal": "STREAM",
                        "compress-alg": "lz4",
                        "compress-level": 1,
                        "from-replica": "false",
                        "block-size": 8192,
                        "xlog-block-size": 8192,
                        "checksum-version": 1,
                        "program-version": "2.7.3",
                        "server-version": "17",
                        "current-tli": 1,
                        "parent-tli": 1,
                        "start-lsn": "0/3C003020",
                        "stop-lsn": "0/3E8D4930",
                        "end-time": "2024-04-09 18:19:43+03",
                        "end-validation-time": "2024-04-09 18:19:44+03",
                        "recovery-xid": 757,
                        "recovery-time": "2024-04-09 18:19:43.707720+03",
                        "data-bytes": 119350434,
                        "wal-bytes": 67108864,
                        "uncompressed-bytes": 170044286,
                        "pgdata-bytes": 578552248,
                        "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                        "content-crc": 1259851036
                    },
                    {
                        "id": "SBOL8G",
                        "status": "OK",
                        "start-time": "2024-04-09 18:19:28+03",
                        "parent-backup-id": "SBOL83",
                        "backup-mode": "PTRACK",
                        "wal": "STREAM",
                        "compress-alg": "zlib",
                        "compress-level": 1,
                        "from-replica": "false",
                        "block-size": 8192,
                        "xlog-block-size": 8192,
                        "checksum-version": 1,
                        "program-version": "2.7.3",
                        "server-version": "17",
                        "current-tli": 1,
                        "parent-tli": 1,
                        "start-lsn": "0/31000028",
                        "stop-lsn": "0/310029E0",
                        "end-time": "2024-04-09 18:19:32+03",
                        "end-validation-time": "2024-04-09 18:19:33+03",
                        "recovery-xid": 756,
                        "recovery-time": "2024-04-09 18:19:32.594670+03",
                        "data-bytes": 31218302,
                        "wal-bytes": 16777216,
                        "uncompressed-bytes": 69610366,
                        "pgdata-bytes": 510263736,
                        "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                        "content-crc": 2293248595
                    },
                    {
                        "id": "SBOL83",
                        "status": "OK",
                        "start-time": "2024-04-09 18:19:15+03",
                        "parent-backup-id": "SBOL7P",
                        "backup-mode": "PAGE",
                        "wal": "STREAM",
                        "compress-alg": "pglz",
                        "compress-level": 1,
                        "from-replica": "false",
                        "block-size": 8192,
                        "xlog-block-size": 8192,
                        "checksum-version": 1,
                        "program-version": "2.7.3",
                        "server-version": "17",
                        "current-tli": 1,
                        "parent-tli": 1,
                        "start-lsn": "0/29000028",
                        "stop-lsn": "0/2A0000F8",
                        "end-time": "2024-04-09 18:19:22+03",
                        "end-validation-time": "2024-04-09 18:19:22+03",
                        "recovery-xid": 755,
                        "recovery-time": "2024-04-09 18:19:22.269595+03",
                        "data-bytes": 48394744,
                        "wal-bytes": 33554432,
                        "uncompressed-bytes": 69577598,
                        "pgdata-bytes": 441975224,
                        "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                        "content-crc": 343227200
                    },
                    {
                        "id": "SBOL7P",
                        "status": "OK",
                        "start-time": "2024-04-09 18:19:01+03",
                        "backup-mode": "FULL",
                        "wal": "STREAM",
                        "compress-alg": "zstd",
                        "compress-level": 1,
                        "from-replica": "false",
                        "block-size": 8192,
                        "xlog-block-size": 8192,
                        "checksum-version": 1,
                        "program-version": "2.7.3",
                        "server-version": "17",
                        "current-tli": 1,
                        "parent-tli": 0,
                        "start-lsn": "0/22000028",
                        "stop-lsn": "0/220001C8",
                        "end-time": "2024-04-09 18:19:07+03",
                        "end-validation-time": "2024-04-09 18:19:09+03",
                        "recovery-xid": 754,
                        "recovery-time": "2024-04-09 18:19:06.557301+03",
                        "data-bytes": 151177272,
                        "wal-bytes": 16777216,
                        "uncompressed-bytes": 373695222,
                        "pgdata-bytes": 373694904,
                        "primary_conninfo": "user=backup channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable",
                        "content-crc": 1636300818
                    }
                ]
            }
        ]
    }
]

Most fields are consistent with the plain format, with some exceptions:

  • The size is in bytes.

  • The closest-backup-id attribute contains the ID of the most recent valid backup that belongs to one of the previous timelines. You can use this backup to perform point-in-time recovery to this timeline. If such a backup does not exist, this string is empty.

  • The lost-segments array provides information about intervals of missing segments in DEGRADED timelines. In OK timelines, the lost-segments array is empty.

  • The backups array lists all backups belonging to the timeline. If the timeline has no backups, this array is empty.

Configuring Retention Policy #

With pg_probackup, you can configure retention policy to remove redundant backups, clean up unneeded WAL files, and pin specific backups to ensure they are kept for a specified time, as explained in the sections below. All these actions can be combined in any way.

Removing Redundant Backups #

By default, all backup copies created with pg_probackup are stored in the specified backup catalog. To save disk space, you can configure retention policy to remove redundant backup copies.

To configure retention policy, set one or more of the following variables in the pg_probackup.conf file via set-config:

--retention-redundancy=redundancy

Specifies the number of full backup copies to keep in the backup catalog.

--retention-window=window

Defines the earliest point in time for which pg_probackup can complete the recovery. This option is set in the number of days from the current moment. For example, if retention-window=7, pg_probackup must keep at least one backup copy that is older than seven days, with all the corresponding WAL files, and all the backups that follow.

If both --retention-redundancy and --retention-window options are set, both these conditions have to be taken into account when purging the backup catalog. For example, if you set --retention-redundancy=2 and --retention-window=7, pg_probackup has to keep two full backup copies, as well as all the backups required to ensure recoverability for the last seven days:

pg_probackup set-config -B backup_dir --instance=instance_name --retention-redundancy=2 --retention-window=7

It is recommended to always keep at least the two latest full backups to avoid errors when creating incremental backups.

To clean up the backup catalog in accordance with retention policy, you have to run the delete command with retention flags, as shown below, or use the backup command with these flags to process the outdated backup copies right when the new backup is created.
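
As a sketch, taking a new DELTA backup and processing the outdated copies in the same run could look like this:

pg_probackup backup -B backup_dir --instance=instance_name -b DELTA --delete-expired --merge-expired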

For example, to remove all backup copies that no longer satisfy the defined retention policy, run the following command with the --delete-expired flag:

pg_probackup delete -B backup_dir --instance=instance_name --delete-expired

If you would like to also remove the WAL files that are no longer required for any of the backups, you should also specify the --delete-wal flag:

pg_probackup delete -B backup_dir --instance=instance_name --delete-expired --delete-wal

You can also set or override the current retention policy by specifying --retention-redundancy and --retention-window options directly when running delete or backup commands:

pg_probackup delete -B backup_dir --instance=instance_name --delete-expired --retention-window=7 --retention-redundancy=2

Since incremental backups require that their parent full backup and all the preceding incremental backups are available, expired backups in such a chain cannot be removed while at least one incremental backup in the chain still satisfies the retention policy. To avoid keeping expired backups that are still required to restore an active incremental one, you can merge them into this backup using the --merge-expired flag when running the backup or delete commands.

Suppose you have backed up the node instance in the backup_dir directory, with the --retention-window option set to 6 and --retention-redundancy option set to 2, and you have the following backups available on August 01, 2024:

BACKUP INSTANCE 'node'
===========================================================================================================================================
 Instance  Version  ID      Recovery Time           Mode   WAL Mode  TLI  Time    Data   WAL  Zalg  Zratio  Start LSN   Stop LSN    Status 
===========================================================================================================================================
 node      17       SHJ1N9  2024-08-01 09:50:00+03  FULL   ARCHIVE   1/0    5s    13MB  16MB  zstd    2,81  0/1D000028  0/1E0000C0  OK
 node      17       SHJ1N8  2024-08-01 09:49:59+03  DELTA  ARCHIVE   1/1    5s  6432kB  16MB  zstd    1,06  0/1A000028  0/1B0000C0  OK
 node      17       SHH6Z8  2024-07-31 09:49:59+03  PAGE   ARCHIVE   1/1    5s  6431kB  16MB  zstd    1,06  0/17000028  0/180000C0  OK
 node      17       SHFCB6  2024-07-30 09:49:57+03  FULL   ARCHIVE   1/0    5s    12MB  16MB  zstd    2,83  0/14000028  0/150000C0  OK
 node      17       SH9SB5  2024-07-27 09:49:56+03  PAGE   ARCHIVE   1/1    5s  6432kB  16MB  zstd    1,06  0/11000028  0/120000C0  OK
 ----------------------------------------------------------retention window-----------------------------------------------------------
 node      17       SH62Z5  2024-07-25 09:49:56+03  DELTA  ARCHIVE   1/1    5s  6431kB  16MB  zstd    1,06  0/E000028   0/F0000C0   OK
 node      17       SH48B3  2024-07-24 09:49:54+03  FULL   ARCHIVE   1/0    5s    12MB  16MB  zstd    2,86  0/B000028   0/C0000C0   OK
 node      17       SGWTN3  2024-07-20 09:49:54+03  PAGE   ARCHIVE   1/1    5s  6432kB  16MB  zstd    1,06  0/8000028   0/90000C0   OK
 node      17       SGUYZ2  2024-07-19 09:49:53+03  DELTA  ARCHIVE   1/1    5s  6442kB  16MB  zstd    1,07  0/5000028   0/60000C0   OK
 node      17       SGT4B1  2024-07-18 09:49:52+03  FULL   ARCHIVE   1/0    5s    11MB  16MB  zstd    2,89  0/2000028   0/3003A28   OK

If you run the delete command with the --delete-expired flag, the backups with IDs SGT4B1, SGUYZ2, and SGWTN3 will be removed. SGT4B1 is expired both according to the retention window and due to redundancy (the required number of full backups has already been retained), while SGUYZ2 and SGWTN3 are removed because their base full backup is expired.

Running the delete command with the --merge-expired flag will merge backups SH48B3 and SH62Z5 into SH9SB5: SH9SB5 is the oldest incremental backup that satisfies the retention policy, so the expired incremental backup SH62Z5 and the expired full backup SH48B3 can be merged into it.

pg_probackup delete -B backup_dir --instance=node --delete-expired --merge-expired
pg_probackup show -B backup_dir
BACKUP INSTANCE 'node'
===========================================================================================================================================
 Instance  Version  ID      Recovery Time           Mode   WAL Mode  TLI  Time    Data   WAL  Zalg  Zratio  Start LSN   Stop LSN    Status
===========================================================================================================================================
 node      17       SHJ1N9  2024-08-01 09:50:00+03  FULL   ARCHIVE   1/0    5s    13MB  16MB  zstd    2,81  0/1D000028  0/1E0000C0  OK
 node      17       SHJ1N8  2024-08-01 09:49:59+03  DELTA  ARCHIVE   1/1    5s  6432kB  16MB  zstd    1,06  0/1A000028  0/1B0000C0  OK
 node      17       SHH6Z8  2024-07-31 09:49:59+03  PAGE   ARCHIVE   1/1    5s  6431kB  16MB  zstd    1,06  0/17000028  0/180000C0  OK
 node      17       SHFCB6  2024-07-30 09:49:57+03  FULL   ARCHIVE   1/0    5s    12MB  16MB  zstd    2,83  0/14000028  0/150000C0  OK
 node      17       SH9SB5  2024-07-27 09:49:56+03  FULL   ARCHIVE   1/0    1s    12MB  16MB  zstd    2,84  0/11000028  0/120000C0  OK

The Time field for the merged backup displays the time required for the merge.

Pinning Backups #

If you need to keep certain backups longer than the established retention policy allows, you can pin them for an arbitrary amount of time. For example:

pg_probackup set-backup -B backup_dir --instance=instance_name -i backup_id --ttl=30d

This command sets the expiration time of the specified backup to 30 days starting from the time indicated in its recovery-time attribute.

You can also explicitly set the expiration time for a backup using the --expire-time option. For example:

pg_probackup set-backup -B backup_dir --instance=instance_name -i backup_id --expire-time="2027-04-09 18:21:32+00"

Alternatively, you can use the --ttl and --expire-time options with the backup command to pin the newly created backup:

pg_probackup backup -B backup_dir --instance=instance_name -b FULL --ttl=30d
pg_probackup backup -B backup_dir --instance=instance_name -b FULL --expire-time="2027-04-09 18:21:32+00"

To check if the backup is pinned, run the show command:

pg_probackup show -B backup_dir --instance=instance_name -i backup_id

If the backup is pinned, it has the expire-time attribute that displays its expiration time:

...
recovery-time = '2024-04-09 18:21:32+00'
expire-time = '2027-04-09 18:21:32+00'
data-bytes = 22288792
...

You can unpin the backup by setting the --ttl option to zero:

pg_probackup set-backup -B backup_dir --instance=instance_name -i backup_id --ttl=0

Note

A pinned incremental backup implicitly pins all its parent backups. If you unpin such a backup later, its implicitly pinned parents will also be automatically unpinned.

Configuring WAL Archive Retention Policy #

When continuous WAL archiving is enabled, archived WAL segments can take a lot of disk space. Even if you delete old backup copies from time to time, the --delete-wal flag can purge only those WAL segments that do not apply to any of the remaining backups in the backup catalog. However, if point-in-time recovery is critical only for the most recent backups, you can configure WAL archive retention policy to keep WAL archive of limited depth and win back some more disk space.

To configure WAL archive retention policy, you have to run the set-config command with the --wal-depth option that specifies the number of backups that can be used for PITR. This setting applies to all the timelines, so you should be able to perform PITR for the same number of backups on each timeline, if available. Pinned backups are not included into this count: if one of the latest backups is pinned, pg_probackup ensures that PITR is possible for one extra backup.
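
For example, a sketch of limiting PITR to the most recent backup on each timeline:

pg_probackup set-config -B backup_dir --instance=instance_name --wal-depth=1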

To remove WAL segments that do not satisfy the defined WAL archive retention policy, you simply have to run the delete or backup command with the --delete-wal flag. For archive backups, WAL segments between Start LSN and Stop LSN are always kept intact, so such backups remain valid regardless of the --wal-depth setting and can still be restored, if required.

You can also use the --wal-depth option with the delete and backup commands to override the previously defined WAL archive retention policy and purge old WAL segments on the fly.

Suppose you have backed up the node instance in the backup_dir directory and configured continuous WAL archiving:

pg_probackup show -B backup_dir --instance=node
==================================================================================================================================================
 Instance  Version  ID      Recovery Time                  Mode   WAL Mode  TLI  Time    Data   WAL  Zalg  Zratio  Start LSN   Stop LSN    Status 
==================================================================================================================================================
 node      17       SBOLDA  2024-04-09 18:22:23.147138+03  DELTA  STREAM    1/1    1s  1165kB  16MB  zstd    1.09  0/6F000028  0/6F000190  OK     
 node      17       SBOLCY  2024-04-09 18:22:16.079841+03  FULL   STREAM    1/0   10s   278MB  16MB  zstd    2.46  0/6D000028  0/6D000190  OK     
 node      17       SBOLCW  2024-04-09 18:22:10.154022+03  DELTA  STREAM    1/1    2s  1364kB  16MB  zstd    1.01  0/6B000028  0/6B000190  OK     
 node      17       SBOLCS  2024-04-09 18:22:07.521646+03  DELTA  STREAM    1/1    4s    78MB  16MB  zstd    2.41  0/69000028  0/69000190  OK     
 node      17       SBOLCC  2024-04-09 18:21:55.830115+03  FULL   STREAM    1/0   15s   278MB  96MB  zstd    2.46  0/600060C8  0/64FE6640  OK     
 node      17       SBOLBW  2024-04-09 18:21:38.399702+03  FULL   STREAM    1/0   12s   278MB  96MB  zstd    2.46  0/54001830  0/589E5908  OK     

You can check the state of the WAL archive by running the show command with the --archive flag:

pg_probackup show -B backup_dir --instance=node --archive
INFO: checking WAL file name "000000010000000000000052"
INFO: checking WAL file name "000000010000000000000053"
INFO: checking WAL file name "000000010000000000000054"
INFO: checking WAL file name "000000010000000000000054.00001830.backup"
INFO: checking WAL file name "000000010000000000000055"
INFO: checking WAL file name "000000010000000000000056"
INFO: checking WAL file name "000000010000000000000057"
INFO: checking WAL file name "000000010000000000000058"
INFO: checking WAL file name "000000010000000000000059"
INFO: checking WAL file name "00000001000000000000005A"
INFO: checking WAL file name "00000001000000000000005B"
INFO: checking WAL file name "00000001000000000000005C"
INFO: checking WAL file name "00000001000000000000005D"
INFO: checking WAL file name "00000001000000000000005E"
INFO: checking WAL file name "00000001000000000000005F"
INFO: checking WAL file name "000000010000000000000060"
INFO: checking WAL file name "000000010000000000000060.000060C8.backup"
INFO: checking WAL file name "000000010000000000000061"
INFO: checking WAL file name "000000010000000000000062"
INFO: checking WAL file name "000000010000000000000063"
INFO: checking WAL file name "000000010000000000000064"
INFO: checking WAL file name "000000010000000000000065"
INFO: checking WAL file name "000000010000000000000066"
INFO: checking WAL file name "000000010000000000000067"
INFO: checking WAL file name "000000010000000000000068"
INFO: checking WAL file name "000000010000000000000069"
INFO: checking WAL file name "000000010000000000000069.00000028.backup"
INFO: checking WAL file name "00000001000000000000006A"
INFO: checking WAL file name "00000001000000000000006B"
INFO: checking WAL file name "00000001000000000000006B.00000028.backup"
INFO: checking WAL file name "00000001000000000000006C"
INFO: checking WAL file name "00000001000000000000006D"
INFO: checking WAL file name "00000001000000000000006D.00000028.backup"
INFO: checking WAL file name "00000001000000000000006E"
INFO: checking WAL file name "00000001000000000000006F"
INFO: checking WAL file name "00000001000000000000006F.00000028.backup"

ARCHIVE INSTANCE 'node'
================================================================================================================================
 TLI  Parent TLI  Switchpoint  Min Segno                 Max Segno                 N segments  Size   Zratio  N backups  Status 
================================================================================================================================
 1    0           0/0          000000010000000000000052  00000001000000000000006F  30          480MB  1.00    6          OK     

A WAL purge without --wal-depth cannot achieve much: only two segments are removed:

pg_probackup delete -B backup_dir --instance=node --delete-wal
INFO: checking WAL file name "000000010000000000000054"
INFO: checking WAL file name "000000010000000000000054.00001830.backup"
INFO: checking WAL file name "000000010000000000000055"
INFO: checking WAL file name "000000010000000000000056"
INFO: checking WAL file name "000000010000000000000057"
INFO: checking WAL file name "000000010000000000000058"
INFO: checking WAL file name "000000010000000000000059"
INFO: checking WAL file name "00000001000000000000005A"
INFO: checking WAL file name "00000001000000000000005B"
INFO: checking WAL file name "00000001000000000000005C"
INFO: checking WAL file name "00000001000000000000005D"
INFO: checking WAL file name "00000001000000000000005E"
INFO: checking WAL file name "00000001000000000000005F"
INFO: checking WAL file name "000000010000000000000060"
INFO: checking WAL file name "000000010000000000000060.000060C8.backup"
INFO: checking WAL file name "000000010000000000000061"
INFO: checking WAL file name "000000010000000000000062"
INFO: checking WAL file name "000000010000000000000063"
INFO: checking WAL file name "000000010000000000000064"
INFO: checking WAL file name "000000010000000000000065"
INFO: checking WAL file name "000000010000000000000066"
INFO: checking WAL file name "000000010000000000000067"
INFO: checking WAL file name "000000010000000000000068"
INFO: checking WAL file name "000000010000000000000069"
INFO: checking WAL file name "000000010000000000000069.00000028.backup"
INFO: checking WAL file name "00000001000000000000006A"
INFO: checking WAL file name "00000001000000000000006B"
INFO: checking WAL file name "00000001000000000000006B.00000028.backup"
INFO: checking WAL file name "00000001000000000000006C"
INFO: checking WAL file name "00000001000000000000006D"
INFO: checking WAL file name "00000001000000000000006D.00000028.backup"
INFO: checking WAL file name "00000001000000000000006E"
INFO: checking WAL file name "00000001000000000000006F"
INFO: checking WAL file name "00000001000000000000006F.00000028.backup"

ARCHIVE INSTANCE 'node'
================================================================================================================================
 TLI  Parent TLI  Switchpoint  Min Segno                 Max Segno                 N segments  Size   Zratio  N backups  Status 
================================================================================================================================
 1    0           0/0          000000010000000000000054  00000001000000000000006F  28          448MB  1.00    6          OK     

If you would like, for example, to keep only those WAL segments that can be applied to the latest valid backup, set the --wal-depth option to 1:

pg_probackup delete -B backup_dir --instance=node --delete-wal --wal-depth=1
INFO: checking WAL file name "00000001000000000000006F"
INFO: checking WAL file name "00000001000000000000006F.00000028.backup"

ARCHIVE INSTANCE 'node'
===============================================================================================================================
 TLI  Parent TLI  Switchpoint  Min Segno                 Max Segno                 N segments  Size  Zratio  N backups  Status 
===============================================================================================================================
 1    0           0/0          00000001000000000000006F  00000001000000000000006F  1           16MB  1.00    6          OK     

Alternatively, you can use the --wal-depth option with the backup command:

pg_probackup backup -B backup_dir --instance=node -b DELTA --wal-depth=1 --delete-wal
INFO: checking WAL file name "000000010000000000000071"
INFO: checking WAL file name "000000010000000000000071.00000028.backup"

ARCHIVE INSTANCE 'node'
===============================================================================================================================
 TLI  Parent TLI  Switchpoint  Min Segno                 Max Segno                 N segments  Size  Zratio  N backups  Status 
===============================================================================================================================
 1    0           0/0          000000010000000000000071  000000010000000000000071  1           16MB  1.00    7          OK     

Merging Backups #

As you take more and more incremental backups, the total size of the backup catalog can substantially grow. To save disk space, you can merge incremental backups to their parent full backup by running the merge command, specifying the backup ID of the most recent incremental backup you would like to merge:

pg_probackup merge -B backup_dir --instance=instance_name -i backup_id

This command merges backups that belong to a common incremental backup chain. If you specify a full backup, it will be merged with its first incremental backup. If you specify an incremental backup, it will be merged into its parent full backup, together with all the incremental backups between them. Once the merge is complete, the full backup takes in all the merged data, and the incremental backups are removed as redundant. Thus, the merge operation is virtually equivalent to retaking a full backup and removing all the outdated backups, but it allows you to save a lot of time, especially for large data volumes, as well as I/O and network traffic if you are using pg_probackup in the remote mode.

Before the merge, pg_probackup validates all the affected backups to ensure that they are valid. You can check the current backup status by running the show command with the backup ID:

pg_probackup show -B backup_dir --instance=instance_name -i backup_id

If the merge is still in progress, the backup status is displayed as MERGING. For full backups, it can also be shown as MERGED while the metadata is being updated at the final stage of the merge. The merge is idempotent, so you can restart the merge if it was interrupted.

Deleting Backups #

To delete a backup that is no longer required, run the following command:

pg_probackup delete -B backup_dir --instance=instance_name -i backup_id

This command will delete the backup with the specified backup_id, together with all the incremental backups that descend from backup_id, if any. This way you can delete some recent incremental backups, retaining the underlying full backup and some of the incremental backups that follow it.

To delete obsolete WAL files that are not necessary to restore any of the remaining backups, use the --delete-wal flag:

pg_probackup delete -B backup_dir --instance=instance_name --delete-wal

To delete backups that are expired according to the current retention policy, use the --delete-expired flag:

pg_probackup delete -B backup_dir --instance=instance_name --delete-expired

Expired backups cannot be removed while at least one incremental backup that satisfies the retention policy is based on them. If you would like to minimize the number of backups still required to keep incremental backups valid, specify the --merge-expired flag when running this command:

pg_probackup delete -B backup_dir --instance=instance_name --delete-expired --merge-expired

In this case, pg_probackup searches for the oldest incremental backup that satisfies the retention policy and merges this backup with the underlying full and incremental backups that have already expired, thus making it a full backup. Once the merge is complete, the remaining expired backups are deleted.

Before merging or deleting backups, you can run the delete command with the --dry-run flag, which displays the status of all the available backups according to the current retention policy, without performing any irreversible actions.

To delete all backups with a specific status, use the --status option:

pg_probackup delete -B backup_dir --instance=instance_name --status=ERROR
    

Deleting backups by status ignores established retention policies.

Cloning and Synchronizing Postgres Pro Instance #

pg_probackup can create a copy of a Postgres Pro instance directly, without using the backup catalog. To do this, you can run the catchup command. It can be useful in the following cases:

  • To add a new standby server.

    Usually, pg_basebackup is used to create a copy of a Postgres Pro instance. If the data directory of the destination instance is empty, the catchup command works similarly, but it can be faster if run in parallel mode.

  • To have a fallen-behind standby server catch up with the primary.

    Under write-intensive load, replicas may fail to replay WAL fast enough to keep up with the primary and hence may lag behind. The usual solution, creating a new replica and switching over to it, requires a lot of extra space and data transfer. The catchup command allows you to update an existing replica much faster by transferring only the differences from the primary.

catchup is different from other pg_probackup operations:

  • The backup catalog is not required.

  • Only the STREAM WAL delivery mode is supported.

  • Copying external directories is not supported.

  • DDL commands CREATE TABLESPACE/DROP TABLESPACE cannot be run simultaneously with catchup.

  • catchup takes configuration files, such as postgresql.conf, postgresql.auto.conf, or pg_hba.conf, from the source server and overwrites them on the target server. The --exclude-path option allows you to keep the configuration files intact.

Before cloning/synchronizing a Postgres Pro instance, set up the source server and ensure that it is running and accepting connections. To clone/sync a Postgres Pro instance, run the catchup command on the server with the destination instance as follows:

pg_probackup catchup -b catchup_mode --source-pgdata=path_to_pgdata_on_remote_server --destination-pgdata=path_to_local_dir --stream [connection_options] [remote_options]

Where catchup_mode can take one of the following values:

  • FULL — creates a full copy of the Postgres Pro instance. The data directory of the destination instance must be empty for this mode.

  • DELTA — reads all data files in the data directory and creates an incremental copy for pages that have changed since the destination instance was shut down.

  • PTRACK — tracks page changes on the fly and reads and copies only those pages that have changed since the point of divergence of the source and destination instances.

    Warning

    PTRACK catchup mode requires PTRACK version 2.0 or later and hence Postgres Pro 11 or later.

By specifying the --stream option, you can set STREAM WAL delivery mode of copying, which will include all the necessary WAL files by streaming them from the server via replication protocol.

You can use connection_options to specify the connection to the source database cluster. If it is located on a different server, also specify remote_options.

If the source database cluster contains tablespaces that must be located in a different directory, additionally specify the --tablespace-mapping option:

pg_probackup catchup -b catchup_mode --source-pgdata=path_to_pgdata_on_remote_server --destination-pgdata=path_to_local_dir --stream --tablespace-mapping=OLDDIR=NEWDIR

To run the catchup command on parallel threads, specify the number of threads with the --threads option:

pg_probackup catchup -b catchup_mode --source-pgdata=path_to_pgdata_on_remote_server --destination-pgdata=path_to_local_dir --stream --threads=num_threads

Before cloning/synchronizing a Postgres Pro instance, you can run the catchup command with the --dry-run flag to estimate the size of data files to be transferred without making any changes on disk:

pg_probackup catchup -b catchup_mode --source-pgdata=path_to_pgdata_on_remote_server --destination-pgdata=path_to_local_dir --stream --dry-run

For example, assume that a remote standby server whose Postgres Pro instance uses the /replica-pgdata data directory has fallen behind. To sync this instance with the one in the /master-pgdata data directory, you can run the catchup command in the PTRACK mode on four parallel threads as follows:

pg_probackup catchup --source-pgdata=/master-pgdata --destination-pgdata=/replica-pgdata -p 5432 -d postgres -U remote-postgres-user --stream --backup-mode=PTRACK --remote-host=remote-hostname --remote-user=remote-unix-username -j 4 --exclude-path=postgresql.conf --exclude-path=postgresql.auto.conf --exclude-path=pg_hba.conf --exclude-path=pg_ident.conf

Note that in this example, the configuration files will not be overwritten during synchronization.

Another example shows how you can add a new remote standby server with the Postgres Pro data directory /replica-pgdata by running the catchup command in the FULL mode on four parallel threads:

pg_probackup catchup --source-pgdata=/master-pgdata --destination-pgdata=/replica-pgdata -p 5432 -d postgres -U remote-postgres-user --stream --backup-mode=FULL --remote-host=remote-hostname --remote-user=remote-unix-username -j 4

Command-Line Reference #

Commands #

This section describes pg_probackup commands. Optional parameters are enclosed in square brackets. For detailed parameter descriptions, see the section Options.

version #

pg_probackup version [--format=json]

Prints pg_probackup version and edition, as well as Postgres Pro version and edition.

If --format=json is specified, the output is printed in the JSON format. This may be needed for native integration with JSON-based applications, such as PPEM. Example of a JSON output:

pg_probackup version --format=json
{
    "pg_probackup":
    {
        "version": "2.8.2",
        "edition": "enterprise"
    },
    "database":
    {
        "type": "Postgres Pro Enterprise",
        "version": "16.3.1"
    },
    "compressions": [zlib, pglz, lz4, zstd]
}

help #

pg_probackup help [command]

Displays the synopsis of pg_probackup commands. If one of the pg_probackup commands is specified, shows detailed information about the options that can be used with this command.

init #

pg_probackup init -B backup_dir [--skip-if-exists] [s3_options] [--help]
[logging_options]

Initializes the backup catalog in backup_dir that will store backup copies, WAL archive, and meta information for the backed up database clusters. If the specified backup_dir already exists, it must be empty. Otherwise, pg_probackup displays a corresponding error message. You can ignore this error by specifying the --skip-if-exists option: although the backup catalog will not be initialized, the command will return the 0 exit code.

For details, see the section Initializing the Backup Catalog.

add-instance #

pg_probackup add-instance -B backup_dir -D data_dir --instance=instance_name
[--skip-if-exists] [s3_options] [--help] [logging_options]

Initializes a new backup instance inside the backup catalog backup_dir and generates the pg_probackup.conf configuration file that controls pg_probackup settings for the cluster with the specified data_dir data directory. If the instance has already been added to the backup catalog, you can ignore the error by specifying the --skip-if-exists option.

For details, see the section Adding a New Backup Instance.

del-instance #

pg_probackup del-instance -B backup_dir --instance=instance_name [s3_options] [--help]
[logging_options]

Deletes all backups and WAL files associated with the specified instance.

set-config #

pg_probackup set-config -B backup_dir --instance=instance_name
[--help] [--pgdata=pgdata-path]
[--retention-redundancy=redundancy][--retention-window=window][--wal-depth=wal_depth]
[--compress-algorithm=compression_algorithm] [--compress-level=compression_level]
[-d dbname] [-h host] [-p port] [-U username]
[--archive-timeout=timeout] [--external-dirs=external_directory_path]
[--restore-command=cmdline]
[remote_options] [remote_wal_archive_options] [logging_options] [s3_options]

Adds the specified connection, compression, retention, logging, and external directory settings into the pg_probackup.conf configuration file, or modifies the previously defined values.

For all available settings, see the Options section.

It is not recommended to edit pg_probackup.conf manually.

set-backup #

pg_probackup set-backup -B backup_dir --instance=instance_name -i backup_id
{--ttl=ttl | --expire-time=time}
[--note=backup_note] [s3_options] [--help] [logging_options]

Sets the provided backup-specific settings into the backup.control configuration file, or modifies the previously defined values.

--note=backup_note

Sets the text note for backup copy. If backup_note contains newline characters, then only the substring before the first newline character will be saved. The maximum size of a text note is 1 KB. The 'none' value removes the current note.
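
For example, to pin a backup for 30 days and attach a note, a command might look like this (the instance name node is a placeholder):

pg_probackup set-backup -B backup_dir --instance=node -i backup_id --ttl=30d --note="weekly full"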

For all available pinning settings, see the section Pinning Options.

show-config #

pg_probackup show-config -B backup_dir --instance instance_name [--format=plain|json] [s3_options]
[--no-scale-units] [logging_options]

Displays all the current pg_probackup configuration settings, including those that are specified in the pg_probackup.conf configuration file located in the backup_dir/backups/instance_name directory and those that were provided on a command line. You can specify the --format=json option to get the result in the JSON format. By default, configuration settings are shown as plain text.

You can also specify the --no-scale-units option to display time and memory configuration settings in their base (unscaled) units. Otherwise, the values are scaled to larger units for optimal display. For example, if archive-timeout is 300, then 5min is displayed, but if archive-timeout is 301, then 301s is displayed. Also, if the --no-scale-units option is specified, configuration settings are displayed without units and for the JSON format, numeric and boolean values are not enclosed in quotes. This facilitates parsing the output.
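
For example, to view the current settings in the JSON format with unscaled values (the instance name node is a placeholder):

pg_probackup show-config -B backup_dir --instance=node --format=json --no-scale-units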

To edit pg_probackup.conf, use the set-config command.

show #

pg_probackup show -B backup_dir
[--help] [--instance=instance_name [-i backup_id | --archive]] [--format=plain|json] [--no-color] [--show-symlinks] [s3_options]
[logging_options]

Shows the contents of the backup catalog. If instance_name and backup_id are specified, shows detailed information about this backup. If the --archive option is specified, shows the contents of the WAL archive of the backup catalog.

By default, the contents of the backup catalog are shown as plain text. You can specify the --format=json option to get the result in the JSON format. If the --no-color flag is used, the output is not colored.

If the --show-symlinks option is specified, the command also shows the links between merged backups and the original full backups that incremental backups were merged to.
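
For example, to view the WAL archive information for an instance (the instance name node is a placeholder):

pg_probackup show -B backup_dir --instance=node --archive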

For details on usage, see the sections Managing the Backup Catalog and Viewing WAL Archive Information.

backup #

pg_probackup backup -B backup_dir -b backup_mode --instance=instance_name
[--help] [-j num_threads] [--progress]
[--backup-threads num_threads] [--validate-threads num_threads]
[-C] [--stream [-S slot_name] [--temp-slot[=true|false|on|off]]] [--backup-pg-log]
[--no-validate] [--skip-block-validation]
[-w --no-password] [-W --password]
[--write-rate-limit=bitrate]
[--archive-timeout=timeout] [--external-dirs=external_directory_path]
[--no-sync] [--note=backup_note]
[connection_options] [compression_options] [remote_options]
[retention_options] [pinning_options] [logging_options] [s3_options]

Creates a backup copy of the Postgres Pro instance.

-b mode
--backup-mode=mode

Specifies the backup mode to use. Possible values are: FULL, DELTA, PAGE, and PTRACK.

--backup-threads num_threads

Specifies the number of threads for copying files. Overrides the -j/--threads option for file copying.

--validate-threads num_threads

Specifies the number of threads for the backup validation. Overrides the -j/--threads option for the backup validation.

-C
--smooth-checkpoint

Spreads out the checkpoint over a period of time. By default, pg_probackup tries to complete the checkpoint as soon as possible.

--stream

Makes a STREAM backup, which includes all the necessary WAL files by streaming them from the database server via replication protocol.

--temp-slot[=true|false|on|off]

Creates a temporary physical replication slot for streaming WAL from the backed up Postgres Pro instance. --temp-slot is enabled by default. It ensures that all the required WAL segments remain available if WAL is rotated while the backup is in progress. This flag can only be used together with the --stream flag. The default slot name is pg_probackup_slot. To change it, use the --slot/-S option and explicitly specify --temp-slot or --temp-slot=true|on.

-S slot_name
--slot=slot_name

Specifies the replication slot to connect to for WAL streaming. This option can only be used together with the --stream flag.
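
For example, a STREAM backup using a temporary replication slot with a custom name might be taken as follows (the instance name node and the slot name backup_slot are placeholders):

pg_probackup backup -B backup_dir --instance=node -b FULL --stream --temp-slot -S backup_slot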

--backup-pg-log

Includes the log directory into the backup. This directory usually contains log messages. By default, log directory is excluded.

-E external_directory_path
--external-dirs=external_directory_path

Includes the specified directory into the backup by recursively copying its contents into a separate subdirectory in the backup catalog. This option is useful to back up scripts, SQL dump files, and configuration files located outside of the data directory. If you would like to back up several external directories, separate their paths by a colon on Unix and a semicolon on Windows.
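
For example, on Unix, two external directories could be included as follows (the instance name node and the paths are placeholders):

pg_probackup backup -B backup_dir --instance=node -b FULL --external-dirs=/etc/postgresql/scripts:/etc/postgresql/conf.d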

--write-rate-limit=bitrate

Sets the rate of writing data to disk, in MBps or GBps. The default unit is MBps. For example: --write-rate-limit=1GBps or --write-rate-limit=100 (MBps). The default value is 0 — no limitation.

If this option is specified, the following information is displayed at the end of the backup:

  • written — the amount of data written, in MB.

  • total time — the time that elapsed between the first and last writes, in seconds. Note that this is not the total backup time.

  • sleep time — the amount of forced delay time, in seconds.

  • average rate — the actual average write rate, in MBps.

For example:

INFO: Rate limit: written 14975.445 MB, total time 17.163 s, sleep time 2.370 s, average rate 872.560715 MBps

--archive-timeout=wait_time

Sets the timeout for WAL segment archiving and streaming, in seconds. By default, pg_probackup waits 300 seconds.

--skip-block-validation

Disables block-level checksum verification to speed up the backup process.

--no-validate

Skips automatic validation after the backup is taken. You can use this flag if you validate backups regularly and would like to save time when running backup operations.

It is recommended to use this flag when creating a backup to an S3 storage. Due to some features of S3 storages, automatic validation may appear incorrect in this case. Skip automatic validation and then perform validation using a separate validate command.

--no-sync

Do not sync backed up files to disk. You can use this flag to speed up the backup process. Using this flag can result in data corruption in case of operating system or hardware crash. If you use this option, it is recommended to run the validate command once the backup is complete to detect possible issues.

--note=backup_note

Sets the text note for backup copy. If backup_note contains newline characters, then only the substring before the first newline character will be saved. The maximum size of a text note is 1 KB. The 'none' value removes the current note.

For more details of the command settings, see sections Common Options, Connection Options, Retention Options, Pinning Options, Remote Mode Options, Compression Options, Logging Options, and S3 Options.

For details on usage, see the section Creating a Backup.

restore #

pg_probackup restore -B backup_dir --instance=instance_name
[--help] [--dry-run] [-D data_dir] [-i backup_id]
[-j num_threads] [--progress]
[-T OLDDIR=NEWDIR] [--external-mapping=OLDDIR=NEWDIR] [--skip-external-dirs]
[-R | --restore-as-replica] [--no-validate] [--skip-block-validation]
[--force] [--no-sync]
[--restore-command=cmdline]
[--primary-conninfo=primary_conninfo]
[-S | --primary-slot-name=slot_name]
[-X wal_dir | --waldir=wal_dir]
[recovery_target_options] [logging_options] [remote_options]
[partial_restore_options] [remote_wal_archive_options] [s3_options]

Restores the Postgres Pro instance from a backup copy located in the backup_dir backup catalog. If you specify a recovery target option, pg_probackup finds the closest backup and restores it to the specified recovery target. If neither the backup ID nor recovery target options are provided, pg_probackup uses the most recent backup to perform the recovery.

-R
--restore-as-replica

Creates a minimal recovery configuration file to facilitate setting up a standby server. If the replication connection requires a password, you must specify the password manually in the primary_conninfo parameter, as it is not included. For Postgres Pro 11 or lower, recovery settings are written into the recovery.conf file. Starting from Postgres Pro 12, pg_probackup writes these settings into the probackup_recovery.conf file in the data directory and then includes them into postgresql.auto.conf when the cluster is started.

--primary-conninfo=primary_conninfo

Sets the primary_conninfo parameter to the specified value. This option will be ignored unless the -R flag is specified.

Example: --primary-conninfo="host=192.168.1.50 port=5432 user=foo password=foopass"

-S
--primary-slot-name=slot_name

Sets the primary_slot_name parameter to the specified value. This option will be ignored unless the -R flag is specified.

-T OLDDIR=NEWDIR
--tablespace-mapping=OLDDIR=NEWDIR

Relocates the tablespace from the OLDDIR to the NEWDIR directory at the time of recovery. Both OLDDIR and NEWDIR must be absolute paths. If the path contains the equals sign (=), escape it with a backslash. This option can be specified multiple times for multiple tablespaces.

--external-mapping=OLDDIR=NEWDIR

Relocates an external directory included into the backup from the OLDDIR to the NEWDIR directory at the time of recovery. Both OLDDIR and NEWDIR must be absolute paths. If the path contains the equals sign (=), escape it with a backslash. This option can be specified multiple times for multiple directories.

--skip-external-dirs

Skip external directories included into the backup with the --external-dirs option. The contents of these directories will not be restored.

--skip-block-validation

Disables block-level checksum verification to speed up validation. During automatic validation before the restore only file-level checksums will be verified.

--no-validate

Skips backup validation. You can use this flag if you validate backups regularly and would like to save time when running restore operations.

--restore-command=cmdline

Sets the restore_command parameter to the specified command. For example: --restore-command='cp /mnt/server/archivedir/%f "%p"'

--force

Allows to ignore an invalid status of the backup. You can use this flag if you need to restore the Postgres Pro cluster from a corrupt or an invalid backup. Use with caution. If PGDATA contains a non-empty directory with system ID different from that of the backup being restored, incremental restore with this flag overwrites the directory contents (while an error occurs without the flag). If tablespaces are remapped through the --tablespace-mapping option into non-empty directories, the contents of such directories will be deleted.

--no-sync

Do not sync restored files to disk. You can use this flag to speed up restore process. Using this flag can result in data corruption in case of operating system or hardware crash. If it happens, you have to run the restore command again.

-X wal_dir
--waldir=wal_dir

Sets the directory to write WAL files to. By default, WAL files are placed in the pg_wal subdirectory of the target directory, but this option can be used to place them elsewhere. wal_dir must be an absolute path to a directory that does not exist yet or, if it does exist, is empty.

For more details of the command settings, see sections Common Options, Recovery Target Options, Remote Mode Options, Remote WAL Archive Options, Logging Options, Partial Restore Options, and S3 Options.

For details on usage, see the section Restoring a Cluster.

checkdb #

pg_probackup checkdb
[-B backup_dir] [--instance=instance_name] [-D data_dir]
[--help] [-j num_threads] [--progress]
[--amcheck [--skip-block-validation] [--checkunique] [--heapallindexed]]
[connection_options] [logging_options]
[s3_options]

Verifies the Postgres Pro database cluster correctness by detecting physical and logical corruption.

For this command to work correctly when the backup instance was created in S3 storage, you must specify the S3 options on the command line or through environment variables.

--amcheck

Performs logical verification of indexes for the specified Postgres Pro instance if no corruption was found while checking data files. You must have the amcheck extension or the amcheck_next extension installed in the database to check its indexes. For databases without amcheck, index verification will be skipped. Additional options --checkunique and --heapallindexed are effective depending on the version of amcheck installed.

--checkunique

Verifies unique constraints during logical verification of indexes. You can use this flag only together with the --amcheck flag when the amcheck extension is installed in the database.

The verification of unique constraints is only possible if in the version of the amcheck extension you are using, the bt_index_check function takes the checkunique parameter.

--heapallindexed

Checks that all heap tuples that should be indexed are actually indexed. You can use this flag only together with the --amcheck flag.

This check is only possible if in the version of the amcheck/amcheck_next extension you are using, the bt_index_check function takes the heapallindexed parameter.

--skip-block-validation

Skip validation of data files. You can use this flag only together with the --amcheck flag, so that only logical verification of indexes is performed.
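
For example, to run logical verification of indexes only, skipping data file validation (the instance name node is a placeholder; a version of amcheck that supports the heapallindexed parameter is assumed):

pg_probackup checkdb -B backup_dir --instance=node -D data_dir --amcheck --skip-block-validation --heapallindexed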

For more details of the command settings, see sections Common Options, Connection Options, Logging Options, and S3 Options.

For details on usage, see the section Verifying a Cluster.

validate #

pg_probackup validate -B backup_dir
[--help] [--instance=instance_name] [-i backup_id]
[-j num_threads] [--progress] [--wal]
[--skip-block-validation]
[recovery_target_options] [logging_options] [s3_options]

Verifies that all the files required to restore the cluster are present and are not corrupt. If instance_name is not specified, pg_probackup validates all backups available in the backup catalog. If you specify the instance_name without any additional options, pg_probackup validates all the backups available for this backup instance. If you specify the instance_name with a recovery target option and/or a backup_id, pg_probackup checks whether it is possible to restore the cluster using these options. If the --wal option is specified, full check of the WAL archive will be performed instead of only checking WAL segments needed to restore the cluster.
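
For example, to check whether the cluster can be restored from a particular backup up to a point in time (the instance name node and the timestamp are placeholders):

pg_probackup validate -B backup_dir --instance=node -i backup_id --recovery-target-time="2027-04-09 18:21:32+00"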

For details, see the section Validating a Backup.

merge #

pg_probackup merge -B backup_dir --instance=instance_name -i backup_id
[--dry-run] [--help] [-j num_threads] [--progress] [--no-validate] [--no-sync]
[logging_options]

Merges backups that belong to a common incremental backup chain. If you specify a full backup, it will be merged with its first incremental backup. If you specify an incremental backup, it will be merged to its parent full backup, together with all incremental backups between them. Once the merge is complete, the full backup takes in all the merged data, and the incremental backups are removed as redundant.

--no-validate

Skips automatic validation before and after merge.

--no-sync

Do not sync merged files to disk. You can use this flag to speed up the merge process. Using this flag can result in data corruption in case of operating system or hardware crash.

For more details of the command settings, see sections Common Options and Merging Backups.

delete #

pg_probackup delete -B backup_dir --instance=instance_name
[--help] [-j num_threads] [--progress]
[--retention-redundancy=redundancy][--retention-window=window][--wal-depth=wal_depth] [--delete-wal]
{-i backup_id | --delete-expired [--merge-expired] | --merge-expired | --status=backup_status}
[--dry-run] [--no-validate] [--no-sync] [logging_options] [s3_options]

Deletes backup with specified backup_id or launches the retention purge of backups and archived WAL that do not satisfy the current retention policies.

--no-validate

Skips automatic validation before and after retention merge.

--no-sync

Do not sync merged files to disk. You can use this flag to speed up the retention merge process. Using this flag can result in data corruption in case of operating system or hardware crash.

For details, see the sections Deleting Backups, Retention Options, and Configuring Retention Policy.

archive-push #

pg_probackup archive-push -B backup_dir --instance=instance_name
--wal-file-name=wal_file_name [--wal-file-path=wal_file_path]
[--help] [--dry-run] [--no-sync] [--compress] [--no-ready-rename] [--overwrite]
[-j num_threads] [--batch-size=batch_size]
[--archive-timeout=timeout]
[--compress-algorithm=compression_algorithm]
[--compress-level=compression_level]
[remote_options] [logging_options] [s3_options]

Copies WAL files into the corresponding subdirectory of the backup catalog and validates the backup instance by instance_name and system-identifier. If parameters of the backup instance and the cluster do not match, this command fails with the following error message: Refuse to push WAL segment segment_name into archive. Instance parameters mismatch.

If the files to be copied already exist in the backup catalog, pg_probackup computes and compares their checksums. If the checksums match, archive-push skips the corresponding file and returns a successful execution code. Otherwise, archive-push fails with an error. If you would like to replace WAL files in the case of a checksum mismatch, run the archive-push command with the --overwrite flag.

Each file is copied to a temporary file with the .part suffix. If the temporary file already exists, pg_probackup will wait archive_timeout seconds before discarding it. After the copy is done, atomic rename is performed. This algorithm ensures that a failed archive-push will not stall continuous archiving and that concurrent archiving from multiple sources into a single WAL archive has no risk of archive corruption.

The Postgres Pro server requests WAL segments one at a time. To speed up archiving, you can specify the --batch-size option to copy WAL segments in batches of the specified size. If --batch-size option is used, then you can also specify the -j option to copy the batch of WAL segments on multiple threads.

WAL segments copied to the archive are synced to disk unless the --no-sync flag is used.

You can use archive-push in the archive_command Postgres Pro parameter to set up continuous WAL archiving.
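
For example, an archive_command setting in postgresql.conf might look like this (assuming pg_probackup is in the PATH; backup_dir and the instance name node are placeholders):

archive_command = 'pg_probackup archive-push -B backup_dir --instance=node --wal-file-name=%f'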

For more details of the command settings, see sections Common Options, Archiving Options, Compression Options, and S3 Options.

archive-get #

pg_probackup archive-get -B backup_dir --instance=instance_name --wal-file-path=wal_file_path --wal-file-name=wal_file_name
[-j num_threads] [--batch-size=batch_size]
[--prefetch-dir=prefetch_dir_path] [--no-validate-wal]
[--dry-run] [--help] [remote_options] [logging_options] [s3_options]

Copies WAL files from the corresponding subdirectory of the backup catalog to the cluster's write-ahead log location. This command is automatically set by pg_probackup as part of the restore_command when restoring backups using a WAL archive. You do not need to set it manually if you use local storage for backups or remote mode.
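
For illustration, a restore_command entry using archive-get might look like this (backup_dir and the instance name node are placeholders):

restore_command = 'pg_probackup archive-get -B backup_dir --instance=node --wal-file-path=%p --wal-file-name=%f'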

If you use S3 interface, to ensure that the Postgres Pro server has access to S3 storage to fetch WAL files during restore, you can specify the --s3-config-file option that defines the S3 configuration file with appropriate configuration settings, as described in the section called “S3 Options”.

The Postgres Pro server requests WAL segments one at a time. To speed up recovery, you can specify the --batch-size option to copy WAL segments in batches of the specified size. If --batch-size option is used, then you can also specify the -j option to copy the batch of WAL segments on multiple threads.

For more details of the command settings, see sections Common Options, Archiving Options, Compression Options, and S3 Options.

catchup #

pg_probackup catchup -b catchup_mode
--source-pgdata=path_to_pgdata_on_remote_server
--destination-pgdata=path_to_local_dir
[--help] [-j | --threads=num_threads] [--dry-run]
[--write-rate-limit=bitrate]
[--stream [--temp-slot[=true|false|on|off]] [-P | --perm-slot] [-S | --slot=slot_name]]
[--exclude-path=PATHNAME]
[-T OLDDIR=NEWDIR]
[-X | --waldir=wal_dir]
[connection_options] [remote_options]
[logging_options]

Creates a copy of a Postgres Pro instance without using the backup catalog.

-b catchup_mode
--backup-mode=catchup_mode

Specifies the catchup mode to use. Possible values are: FULL, DELTA, and PTRACK.

--source-pgdata=path_to_pgdata_on_remote_server

Specifies the path to the data directory of the instance to be copied. The path can be local or remote.

--destination-pgdata=path_to_local_dir

Specifies the path to the local data directory to copy to.

-j num_threads
--threads=num_threads

Sets the number of parallel threads for catchup process.

--stream

Copies the instance in STREAM WAL delivery mode, including all the necessary WAL files by streaming them from the server via replication protocol. For catchup, it is enabled by default.

--write-rate-limit=bitrate

Sets the rate of writing data to disk, in MBps or GBps. The default unit is MBps. For example: --write-rate-limit=1GBps or --write-rate-limit=100 (MBps). The default value is 0 — no limitation.

If this option is specified, the following information is displayed at the end of the catchup operation:

  • written — the amount of data written, in MB.

  • total time — the time that elapsed between the first and last writes, in seconds. Note that this is not the total duration of the catchup operation.

  • sleep time — the amount of forced delay time, in seconds.

  • average rate — the actual average write rate, in MBps.

For example:

INFO: Rate limit: written 14975.445 MB, total time 17.163 s, sleep time 2.370 s, average rate 872.560715 MBps

-x=path_prefix
--exclude-path=path_prefix

Specifies a prefix for files to exclude from the synchronization of Postgres Pro instances during copying. The prefix must contain a path relative to the data directory of an instance. If the prefix specifies a directory, none of the files in this directory are synchronized.

Warning

This option is dangerous since excluding files from synchronization can result in incomplete synchronization; use with care.

--temp-slot[=true|false|on|off]

Creates a temporary physical replication slot for streaming WAL from the Postgres Pro instance being copied. --temp-slot is enabled by default. It ensures that all the required WAL segments remain available if WAL is rotated while the backup is in progress. This flag can only be used together with the --stream flag and cannot be used together with the --perm-slot flag. The default slot name is pg_probackup_slot. To change it, use the --slot/-S option and explicitly specify --temp-slot or --temp-slot=true|on.

-P
--perm-slot

Creates a permanent physical replication slot for streaming WAL from the Postgres Pro instance being copied. This flag can only be used together with the --stream flag and cannot be used together with the --temp-slot flag. The default slot name is pg_probackup_perm_slot, which can be changed using the --slot/-S option.

-S slot_name
--slot=slot_name

Specifies the replication slot to connect to for WAL streaming. This option can only be used together with the --stream flag.

-T OLDDIR=NEWDIR
--tablespace-mapping=OLDDIR=NEWDIR

Relocates the tablespace from the OLDDIR to the NEWDIR directory at the time of recovery. Both OLDDIR and NEWDIR must be absolute paths. If the path contains the equals sign (=), escape it with a backslash. This option can be specified multiple times for multiple tablespaces.

-X wal_dir
--waldir=wal_dir

Sets the directory to write WAL files to. By default, WAL files are placed in the pg_wal subdirectory of the target directory, but this option can be used to place them elsewhere. wal_dir must be an absolute path to a directory that does not exist yet or, if it does exist, is empty; this is required to perform catchup in the FULL mode.

For more details of the command settings, see sections Common Options, Connection Options, and Remote Mode Options.

For details on usage, see the section Cloning and Synchronizing Postgres Pro Instance.

Options #

This section describes command-line options for pg_probackup commands. If the option value can be derived from an environment variable, this variable is specified below the command-line option, in uppercase. Some values can be taken from the pg_probackup.conf configuration file located in the backup catalog.

For details, see the section called “Configuring pg_probackup”.

If an option is specified using more than one method, command-line input has the highest priority, while the pg_probackup.conf settings have the lowest priority.

Common Options #

The list of general options.

--dry-run

Initiates a trial run of the appropriate command, which does not actually do any changes, that is, it does not create, delete or move files on disk. This flag allows you to check that all the command options are correct and the command is ready to run. WAL streaming is skipped with --dry-run.

-B directory
--backup-path=directory
BACKUP_PATH

Specifies the absolute path to the backup catalog. The backup catalog is a directory where all backup files and meta information are stored. Since this option is required for most pg_probackup commands, it is recommended to specify it once in the BACKUP_PATH environment variable. In this case, you do not need to use this option each time on the command line.
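
For example, with a hypothetical catalog located in /mnt/backups, you could set the variable once and omit the -B option afterwards (the instance name node is a placeholder):

export BACKUP_PATH=/mnt/backups
pg_probackup show --instance=node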

-D directory
--pgdata=directory
PGDATA

Specifies the absolute path to the data directory of the database cluster. This option is mandatory only for the add-instance command. Other commands can take its value from the PGDATA environment variable, or from the pg_probackup.conf configuration file.

-i backup_id
--backup-id=backup_id

Specifies the unique identifier of the backup.

-j num_threads
--threads=num_threads

Sets the number of parallel threads for backup, restore, merge, validate, checkdb, and archive-push processes.

--progress

Shows the progress of operations.

--help

Shows detailed information about the options that can be used with this command.

Recovery Target Options #

If continuous WAL archiving is configured, you can use one of these options together with restore or validate commands to specify the moment up to which the database cluster must be restored or validated.

--recovery-target=immediate|latest

Defines when to stop the recovery:

  • The immediate value stops the recovery after reaching the consistent state of the specified backup, or the latest available backup if the -i/--backup-id option is omitted. This is the default behavior for STREAM backups.

  • The latest value continues the recovery until all WAL segments available in the archive are applied. Setting this value of --recovery-target also sets --recovery-target-timeline to latest.

--recovery-target-timeline=timeline

Specifies a particular timeline to be used for recovery:

  • current — the timeline of the specified backup, default.

  • latest — the timeline of the latest available backup.

  • A numeric value.

--recovery-target-lsn=lsn

Specifies the LSN of the write-ahead log location up to which recovery will proceed.

--recovery-target-name=recovery_target_name

Specifies a named savepoint up to which to restore the cluster.

--recovery-target-time=time

Specifies the timestamp up to which recovery will proceed. If the time zone offset is not specified, the local time zone is used.

Example: --recovery-target-time="2027-04-09 18:21:32+00"

--recovery-target-xid=xid

Specifies the transaction ID up to which recovery will proceed.

--recovery-target-inclusive=boolean

Specifies whether to stop just after the specified recovery target (true), or just before the recovery target (false). This option can only be used together with --recovery-target-time, --recovery-target-lsn or --recovery-target-xid options. The default depends on the recovery_target_inclusive parameter.

--recovery-target-action=pause|promote|shutdown

Specifies recovery_target_action the server should take when the recovery target is reached.

Default: pause
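
For example, to restore the cluster up to a specific point in time and promote it once the target is reached (the instance name node and the timestamp are placeholders):

pg_probackup restore -B backup_dir --instance=node --recovery-target-time="2027-04-09 18:21:32+00" --recovery-target-action=promote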

Retention Options #

You can use these options together with backup and delete commands.

For details on configuring retention policy, see the section Configuring Retention Policy.

--retention-redundancy=redundancy

Specifies the number of full backup copies to keep in the data directory. Must be a non-negative integer. The zero value disables this setting.

Default: 0

--retention-window=window

Number of days of recoverability. Must be a non-negative integer. The zero value disables this setting.

Default: 0

--wal-depth=wal_depth

Number of latest valid backups on every timeline that must retain the ability to perform PITR. Must be a non-negative integer. The zero value disables this setting.

Default: 0

--delete-wal

Deletes WAL files that are no longer required to restore the cluster from any of the existing backups.

--delete-expired

Deletes backups that do not conform to the retention policy defined in the pg_probackup.conf configuration file.

--merge-expired

Merges the oldest incremental backup that satisfies the requirements of retention policy with its parent backups that have already expired.

--dry-run

Displays the current status of all the available backups, without deleting or merging expired backups, if any.

Pinning Options #

You can use these options together with backup and set-backup commands.

For details on backup pinning, see the section Backup Pinning.

--ttl=ttl

Specifies the amount of time the backup should be pinned. Must be a non-negative integer. The zero value unpins the already pinned backup. Supported units: ms, s, min, h, d (s by default).

Example: --ttl=30d

--expire-time=time

Specifies the timestamp up to which the backup will stay pinned. Must be an ISO-8601 compliant timestamp. If the time zone offset is not specified, the local time zone is used.

Example: --expire-time="2027-04-09 18:21:32+00"

Logging Options #

You can use these options with any command.

--no-color

Disable coloring for console log messages of warning and error levels.

--log-level-console=log_level

Controls which message levels are sent to the console log. Valid values are verbose, log, info, warning, error and off. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The off level disables console logging.

Default: info

Note

All console log messages are sent to stderr, so the output of the show and show-config commands does not mingle with log messages.

--log-level-file=log_level

Controls which message levels are sent to a log file. Valid values are verbose, log, info, warning, error, and off. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The off level disables file logging.

Default: off

--log-filename=log_filename

Defines the filenames of the created log files. The filenames are treated as a strftime pattern, so you can use %-escapes to specify time-varying filenames.

Default: pg_probackup.log

For example, if you specify the pg_probackup-%u.log pattern, pg_probackup generates a separate log file for each day of the week, with %u replaced by the corresponding decimal number: pg_probackup-1.log for Monday, pg_probackup-2.log for Tuesday, and so on.

This option takes effect if file logging is enabled by the --log-level-file option.

--error-log-filename=error_log_filename

Defines the filenames of log files for error messages only. The filenames are treated as a strftime pattern, so you can use %-escapes to specify time-varying filenames.

Default: none

For example, if you specify the error-pg_probackup-%u.log pattern, pg_probackup generates a separate log file for each day of the week, with %u replaced by the corresponding decimal number: error-pg_probackup-1.log for Monday, error-pg_probackup-2.log for Tuesday, and so on.

This option is useful for troubleshooting and monitoring.

--log-directory=log_directory

Defines the directory in which log files will be created. You must specify the absolute path. This directory is created lazily, when the first log message is written.

Note that the directory for log files is always created locally even if backups are created in the S3 storage. So be sure to pass a local path in log_directory when needed.

Default: $BACKUP_PATH/log/

--log-format-console=log_format

Defines the format of the console log. This option can only be set on the command line. Note that you cannot specify it in the pg_probackup.conf configuration file through the set-config command, and that the backup command also treats this option specified in the configuration file as an error. Possible values are:

  • plain — sets the plain-text format of the console log.

  • json — sets the JSON format of the console log.

Default: plain

--log-format-file=log_format

Defines the format of log files used. Possible values are:

  • plain — sets the plain-text format of log files.

  • json — sets the JSON format of log files.

Default: plain

--log-rotation-size=log_rotation_size

Maximum size of an individual log file. If this value is reached, the log file is rotated once a pg_probackup command is launched, except help and version commands. The zero value disables size-based rotation. Supported units: kB, MB, GB, TB (kB by default).

Default: 0

--log-rotation-age=log_rotation_age

Maximum lifetime of an individual log file. If this value is reached, the log file is rotated once a pg_probackup command is launched, except help and version commands. The time of the last log file creation is stored in $BACKUP_PATH/log/log_rotation. The zero value disables time-based rotation. Supported units: ms, s, min, h, d (min by default).

Default: 0
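
For example, file logging with per-weekday log files rotated after one day could be configured once with set-config (the instance name node is a placeholder):

pg_probackup set-config -B backup_dir --instance=node --log-level-file=info --log-filename=pg_probackup-%u.log --log-rotation-age=1d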

Connection Options #

You can use these options together with backup, catchup, and checkdb commands.

All libpq environment variables are supported.

-d dbname
--pgdatabase=dbname
PGDATABASE

Specifies the name of the database to connect to. The connection is used only for managing the backup process, so you can connect to any existing database. If this option is not provided on the command line, in the PGDATABASE environment variable, or in the pg_probackup.conf configuration file, pg_probackup tries to take this value from the PGUSER environment variable, or from the current user name if the PGUSER variable is not set.

-h host
--pghost=host
PGHOST

Specifies the host name of the system on which the server is running. If the value begins with a slash, it is used as a directory for the Unix domain socket.

Default: localhost

-p port
--pgport=port
PGPORT

Specifies the TCP port or the local Unix domain socket file extension on which the server is listening for connections.

Default: 5432

-U username
--pguser=username
PGUSER

User name to connect as.

-w
--no-password

Disables a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file or PGPASSWORD environment variable, the connection attempt will fail. This flag can be useful in batch jobs and scripts where no user is present to enter a password.

-W
--password

Forces a password prompt. (Deprecated)
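
For example, a backup connecting to a dedicated backup role without a password prompt might be started as follows (the host, database, and user names are placeholders):

pg_probackup backup -B backup_dir --instance=node -b FULL -h 192.168.0.2 -p 5432 -U backup_user -d backupdb -w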

Compression Options #

You can use these options together with backup and archive-push commands.

--compress-algorithm=compression_algorithm

Defines the algorithm to use for compressing data files. Possible values are zlib, lz4, zstd, pglz, and none. If set to any value, but none, this option enables compression that uses the corresponding algorithm. Both data files and WAL files are compressed. By default, compression is disabled. For the archive-push command, the pglz compression algorithm is not supported.

Note

pg_probackup supports compression algorithms included in the Postgres Pro version. In particular:

  • lz4 is supported for Postgres Pro Enterprise 13 and higher.

  • zstd is supported for Postgres Pro Enterprise 11 and higher.

Default: none

--compress-level=compression_level

Defines the compression level. This option can be used together with the --compress-algorithm option. Possible values depend on the compression algorithm specified:

  • 0 — 9 for zlib

  • 1 for pglz

  • 0 — 12 for lz4

  • 0 — 22 for zstd

The value of 0 sets the default compression level for the specified algorithm:

  • 6 for zlib

  • 1 for pglz

  • 9 for lz4

  • 3 for zstd

Note

The pure lz4 algorithm has only one compression level — 1. So, if the specified compression algorithm is lz4 and --compress-level is greater than 1, the lz4hc algorithm is actually used, which is much slower although it provides better compression.

Default: 1

--compress

Specifies the default compression algorithm and --compress-level=1. The default algorithm is selected among those supported by Postgres Pro according to the priorities: zstd (highest) -> lz4 -> zlib -> pglz. The --compress option overrides the --compress-algorithm and --compress-level settings and cannot be specified together with them.
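
For example, a full backup can be compressed with an explicitly chosen algorithm and level, or with the default algorithm selected by --compress (the instance name node is a placeholder):

pg_probackup backup -B backup_dir --instance=node -b FULL --compress-algorithm=zstd --compress-level=5
pg_probackup backup -B backup_dir --instance=node -b FULL --compress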

Archiving Options #

These options can be used with the archive-push command in the archive_command setting and the archive-get command in the restore_command setting.

Additionally, remote mode options and logging options can be used.

--wal-file-path=wal_file_path

Provides the path to the WAL file in archive_command and restore_command. Use the %p variable as the value for this option or explicitly specify the path to a file outside of the data directory. If you skip this option, the path specified in pg_probackup.conf will be used.

--wal-file-name=wal_file_name

Provides the name of the WAL file in archive_command and restore_command. Use the %f variable as the value for this option for correct processing. If the value of --wal-file-path is a path outside of the data directory, explicitly specify the filename.

--overwrite

Overwrites archived WAL file. Use this flag together with the archive-push command if the specified subdirectory of the backup catalog already contains this WAL file and it needs to be replaced with its newer copy. Otherwise, archive-push reports that a WAL segment already exists, and aborts the operation. If the file to replace has not changed, archive-push skips this file regardless of the --overwrite flag.

--batch-size=batch_size

Used to speed up archiving in case of archive-push or to speed up recovery in case of archive-get. Sets the maximum number of WAL files that can be copied into the archive by a single archive-push process, or from the archive by a single archive-get process.

--archive-timeout=wait_time

Sets the timeout for considering existing .part files to be stale. By default, pg_probackup waits 300 seconds. This option can be used only with archive-push command.

--no-ready-rename

Do not rename status files in the archive_status directory. This option should be used only if archive_command contains multiple commands. This option can be used only with archive-push command.

--no-sync

Do not sync copied WAL files to disk. You can use this flag to speed up archiving process. Using this flag can result in WAL archive corruption in case of operating system or hardware crash. This option can be used only with archive-push command.

--prefetch-dir=path

Directory used to store prefetched WAL segments if the --batch-size option is used. The directory must be located on the same filesystem and mount point as PGDATA/pg_wal. By default, files are stored in the PGDATA/pg_wal/pbk_prefetch directory. This option can be used only with archive-get command.

--no-validate-wal

Do not validate prefetched WAL file before using it. Use this option if you want to increase the speed of recovery. This option can be used only with archive-get command.

Remote Mode Options #

This section describes the options related to running pg_probackup operations remotely via SSH. These options can be used with add-instance, set-config, backup, catchup, restore, archive-push, and archive-get commands.

For details on configuring and using the remote mode, see the section called “Configuring the Remote Mode” and the section called “Using pg_probackup in the Remote Mode”.

--remote-proto=proto

Specifies the protocol to use for remote operations. Currently only the SSH protocol is supported. Possible values are:

  • ssh enables the remote mode via SSH. This is the default value.

  • none explicitly disables the remote mode.

You can omit this option if the --remote-host option is specified.

--remote-host=destination

Specifies the remote host IP address or hostname to connect to.

--remote-port=port

Specifies the remote host port to connect to.

Default: 22

--remote-user=username

Specifies remote host user for SSH connection. If you omit this option, the current user initiating the SSH connection is used.

--remote-path=path

Specifies pg_probackup installation directory on the remote system.

--ssh-options=ssh_options

Provides a string of SSH command-line options. For example, the following options can be used to set keep-alive for SSH connections opened by pg_probackup: --ssh-options="-o ServerAliveCountMax=5 -o ServerAliveInterval=60". For the full list of possible options, see ssh_config manual page.
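
For example, a backup taken from a remote server over SSH with keep-alive settings might look like this (the host and user names are placeholders):

pg_probackup backup -B backup_dir --instance=node -b DELTA --stream --remote-host=192.168.0.3 --remote-user=postgres --ssh-options="-o ServerAliveCountMax=5 -o ServerAliveInterval=60"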

Remote WAL Archive Options #

This section describes the options used to provide arguments for the remote mode options of the archive-get command, which is used in restore_command when restoring ARCHIVE backups or performing PITR.

--archive-host=destination

Provides the argument for the --remote-host option in the archive-get command.

--archive-port=port

Provides the argument for the --remote-port option in the archive-get command.

Default: 22

--archive-user=username

Provides the argument for the --remote-user option in the archive-get command. If you omit this option, the user that has started the Postgres Pro cluster is used.

Default: Postgres Pro user
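
For example, a restore that fetches WAL files from a remote archive host might be run as follows (the host and user names are placeholders):

pg_probackup restore -B backup_dir --instance=node --archive-host=192.168.0.4 --archive-user=backup_user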

Incremental Restore Options #

This section describes the options for incremental cluster restore. These options can be used with the restore command.

-I incremental_mode
--incremental-mode=incremental_mode

Specifies the incremental mode to be used. Possible values are:

  • CHECKSUM — replace only pages with mismatched checksum and LSN.

  • LSN — replace only pages with LSN greater than point of divergence.

  • NONE — regular restore.
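
For example, an incremental restore that reuses pages with matching checksums might be run as follows (the instance name node is a placeholder):

pg_probackup restore -B backup_dir --instance=node -I CHECKSUM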

Partial Restore Options #

This section describes the options for partial cluster restore. These options can be used with the restore command.

--db-exclude=dbname

Specifies the name of the database to exclude from restore. All other databases in the cluster will be restored as usual, including template0 and template1. This option can be specified multiple times for multiple databases.

--db-include=dbname

Specifies the name of the database to restore from a backup. All other databases in the cluster will not be restored, with the exception of template0 and template1. This option can be specified multiple times for multiple databases.
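
For example, to restore only two databases from a backup (the instance and database names are placeholders):

pg_probackup restore -B backup_dir --instance=node --db-include=db1 --db-include=db2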

S3 Options #

This section describes the options needed to store backups in private clouds. These options can be used with any command that pg_probackup runs using the S3 interface.

--s3=s3_interface_provider

Specifies the S3 interface provider. Possible values are:

  • minio — MinIO object storage, compatible with S3 cloud storage service. With this provider, custom S3 server settings can be specified. The HTTP protocol, port 9000, and region us-east-1 are used by default.

  • vk — VK Cloud storage. With this provider, the S3 host address hb.vkcs.cloud, port 443, and HTTPS protocol are only used. Custom values of the host, port, and protocol are ignored. The default value of region is ru-msk.

  • aws — Amazon S3 storage, offered by Amazon Web Services (AWS). With this provider, the S3 host address bucket_name.s3.region.amazonaws.com, port 443, and HTTPS protocol are only used. Custom values of the host, port, and protocol are ignored. The default value of region is us-east-1.

With --s3=minio, pg_probackup also works with VK Cloud storage if the S3 host address, port, and protocol are properly specified (the host address is hb.vkcs.cloud or the one specified in the appropriate section of the VK Cloud profile, the port is 443, and the protocol is HTTPS). Do not specify --s3=minio for the Amazon S3 storage.

Once a pg_probackup command runs with the --s3 option, pg_probackup starts running all commands that support parallel execution on 10 parallel threads (for details, see the section called “Running pg_probackup on Parallel Threads”). You can change the number of threads using the -j/--threads option.

--s3-config-file=path_to_config_file

Specifies the S3 configuration file. Settings in the configuration file override the environment variables. If this option is not specified, pg_probackup first looks for the S3 configuration file at /etc/pg_probackup/s3.config and then at ~postgres/.pg_probackup/s3.config. The following is an example of the S3 configuration file:

access-key = ...
secret-key = ...
s3-host = localhost
s3-port = 9000
s3-bucket = s3demo
s3-region=us-east-1
s3-buffer-size = 32
s3-secure = on | https | http | off
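
For example, assuming the configuration file above is stored at /etc/pg_probackup/s3.config, a backup to a MinIO-compatible storage might be taken as follows (the instance name node is a placeholder):

pg_probackup backup -B backup_dir --instance=node -b FULL --s3=minio --s3-config-file=/etc/pg_probackup/s3.config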

Testing and Debugging Options #

This section describes options useful only in a test or development environment.

--cfs-nondatafile-mode

Instructs the backup command to back up CFS in legacy mode. This allows fine-tuning compatibility with pg_probackup versions earlier than 2.6.0. This option is mainly designed for testing.

PGPROBACKUP_TESTS_SKIP_HIDDEN

Instructs pg_probackup to ignore backups marked as hidden. Note that pg_probackup can never mark a backup as hidden. It can only be done by directly editing the backup.control file. This option can only be set with environment variables.

--destroy-all-other-dbs

By default, pg_probackup exits with an error if an attempt is made to perform a partial incremental restore since this destroys databases not included in the restore set. This flag allows you to suppress the error and proceed with the partial incremental restore (e.g., to keep a development database snapshot up-to-date with a production one). This option can be used with the restore command.

Important

Never use this flag in a production cluster.

PGPROBACKUP_TESTS_SKIP_EMPTY_COMMIT

Instructs pg_probackup to skip empty commits after pg_backup_stop.

Versioning #

pg_probackup follows semantic versioning.

Authors #

Postgres Professional, Moscow, Russia.

Credits #

pg_probackup utility is based on pg_arman, which was originally written by NTT and then developed and maintained by Michael Paquier.