Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues) - Mailing list pgsql-admin

From: Ron Johnson
Subject: Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues)
Date:
Msg-id: CANzqJaC-h5Bpt_Fa0dTT7wSUqYZDeLm4_K32T2groTPjh2p3mQ@mail.gmail.com
In response to: Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues) (Thomas Simpson <ts@talentstack.to>)
List: pgsql-admin
On Fri, Jul 19, 2024 at 10:19 PM Thomas Simpson <ts@talentstack.to> wrote:

Hi Doug

On 19-Jul-2024 17:21, Doug Reynolds wrote:
Thomas—

Why are you using logical backups for a database this large rather than a solution like pgBackRest? Obviously a logical dump makes sense if you are going to upgrade, but for operational use it seems like a slow choice.

In normal operation the server runs as a primary/replica pair and pgBackRest handles backups.

Expire the oldest pgBackRest backup to free up space for a multi-threaded pg_dump.
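
Roughly something like this (just a sketch: the stanza name, database name, dump path and retention value are placeholders, not your actual settings):

# Keep only the newest full backup in the repository; adjust retention to taste.
pgbackrest expire --stanza=localhost --repo1-retention-full=1

# A parallel pg_dump requires the directory format; leave a couple of cores free.
pg_dump --format=directory --jobs=$(( $(nproc) - 2 )) \
        --file=/some/big/volume/mydb.dump mydb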

Right when disk space ran out, pgBackRest also took a backup during the failed vacuum, so going back to it (or anything earlier) would roll the WALs forward to the present and put me right back where I am now, running out of space part way through.


Who says you have to restore to the failure point?  That's what the "--target" option is for.

For example, if you took a full backup on 7/14 at midnight and want to restore to 7/18 23:00, run:
declare LL=detail
declare PGData=/path/to/data
declare -i Threads=$(nproc)-2        # leave a couple of cores free
declare BackupSet=20240714-000003F
declare RestoreUntil="2024-07-18 23:00"
pgbackrest restore \
    --stanza=localhost \
    --log-level-file=$LL \
    --log-level-console=$LL \
    --process-max=${Threads} \
    --pg1-path=$PGData \
    --set=$BackupSet \
    --type=time --target="${RestoreUntil}"
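
If you're not sure which backup set label to pass to --set, pgbackrest info lists every backup in the repository with its label and WAL range (the stanza name here is a placeholder):

pgbackrest info --stanza=localhost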

 
