Re: Backup PostgreSQL from RDS straight to S3 - Mailing list pgsql-general

From Steven Lembark
Subject Re: Backup PostgreSQL from RDS straight to S3
Date
Msg-id 20190919010344.7f7c9bb7.lembark@wrkhors.com
In response to Backup PostgreSQL from RDS straight to S3  (Anthony DeBarros <adebarros@gmail.com>)
List pgsql-general
s3fs, available on Linux, allows mounting S3 directly as a local
filesystem (a sample mount is sketched after the loop below). At that
point something like:

  pg_dump ... | gzip -9 -c > /mnt/s3-mount-point/$basename.pg_dump.gz;

will do the deed nicely. If your S3 volume is something like
your_name_here.com/pg_dump then you could parallelize it by dumping
separate databases into URLs based on the date and database name:

    tstamp=$(date +%Y.%m.%d-%H.%M.%S);

    gzip='/bin/gzip -9 -v';
    dump='/opt/postgres/bin/pg_dump -blah -blah -blah';

    # the timestamped directory has to exist before the redirects open it.
    mkdir -p /mnt/pg-backups/$tstamp;

    for i in your database list
    do
        echo "Dump: '$i'";
        $dump $i | $gzip > /mnt/pg-backups/$tstamp/$i.dump.gz &
    done

    # at this point however many databases are dumping...

    wait;

    echo "Goodnight.";
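
For reference, the kind of s3fs mount assumed above might look
something like this -- the bucket name, mount point, and credentials
file here are placeholders, so adjust them to your setup:

    # minimal sketch: mount the bucket that backs /mnt/pg-backups.
    mkdir -p /mnt/pg-backups;
    s3fs your-name-here-backups /mnt/pg-backups \
        -o passwd_file=${HOME}/.passwd-s3fs \
        -o use_cache=/tmp;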

If you prefer to keep only a few database backups (e.g., a rolling
weekly history) then use the day-of-week for the tstamp; if you
want to keep fewer than that, $(( $(date +%s) / 86400 % $num_backups ))
will do (leap seconds notwithstanding).
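
For example, either rolling scheme drops into the script above by
swapping the tstamp line; $num_backups below is whatever window size
you want (a sketch, not something tested against your setup):

    # rolling seven-slot window keyed on the day of week:
    tstamp=$(date +%a);

    # or a rolling N-slot window keyed on days since the epoch:
    num_backups=3;
    tstamp=$(( $(date +%s) / 86400 % $num_backups ));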

Check rates to see which AWS location is cheapest for the storage
and the processing to gzip the content. Also check the CPU charges for
zipping vs. storing the data -- it may be cheaper in the long run
to use "gzip --fast" on smaller, more repetitive content than
to pay the extra CPU charges for "gzip --best".
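
One way to see where that tradeoff lands for your own data is to time
both levels against a single representative dump ("sampledb" here is
a placeholder):

    # compare CPU time and output size for --fast vs. --best.
    pg_dump sampledb > /tmp/sample.dump;

    for level in --fast --best
    do
        echo "gzip $level";
        time gzip $level -c /tmp/sample.dump > /tmp/sample$level.gz;
        ls -lh /tmp/sample$level.gz;
    done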

-- 
Steven Lembark                                        3646 Flora Place
Workhorse Computing                                St. Louis, MO 63110
lembark@wrkhors.com                                    +1 888 359 3508


