Re: Stuck trying to backup large database - best practice? - Mailing list pgsql-general

From Joseph Kregloh
Subject Re: Stuck trying to backup large database - best practice?
Date
Msg-id CAAW2xfcUZBvbpRO7HDrcTYNeT8FpRzT+-+KZ08vzByNXN76Cwg@mail.gmail.com
In response to Re: Stuck trying to backup large database - best practice?  (Antony Gelberg <antony.gelberg@gmail.com>)
List pgsql-general
I apologize if this has already been suggested; I've already deleted the earlier emails in this thread.

Have you looked into Barman? My current database is just a tad over 1TB. I have one master, two slaves, and another machine running Barman. The slaves are there for redundancy: if the master fails, a slave gets promoted. The backups are all done by Barman, which allows for PITR. I don't run any backup software on the database server itself, only on the Barman server. In Barman I keep a 7-day retention policy, and Bacula then backs the Barman server up with a 1-month retention policy, so in theory I can do a PITR up to a month in the past.
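
A minimal setup along these lines might look like the sketch below. The server name "main", the host names, the paths, and the example timestamp are assumptions for illustration, not the actual setup described above:

    # /etc/barman.conf on the Barman host would contain something like:
    #
    #   [main]
    #   ssh_command = ssh postgres@db-master
    #   conninfo = host=db-master user=postgres
    #   retention_policy = RECOVERY WINDOW OF 7 DAYS
    #
    # With that in place, the nightly cron job boils down to:
    barman backup main

    # ...and a point-in-time restore of the latest backup onto a spare
    # machine's data directory:
    barman recover --target-time "2015-01-10 12:00:00" main latest /var/lib/pgsql/data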

Thanks,
-Joseph Kregloh

On Mon, Jan 12, 2015 at 5:16 PM, Antony Gelberg <antony.gelberg@gmail.com> wrote:
On Mon, Jan 12, 2015 at 7:08 PM, Adrian Klaver
<adrian.klaver@aklaver.com> wrote:
>
> On 01/12/2015 08:40 AM, Antony Gelberg wrote:
>>
>> On Mon, Jan 12, 2015 at 6:23 PM, Adrian Klaver
>> <adrian.klaver@aklaver.com> wrote:
>>>
>>> On 01/12/2015 08:10 AM, Antony Gelberg wrote:
>>>>
>>>> On Mon, Jan 12, 2015 at 5:31 PM, Adrian Klaver
>>>> <adrian.klaver@aklaver.com> wrote:
>>> pg_basebackup has additional features which in your case are creating
>>> issues. pg_dump, on the other hand, is pretty much a straightforward data
>>> dump, and if you use -Fc you get compression.
>>
>>
>> So I should clarify - we want to be able to get back to the same point
>> as we would once the WAL was applied.  If we were to use pg_dump,
>> would we lose out in any way?
>
>
> pg_dump does not save WALs, so it would not work for that purpose.
>
>> Appreciate insight as to how
>> pg_basebackup is scuppering things.
>
>
> From the original post it is not entirely clear whether you are using the -X or -x options. The command you show does not have them, but you mention -Xs. In any case it seems wal_keep_segments will need to be bumped up to keep WAL segments around that are being recycled during the backup process. How much depends on how fast Postgres is using and recycling WAL segments; looking at the turnover in the pg_xlog directory would be a start.
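
(For context, the dump-and-compress route mentioned in the quoted text above would look roughly like the sketch below; the database name "mydb" and the output path are assumptions. As noted above, this does not capture WAL, so it cannot be used for PITR:

    # Custom-format dump; -Fc compresses as it writes:
    pg_dump -Fc -f /backups/mydb.dump mydb

    # Custom-format dumps are restored with pg_restore:
    pg_restore -d mydb_restored /backups/mydb.dump
)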

The original script used -xs, which didn't make sense, so we switched
to -Xs, but we then cancelled the backup because we assumed we wouldn't
have enough space for it uncompressed.  Did we miss something?
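
(One way to avoid ever holding an uncompressed copy on disk is
tar-format output with built-in gzip. A sketch only; the options should
be checked against the server version in use, since on 9.4-era servers
-X stream is not available in tar mode, so WAL is fetched at the end.
The destination /backups/base is an assumed path:

    # Compressed base backup; the WAL needed for consistency is
    # collected at the end of the backup with -X fetch:
    pg_basebackup -D /backups/base -F tar -z -X fetch -P
)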

I think your suggestion of looking in pg_xlog and tweaking
wal_keep_segments is interesting; we'll take a look, and I'll report
back with findings.
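
(A sketch of what taking a look might involve; the paths assume a
default 9.x layout under $PGDATA, and the 256 figure is an arbitrary
example, not a recommendation:

    # Watch how quickly the 16MB WAL segments are recycled:
    ls -lt $PGDATA/pg_xlog | head

    # If segments turn over faster than the base backup runs, raise
    # wal_keep_segments in postgresql.conf and reload, e.g.:
    #   wal_keep_segments = 256   # keeps ~4GB of WAL around
    pg_ctl reload -D $PGDATA
)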

Thanks for your very detailed help.

Antony


