Re: PITR Backups

From: Koichi Suzuki
Subject: Re: PITR Backups
Msg-id: 467F938F.1030708@oss.ntt.co.jp
In response to: Re: PITR Backups ("Simon Riggs" <simon@2ndquadrant.com>)
List: pgsql-performance
Hi,

Yeah, I agree we should look carefully at how Dan actually took his
backup. My point is that PostgreSQL may have to extend a file during the
hot backup in order to write a new block. This is slightly different
from Oracle's case: Oracle allocates all of the database space in
advance, so there is no risk of modifying filesystem metadata on the
fly. In our case, because a SAN-based storage snapshot is taken at the
device level, not the filesystem level, even the filesystem does not
know that the snapshot is being taken, and we might hit a case where
metadata and/or user data are not consistent. Such a snapshot (of the
whole filesystem) might be corrupted and cause filesystem-level errors.
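
For what it's worth, one way to narrow this window is to quiesce the
filesystem around the snapshot, so its metadata is flushed and frozen
before the device-level copy starts. A rough sketch, assuming the data
directory sits on XFS over LVM (the mount point, volume names, and
snapshot size below are made up):

  # Freeze the filesystem: blocks new writes and flushes fs metadata.
  xfs_freeze -f /var/lib/pgsql/data

  # Take the device-level snapshot while the filesystem is quiescent.
  lvcreate -s -n pgsnap -L 10G /dev/vg0/pgdata

  # Thaw the filesystem and let writes resume.
  xfs_freeze -u /var/lib/pgsql/data

Even with a freeze, the snapshot should still be bracketed by
pg_start_backup()/pg_stop_backup() so that WAL replay can repair
whatever PostgreSQL itself had in flight.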

I'm interested in this. Any further comments/opinions are welcome.

Regards,

Simon Riggs wrote:
> On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:
>> Tom Lane wrote:
>>> Dan Gorman <dgorman@hi5.com> writes:
>>>>    All of our databases are on NetApp storage and I have been looking
>>>> at SnapMirror (PITR RO copy ) and FlexClone (near instant RW volume
>>>> replica) for backing up our databases. The problem is because there
>>>> is no write-suspend or even a 'hot backup mode' for postgres it's
>>>> very plausible that the database has data in RAM that hasn't been
>>>> written and will corrupt the data.
>>> Alternatively, you can use a PITR base backup as suggested here:
>>> http://www.postgresql.org/docs/8.2/static/continuous-archiving.html
>> I think Dan's problem is important when PostgreSQL is used with a large database:
>>
>> - When we take a PITR base backup with a hardware-level snapshot operation
>>   (not filesystem level), which many storage vendors provide, the backup data
>>   can be corrupted, as Dan said. During recovery we can't even read it,
>>   especially if the metadata was corrupted.
>>
>> - If we don't use a hardware-level snapshot operation, taking a large base
>>   backup takes a long time, and a lot of full-page-write WAL files are generated.
>>
>> So I think users need a new feature that avoids writing out heap pages while
>> a backup is being taken.
>
> Your worries are unwarranted, IMHO. It appears Dan was taking a snapshot
> without having read the procedure as clearly outlined in the manual.
>
> pg_start_backup() flushes all currently dirty blocks to disk as part of
> a checkpoint. If you snapshot after that point, then you will have all
> the data blocks required from which to correctly roll forward. On its
> own, the snapshot is an inconsistent backup and will give errors as Dan
> shows. It is only when the snapshot is used as the base backup in a full
> continuous recovery that the inconsistencies are removed and the
> database is fully and correctly restored.
>
> pg_start_backup() is the direct analogue of Oracle's ALTER DATABASE
> BEGIN BACKUP. Snapshots work with Oracle too, in much the same way.
>
> After reviewing the manual, if you honestly think there is a problem,
> please let me know and I'll work with you to investigate.
>
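
For reference, the procedure described in the manual boils down to
something like the following. This is a minimal sketch against 8.2; the
backup label, mount point, and archive path are placeholders, not
anything from Dan's setup:

  $ psql -c "SELECT pg_start_backup('base_backup');"
  $ # ... take the storage-level snapshot of the data directory here ...
  $ psql -c "SELECT pg_stop_backup();"

To restore, copy the snapshot back into place and point recovery.conf
at the WAL archive:

  restore_command = 'cp /mnt/archive/%f "%p"'

The snapshot alone is inconsistent, as Dan saw; it is the roll-forward
from archived WAL that turns it into a valid backup.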


--
-------------
Koichi Suzuki
