On Fri, 2006-04-14 at 17:31, Tom Lane wrote:
> Hannu Krosing <hannu@skype.net> writes:
> > On Fri, 2006-04-14 at 16:40, Tom Lane wrote:
> >> If the backup-taker reads, say, 4K at a time then it's
> >> certainly possible that it gets a later version of the second half of a
> >> page than it got of the first half. I don't know about you, but I sure
> >> don't feel comfortable making assumptions at that level about the
> >> behavior of tar or cpio.
> >>
> >> I fear we still have to disable full_page_writes (force it ON) if
> >> XLogArchivingActive is on. Comments?
>
> > Why not just tell the backup-taker to take backups using 8K pages ?
>
> How?
Use find + dd, or whatever. I just don't want the feature to be made
universally unavailable just because some users *might* use a
file/disk-level backup solution which is incompatible.
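To illustrate, the find + dd idea could look roughly like the sketch below. The directory names are placeholders invented for the demo (a throwaway source tree stands in for the live data directory); the point is `bs=8k`, which makes dd read regular files in whole 8K chunks so no single read straddles a PostgreSQL page boundary. This is a sketch of the idea, not a supported backup tool:

```shell
# Throwaway demo directories (in real use SRC would be the data directory)
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Fake a relation file consisting of two 8K "pages"
mkdir -p "$SRC/base/1"
dd if=/dev/zero of="$SRC/base/1/1234" bs=8k count=2 2>/dev/null

# Copy every file in whole 8K blocks, preserving the directory layout
( cd "$SRC" && find . -type f ) | while read -r f; do
    mkdir -p "$DST/$(dirname "$f")"
    dd if="$SRC/$f" of="$DST/$f" bs=8k 2>/dev/null
done

cmp -s "$SRC/base/1/1234" "$DST/base/1/1234" && echo "copy matches"
```

Note that 8K-sized reads only line up with pages when they start at an 8K boundary, which dd on a regular file guarantees; a generic archiver reading in arbitrary 4K chunks gives no such guarantee, which is exactly Tom's objection.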
> (No, I don't think tar's blocksize options control this
> necessarily --- those indicate the blocking factor on the *tape*.
> And not everyone uses tar anyway.)
If I'm desperate enough to get the 2x reduction of WAL writes, I may
even write my own backup solution.
> Even if this would work for all popular backup programs, it seems
> far too fragile: the consequence of forgetting the switch would be
> silent data corruption, which you might not notice until the slave
> had been in live operation for some time.
We could declare only one backup solution to be supported by us with
XLogArchivingActive, say a GNU tar modified to read in Nx8K blocks
( pg_tar :p ).
I guess that even if we can control what the operating system does, it is
still possible to get a torn page using some SAN solution, where you can
freeze the image for backup independently of the OS.
----------------
Hannu