On Tue, 2010-04-27 at 12:14 -0400, Merlin Moncure wrote:
> On Tue, Apr 27, 2010 at 11:13 AM, Kevin Grittner
> <Kevin.Grittner@wicourts.gov> wrote:
> > Merlin Moncure <mmoncure@gmail.com> wrote:
> >
> >> The proposal only seems a win to me if a fair percentage of the
> >> larger files don't change, which strikes me as a relatively low
> >> level case to optimize for.
> >
> > That's certainly a situation we face, with a relatively slow WAN in
> > the middle.
> >
> > http://archives.postgresql.org/pgsql-admin/2009-07/msg00071.php
> >
> > I don't know how rare or common that is.
>
> hm...interesting read. pretty clever. Your archiving requirements are high.
>
> With the new stuff (HS/SR) taken into consideration, would you have
> done your DR the same way if you had to do it all over again?
>
> Part of my concern here is that manual filesystem level backups are
> going to become an increasingly arcane method of doing things as the
> HS/SR train starts leaving the station.
Actually, HS/SR argues _for_ adding explicit change dates to files: the
mod times on the slave side will differ from the master's, and you may
still want to know when the table was really last modified.
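For illustration, here is a rough sketch of what you can get today from
file mtimes alone (untested; assumes psycopg2, that the script runs on
the cluster host with read access to the data directory, and a made-up
table name 'my_table'). On a standby the mtime it prints reflects WAL
replay activity, not when the table was logically modified on the
master, which is exactly the problem:

    # Sketch: read a table's on-disk mtime via pg_relation_filepath().
    # On a standby this tracks replay, not logical modification time.
    import os
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")
    cur = conn.cursor()
    cur.execute("SHOW data_directory")
    datadir, = cur.fetchone()
    cur.execute("SELECT pg_relation_filepath('my_table')")  # made-up name
    relpath, = cur.fetchone()
    print("file-level mtime:",
          os.stat(os.path.join(datadir, relpath)).st_mtime)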
>
> hm, it would be pretty neat to see some of the things you do pushed
> into logical (pg_dump) style backups...with some enhancements so that
> it can skip tables that haven't changed and are already present in a
> previously supplied dump. This is more complicated but maybe more
> useful for a broader audience?
Yes, I see the main value of this for pg_dump backups; physical files
already carry this information in the form of ctime/mtime/atime.
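To make the pg_dump idea concrete, a hypothetical sketch (untested;
STATE_FILE and the 'public'-schema filter are made up, and the file
mtimes here only stand in for the explicit change dates discussed
above, since hint-bit writes, vacuum and standby replay all move them):

    # Sketch: remember each table's file mtime from the previous run
    # and re-dump only the tables whose files have changed since then.
    import json, os, subprocess
    import psycopg2

    STATE_FILE = "last_dump_mtimes.json"  # made-up state file
    old = json.load(open(STATE_FILE)) if os.path.exists(STATE_FILE) else {}

    conn = psycopg2.connect("dbname=postgres")
    cur = conn.cursor()
    cur.execute("SHOW data_directory")
    datadir, = cur.fetchone()
    # Ordinary tables in the public schema and their on-disk paths
    cur.execute("""SELECT c.relname, pg_relation_filepath(c.oid)
                   FROM pg_class c
                   JOIN pg_namespace n ON n.oid = c.relnamespace
                   WHERE c.relkind = 'r' AND n.nspname = 'public'""")
    new = {}
    for relname, relpath in cur.fetchall():
        new[relname] = os.stat(os.path.join(datadir, relpath)).st_mtime
        if old.get(relname) != new[relname]:  # changed or new since last run
            subprocess.check_call(["pg_dump", "-t", relname,
                                   "-f", relname + ".sql", "postgres"])
    json.dump(new, open(STATE_FILE, "w"))

With real per-table change dates in the catalog, the mtime comparison
above would become a simple SQL query, which is the point.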
>
> Side question: is it impractical to backup via pg_dump a hot standby
> because of query conflict issues?
>
> merlin
>
--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability Services, Consulting and Training