Re: How to know if a database has changed - Mailing list pgsql-general

From Sam Gendler
Subject Re: How to know if a database has changed
Date
Msg-id CAEV0TzBFhyFxBwSzYN=tjs+b8S5KS1oA4c+Fub69-RUwmgqe+Q@mail.gmail.com
In response to Re: How to know if a database has changed  (Andreas Kretschmer <andreas@a-kretschmer.de>)
Responses Re: How to know if a database has changed
List pgsql-general
I think there's a more useful question here: why do you want to do this in the first place? If it's just about conditional backups, surely the cost of backup storage is low enough, even in S3 or the like, that a duplicate backup is an afterthought from a cost perspective. Before jumping through hoops to make your backups conditional, I'd do some analysis and figure out what the real cost of the thing you're trying to avoid actually is. My guess is that you're deep into premature optimization here: either the cost of a duplicate backup is inconsequential, or the frequency of duplicate backups is effectively zero.

If you decide there truly is value in conditionally backing up the db, you could always run a checksum on the backup and skip storing it when it matches the previous backup's checksum. Sure, that still means dumping a database that doesn't need to be dumped, but if your write transaction rate is so low that backups regularly end up as duplicates, then surely you can afford the cost of a pg_dump without any significant impact on performance?
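For what it's worth, here is a minimal sketch of that checksum-and-skip idea in Python. The database name, file paths, and dump options are placeholders I've picked for illustration, not anything from this thread:

import hashlib
import subprocess
from pathlib import Path

DB_NAME = "mydb"                               # placeholder database name
DUMP_FILE = Path("backup.sql")                 # placeholder output path
CHECKSUM_FILE = Path("backup.sql.sha256")      # where the last checksum lives

# Run a plain-format pg_dump and capture its output. Plain text output is
# byte-comparable when the data hasn't changed; archive formats may embed
# metadata that differs between runs.
dump = subprocess.run(
    ["pg_dump", "--format=plain", DB_NAME],
    check=True,
    capture_output=True,
).stdout

digest = hashlib.sha256(dump).hexdigest()
previous = CHECKSUM_FILE.read_text().strip() if CHECKSUM_FILE.exists() else None

if digest == previous:
    # Identical to the last stored backup: we paid for the dump, but skip the storage.
    print("dump unchanged, not storing")
else:
    DUMP_FILE.write_bytes(dump)
    CHECKSUM_FILE.write_text(digest)
    print("stored new backup")

Note that this still runs the full pg_dump every time, which is exactly the trade-off described above: you only save the storage, not the dump itself.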

On Mon, Dec 11, 2017 at 10:49 AM, Andreas Kretschmer <andreas@a-kretschmer.de> wrote:


On 11.12.2017 at 18:26, Andreas Kretschmer wrote:
it's just a rough idea...

... and not perfect, because you can't capture DDL in this way.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com


