Re: a back up question - Mailing list pgsql-general

From: David G. Johnston
Subject: Re: a back up question
Msg-id: CAKFQuwaQkF6k2mi0DDSqn6aXuU-+gZQAMQwsoEzYehmApceLgg@mail.gmail.com
In response to: a back up question (Martin Mueller <martinmueller@northwestern.edu>)
List: pgsql-general
On Tue, Dec 5, 2017 at 2:52 PM, Martin Mueller <martinmueller@northwestern.edu> wrote:

Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database that has around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).

Is 10GB a good practical limit to keep in mind?



I'd say the rule of thumb is: if you have to "divide and conquer", you should use a non-pg_dump-based backup solution. "Too big" is usually measured in units of time, not memory.
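
For example (a sketch only; the database name "mydb", the /backups paths, and the job count are illustrative, not from this thread), a file-system-level base backup avoids per-table dumping entirely, and if you do stay with pg_dump, the directory format with parallel jobs is usually worth trying before splitting by table:

    # Physical backup of the whole cluster (tar format, compressed, with progress)
    pg_basebackup -D /backups/base -Ft -z -P

    # Logical backup with pg_dump: directory format, 4 parallel worker jobs
    pg_dump -Fd -j 4 -f /backups/mydb.dump mydb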

Any ability to partition your backups into discrete chunks is going to be very specific to your personal setup. Restoring such a monster without constraint violations is something I'd be VERY worried about.
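
One reason a single archive is easier to live with (illustrative names again): pg_restore works out the dependency ordering within one archive for you, whereas separately dumped groups of tables leave that ordering, and any foreign keys crossing the groups, up to you:

    # Restore the whole archive with 4 parallel jobs; objects are restored in dependency order
    pg_restore -j 4 -d mydb /backups/mydb.dump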

David J.
