a back up question - Mailing list pgsql-general

From Martin Mueller
Subject a back up question
Date
Msg-id 001039C7-15DF-4A44-B0B9-3E100C9D68D3@northwestern.edu
List pgsql-general

Are there rules of thumb for deciding when you can dump a whole database and when you'd be better off dumping groups of tables? I have a database that has around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don't have a clear sense of what counts as "too big" these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).


Is 10GB a good practical limit to keep in mind?
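For reference, the two approaches might be sketched like this (the database name `mydb` and the table names are hypothetical; `pg_dump`'s custom and directory formats are the usual choices for large databases):

```shell
# Dump the whole database in custom format
# (compressed, and pg_restore can later restore selected tables from it):
pg_dump -Fc -f mydb.dump mydb

# Parallel dump to a directory, which can help with very large databases
# (-j runs several worker processes; this requires the directory format -Fd):
pg_dump -Fd -j 4 -f mydb_dir mydb

# Divide and conquer: dump only a group of tables
# (table names here are placeholders):
pg_dump -Fc -t big_table1 -t big_table2 -f big_tables.dump mydb
```

Note that a dump of selected tables does not carry the cross-table constraints and dependencies of a full dump, which is one argument for dumping the whole database when the size is manageable.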

