Re: a back up question - Mailing list pgsql-general

From Stephen Frost
Subject Re: a back up question
Date
Msg-id 20171206145715.GA4628@tamriel.snowman.net
In response to Re: a back up question  (John R Pierce <pierce@hogranch.com>)
List pgsql-general
John, all,

* John R Pierce (pierce@hogranch.com) wrote:
> On 12/5/2017 2:09 PM, Martin Mueller wrote:
> >Time is not really a problem for me, if we talk about hours rather
> >than days.  On a roughly comparable machine I’ve made backups of
> >databases less than 10 GB, and it was a matter of minutes.  But I
> >know that there are scale problems. Sometimes programs just hang
> >if the data are beyond some size.  Is that likely in Postgres if
> >you go from ~ 10 GB to ~100 GB?  There isn’t any interdependence
> >among my tables beyond queries I construct on the fly, because I
> >use the database in a single-user environment.
>
> Another factor is restore time.  Restores have to create indexes.
> Creating indexes on multi-million-row tables can take a while.
> (Hint: be sure to set maintenance_work_mem to 1GB before doing this!)

I'm sure you're aware of this, John, but for others following along, just
to be clear: indexes have to be recreated when restoring from *logical*
(e.g., pg_dump-based) backups.  Indexes don't have to be recreated for
*physical* (e.g., file-based) backups.
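
For anyone who wants to see the distinction concretely, a minimal sketch of
the two approaches (the database name, paths, and job count below are just
placeholder choices, not recommendations):

    # logical backup: an archive that is replayed on restore, so indexes
    # (and everything else) are rebuilt from scratch
    pg_dump -Fc -f /backups/mydb.dump mydb
    pg_restore -d mydb -j 4 /backups/mydb.dump

    # physical backup: copies the cluster's files, so indexes come back
    # exactly as they were on disk
    pg_basebackup -D /backups/base -Ft -z -P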

Neither pg_dump nor the various physical-backup utilities should hang or
have issues with larger data sets.
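
To make John's maintenance_work_mem hint above concrete: pg_restore connects
through libpq, so one way to raise that setting just for the restore session
(without editing postgresql.conf) is something like the following, using the
same placeholder names as above; the 1GB value is just his suggested starting
point:

    PGOPTIONS='-c maintenance_work_mem=1GB' pg_restore -d mydb /backups/mydb.dump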

Thanks!

Stephen
