Re: Best practise for upgrade of 24GB+ database - Mailing list pgsql-admin

From Kevin Grittner
Subject Re: Best practise for upgrade of 24GB+ database
Date
Msg-id 4F198B510200002500044A64@gw.wicourts.gov
In response to Re: Best practise for upgrade of 24GB+ database  (francis picabia <fpicabia@gmail.com>)
List pgsql-admin
francis picabia <fpicabia@gmail.com> wrote:

> That's great information.  9.0 is introducing streaming
> replication, so that is another option I'll look into.

We upgrade multi-TB databases in just a couple of minutes using
pg_upgrade with the hard-link option.  That doesn't count
post-upgrade vacuum/analyze time, but depending on your usage you
might get away with analyzing a few tables before letting users in,
and doing the database-wide vacuum analyze while the database is in
use.
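
For reference, a hard-link upgrade along those lines might look like
the sketch below.  The data and binary directories are assumptions
for a typical 8.4-to-9.0 install; adjust them for your packaging.
Both clusters must be stopped before running pg_upgrade.

```shell
# Sketch only -- paths are assumptions, adjust for your installation.
# --link hard-links data files into the new cluster instead of copying,
# which is why the upgrade itself takes minutes rather than hours.
pg_upgrade \
  --old-datadir /var/lib/pgsql/8.4/data \
  --new-datadir /var/lib/pgsql/9.0/data \
  --old-bindir  /usr/pgsql-8.4/bin \
  --new-bindir  /usr/pgsql-9.0/bin \
  --link

# Statistics are not carried over, so after starting the new cluster
# you can ANALYZE the hot tables first, let users in, and then run a
# database-wide pass while the system is live:
vacuumdb --all --analyze-only
```

Note that with --link you can't fall back to starting the old
cluster once the new one has been started, so take a base backup
first if you need a rollback path.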

One of the other options might be better for you, but this one has
worked well for us.

-Kevin
