Re: Dumping a database that is not accepting commands? - Mailing list pgsql-admin

From Kevin Grittner
Subject Re: Dumping a database that is not accepting commands?
Msg-id 1379421830.30619.YahooMailNeo@web162902.mail.bf1.yahoo.com
In response to Dumping a database that is not accepting commands?  (Natalie Wenz <nataliewenz@ebureau.com>)
Responses Re: Dumping a database that is not accepting commands?
List pgsql-admin
Natalie Wenz <nataliewenz@ebureau.com> wrote:

> I have a large database from our test environment that got into trouble with
> some high volume and some long-running queries about…six weeks ago? We have a
> buffer mechanism that has been storing the new data since the database stopped
> accepting connections, so we haven't really lost any data, which is good.
> But the single-user backend vacuum freeze is still grinding away, using 100% cpu
> most of the time, except when it's doing a lot of writes. We did cancel the
> vacuum once so we could stop the database and take a snapshot of the filesystem
> (we are using ZFS on FreeBSD) and copy the snapshot to another machine. This
> allowed us to get a fresh database started (with 9.3!) where we could unspool
> the last six weeks of data into a fresh database, and be able to access at least
> that much of our data.

I'm not sure how you could have done that without dealing with the
wraparound before the upgrade.

> Now:
> I have a copy of the database (with data from all time up until the database
> shut itself down six weeks ago) that I just need the data from. I am becoming
> impatient with the vacuum, as it appears to have not even started working on the
> files for one of the largest relations in the database (that table was about
> 14TB last I saw). I'm trying to find alternatives to waiting another who
> knows how many weeks for the vacuum to finish just to have the database in a
> state where I can dump the data out, since this is no longer the
> "live" version. This copy running on hardware with plenty of space to
> work with. The database has about a million transactions before it wraps.

The copy running on 9.3, or the original?

> Is it possible to somehow just dump the data using pg_dump or pg_dumpall? I
> haven't tried to see if those utilities will work when the database is
> protecting itself from data loss. If it were possible, would it be wise (do
> those utilities use tons of transactions to dump, or would they stay safely within
> the million or so that I have)? I suppose I could use COPY?

pg_dump uses COPY (by default, anyway), and does all its work in a
single transaction.  But it needs to start that transaction.
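
Roughly, either approach would look like this, assuming the backend
will still let a session start that transaction (the database, table,
and path names here are just placeholders):

    pg_dump --format=custom --file=/mnt/export/testdb.dump testdb

or, pulling tables out one at a time so each COPY runs in its own
short transaction:

    COPY big_table TO '/mnt/export/big_table.csv' WITH (FORMAT csv, HEADER);  -- writes a server-side file, needs superuser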

> Would there be any risks if I were to do that? Maybe none of this
> is risky at this point because we can always clone the original
> snapshot, and try again.

I'm not sure that pg_dump followed by a restore would be expected to
be faster than finishing the VACUUM, unless the VACUUM is configured
to pace itself way too slowly.
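
From the single-user backend you can check whether cost-based
throttling is even in play (a delay of zero means the VACUUM is not
being paced at all):

    SHOW vacuum_cost_delay;   -- 0 disables cost-based throttling for a manual VACUUM
    SHOW vacuum_cost_limit;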

> Any ideas or suggestions?

After getting past this crisis, I would take a close look at your
vacuuming regimen -- it sounds like it is not aggressive enough to
keep you out of trouble.
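
As a rough illustration only -- the right numbers depend entirely on
your write rate -- a more aggressive anti-wraparound setup in
postgresql.conf might look something like:

    autovacuum = on
    autovacuum_freeze_max_age = 200000000   # the default; lower it to start freezing sooner
    autovacuum_vacuum_cost_delay = 2ms      # far less throttling than the 20ms default
    autovacuum_max_workers = 3
    log_autovacuum_min_duration = 0         # log everything autovacuum does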

I'm sorry that I don't have a better suggestion for resolving the
crisis than running VACUUM at maximum speed.
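
For what it's worth, "maximum speed" in the single-user backend
mostly amounts to making sure the cost delay is off and giving the
vacuum plenty of memory; something along these lines (the data
directory and database name are placeholders):

    postgres --single -D /usr/local/pgsql/data testdb
    backend> SET vacuum_cost_delay = 0;
    backend> SET maintenance_work_mem = '1GB';
    backend> VACUUM FREEZE;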

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

