Transaction id wraparound problem - Mailing list pgsql-general

From: Morris Goldstein
Subject: Transaction id wraparound problem
Date:
Msg-id: 72ef1f580611151518m638da5b7mc366b881dcf254a9@mail.gmail.com
Responses: Re: Transaction id wraparound problem  ("Ed L." <pgsql@bluepolka.net>)
List: pgsql-general

I've encountered transaction wraparound problems in a long-running
test using PostgreSQL 7.4.8. There is no critical data at risk, but I
do need to understand the problem and get a fix into our product as
quickly as possible.

My postgres log file has messages like this:

    2006-11-14 04:08:19 [27203] WARNING:  some databases have not been vacuumed in over 2 billion transactions
    DETAIL:  You may have already suffered transaction-wraparound data loss.
    2006-11-14 17:37:37 [7988] WARNING:  some databases have not been vacuumed in over 2 billion transactions
    DETAIL:  You may have already suffered transaction-wraparound data loss.
    2006-11-15 06:37:20 [21481] WARNING:  some databases have not been vacuumed in over 2 billion transactions
    DETAIL:  You may have already suffered transaction-wraparound data loss.

I also see a few of these:

    2006-11-05 04:16:00 [16253] WARNING:  some databases have not been vacuumed in 2055456833 transactions
    2006-11-06 19:07:29 [25211] WARNING:  some databases have not been vacuumed in 2106677625 transactions

Here is output from SELECT datname, age(datfrozenxid) FROM pg_database:

      datname  |     age
    -----------+-------------
     testdb    |  1073752155
     template1 | -1670903080
     template0 | -1670903080
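
For completeness, a slightly expanded form of that check (just a
sketch; the 2-billion cutoff is nothing more than the round number
from the warnings above):

    SELECT datname,
           age(datfrozenxid) AS xid_age,
           age(datfrozenxid) > 2000000000 AS past_warning_threshold
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;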

Here is a summary of my test:

- The database has 16 schemas with identical table declarations in
each.

- Each schema has two large tables. One has about 10M rows currently,
and the other has 20M rows. The smaller table has two indexes and the
other has one. So the entire database has about 480M rows.

- The test has been inserting and updating data nearly continuously
for three months. (I'm testing reliability and scalability of our
application.) A typical transaction creates or updates 1-3 rows (1 in
the smaller table, 2 in the larger).

- Vacuum ("vacuum analyze verbose") runs daily. Recently, vacuums have
been taking more than a day to run, so vacuum is now running
continuously.
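
In case the exact form matters, each run boils down to this single
statement issued over a psql connection to testdb (a sketch; the
scripting around it is omitted):

    -- run while connected to testdb, once a day (now effectively
    -- back-to-back, since a single run takes more than a day)
    VACUUM ANALYZE VERBOSE;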

I am sure the vacuum is indeed running as described -- the messages
above are produced at the end of the vacuum.  Also, top shows my
vacuum script running every day (or two, if the vacuum takes more than
24 hours).
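
Another way to double-check that the vacuum is really running,
assuming stats_command_string is enabled so that current_query is
populated, would be something along these lines:

    -- lists backends whose current command is a vacuum; requires
    -- stats_command_string = true, otherwise current_query is not shown
    SELECT datname, procpid, current_query
    FROM pg_stat_activity
    WHERE current_query ILIKE 'vacuum%';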

If I'm vacuuming every day (or two), and not running anywhere near 1
billion transactions a day, why am I running into transaction ID
wraparound problems? (If the growth of the numbers in the warnings
above tracks XID consumption, the Nov 5 and Nov 6 warnings are only
about 51 million transactions apart over roughly a day and a half,
i.e. on the order of 30 million transactions a day.)

Is this just complaining that template0 and template1 haven't been
vacuumed during the more than 2 billion transactions that testdb has
gone through? (I never touch template0 and template1.) If that's
what's going on, am I right that there is no risk of data loss? And
is there some reason to vacuum these databases (other than to avoid
the scary messages)?
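
If the answer turns out to be that template0 and template1 do need
vacuuming, I assume it is just a matter of connecting to each one and
running a plain vacuum, along these lines (a sketch; template0
normally has datallowconn = false, so I'm not sure whether or how it
is supposed to be included):

    -- run while connected to template1 (e.g. via "psql template1",
    -- or "vacuumdb template1" from the shell)
    VACUUM;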


Morris Goldstein
