- Mailing list pgsql-admin

From Yuri Niyazov
Msg-id CACuBw0gqJwyfm9LKU7wRNN+2ant3SthcU2gyxYxsiGTGaAdLmQ@mail.gmail.com
Responses Re: your mail  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List pgsql-admin
We are running PostgreSQL 9.3.8. Something fairly surprising happened here the other day: we started approaching the transaction ID wraparound limit, with our logs filling up with “WARNING:  database "mydb" must be vacuumed within 177009986 transactions” messages, hundreds of them per second, and we came dangerously close to a database shutdown. *However*, only three things were running that wrote to the database: a db-wide VACUUM that we started manually, a per-table autovacuum that Postgres itself started, and a cron job that writes one row per minute into a separate table we use to track replication delay.
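(For anyone following along: a standard way to see how much wraparound headroom a cluster has is to check the age of each database's `datfrozenxid`. This query is not from our setup, just the usual catalog check; the 2^31 figure is the theoretical wraparound horizon, and PostgreSQL stops accepting commands somewhat before that.)

```sql
-- age(datfrozenxid) counts transaction IDs consumed since the database
-- was last fully frozen; the closer it gets to ~2 billion, the closer
-- the cluster is to the anti-wraparound shutdown.
SELECT datname,
       age(datfrozenxid)              AS xid_age,
       2147483648 - age(datfrozenxid) AS approx_xids_remaining
FROM pg_database
ORDER BY xid_age DESC;
```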

Whenever we killed the autovacuum process, the rapid consumption of transaction IDs stopped. Then, a few minutes later, Postgres would restart the autovacuum process, and very soon after that the transaction IDs would start climbing quickly again.

Turning autovacuum off in postgresql.conf did not prevent auto-vacuuming from restarting, in line with the documentation, which says that autovacuum will launch anti-wraparound vacuums even when disabled whenever the database is in danger of running out of transaction IDs.

We ended up writing a script that checked the running processes and continuously killed autovacuum workers as they started. That let the manual vacuum finish without running out of transaction IDs and without a shutdown, but the whole experience was somewhat concerning.
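(For completeness, this is not our actual script, but the same effect can be had from inside the database, assuming a version where `pg_stat_activity` exposes `pid` and `query`, as 9.3 does; autovacuum workers show up there with queries beginning `autovacuum:`.)

```sql
-- Terminate currently running autovacuum workers.  pg_terminate_backend
-- returns true for each backend it successfully signals.
SELECT pid, query, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';
```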

Did we hit a known edge case where two vacuum processes running in parallel consume transaction IDs very quickly?

