All the tables now have a very reasonable age, but the database itself still has an age approaching two billion. So, what do we do now? Were we wrong to truncate and drop this unneeded table without letting the vacuum on it finish?
SELECT c.oid::regclass AS table_name,
       greatest(age(c.relfrozenxid), age(t.relfrozenxid)) AS age
FROM pg_class c
LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE c.relkind IN ('r', 'm') and greatest(age(c.relfrozenxid),age(t.relfrozenxid)) > 100000000;
             table_name             |    age
------------------------------------+-----------
 notification_messages              | 227283989
 information_schema.sql_features    | 177276271
 information_schema.sql_parts       | 177276271
 user_notification_message_activity | 159132783
 bundles_tmp                        | 177276271
 bundles_old_int_id                 | 244381510
 bundles                            | 146576938
(7 rows)
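When a vacuum has been running for a very long time, it helps to see which relation it is working on and how long its transaction has been open. A sketch of such a check (assuming PostgreSQL 9.6 or later, where pg_stat_progress_vacuum is available):

```sql
-- Sketch: list currently running vacuums and how long
-- their transactions have been open
SELECT p.pid,
       p.relid::regclass AS table_name,
       p.phase,
       now() - a.xact_start AS running_for
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid)
ORDER BY running_for DESC;
```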
academia_notifications=# select datname, age(datfrozenxid) FROM pg_database;
        datname         |    age
------------------------+------------
 template1              | 1901010993
 template0              | 1901010993
 academia_notifications | 1951010993
 postgres               | 1186462760
(4 rows)
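One thing worth checking (hedged, not verified against this database): datfrozenxid tracks the minimum relfrozenxid across every relation with storage in the database, including TOAST tables, and it only advances when a vacuum completes and recomputes it. A sketch of a query that inspects all relation kinds, not just ordinary tables and materialized views, to see what might be pinning the database age:

```sql
-- Sketch: list the relations with the oldest relfrozenxid,
-- including TOAST tables (relkind 't')
SELECT oid::regclass AS relation,
       relkind,
       age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relfrozenxid <> '0'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```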
Our alerts that check whether a table is coming close to the 2 billion transaction ID mark started firing on one of our databases. We noticed that the table with the biggest age had a vacuum running on it that hadn't finished in 85 days. The table was no longer in use, so instead of restarting the vacuum, we truncated and dropped the table, expecting that would get rid of the need to