Thread: How to VACUUM FULL / clean / truncate pg_largeobject without (much) downtime?
Hi Group,
Kindly help with this issue; we are in a critical situation.
I have sent this e-mail to other groups as well, but let me go into a little more detail here. I have been reading the PG documentation around pg_catalog.pg_largeobject, and I am aware that this table holds large objects in pages. However, our application, which is JBoss based and uses Hibernate/JDBC, does not write anything to the pg_largeobject table explicitly.
We are using PostgreSQL 9.3.2. Recently we have been facing an issue with the size of pg_largeobject; the growth is very dramatic, about 20GB per day.
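To give concrete numbers, I am tracking the sizes with queries along these lines (standard system functions, run in psql as a superuser):

    -- Total on-disk size of pg_largeobject, including its index
    SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

    -- Size of the whole database, for comparison
    SELECT pg_size_pretty(pg_database_size(current_database()));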
The actual data itself, in user tables, is about 60GB, but the pg_catalog.pg_largeobject table is 200GB plus. Please let me know how to VACUUM FULL / clean / truncate this table without losing any user data in other tables (and with minimal downtime).
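For example, would something along these lines be the right approach? (vacuumlo is the contrib client utility; mydb stands in for our actual database name.)

    -- From the shell (not psql), first remove large objects that are no
    -- longer referenced from any user table:
    --     vacuumlo -v mydb
    -- Then compact the table; note that VACUUM FULL takes an ACCESS
    -- EXCLUSIVE lock on pg_largeobject for the duration:
    VACUUM FULL pg_catalog.pg_largeobject;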
With regard to pg_largeobject, I have the following questions:
- Even though we are not writing into pg_largeobject explicitly, somehow this table is growing by GBs each day. When does this table get updated?
- Is there a way to autovacuum this table, and has it actually been happening (see the stats query after this list)? Why does it grow at such a rate?
- Was there any configuration change that may have triggered this growth? For the last year or so there was no problem, but it started growing all of a sudden in the last two weeks. The only changes in the last two weeks were that we scheduled a nightly base backup and enabled autovacuum.
- Presently pg_largeobject contains many rows sharing the same loid. There are only about 0.6 million distinct LOIDs, but the total row count is about 59 million (see the counts query after this list). What are all these rows?
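For reference, these are the kinds of queries behind the numbers and the autovacuum question above (run as a superuser, since pg_largeobject is not readable by ordinary users):

    -- Distinct large objects vs. total rows; each large object is
    -- stored as multiple rows, one per data page (pageno):
    SELECT count(DISTINCT loid) AS distinct_loids,
           count(*)             AS total_rows
    FROM pg_catalog.pg_largeobject;

    -- Has (auto)vacuum touched pg_largeobject at all?
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_sys_tables
    WHERE relname = 'pg_largeobject';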
Kindly share any information you can on this; your quick help is much appreciated.
Thanks and Regards,
M.Shiva