Re: How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime? - Mailing list pgsql-performance

From Jim Nasby
Subject Re: How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?
Msg-id 555F9CF8.8000708@BlueTreble.com
In response to How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?  ("Muthusamy, Sivaraman" <sivaraman.muthusamy@in.verizon.com>)
List pgsql-performance
On 5/11/15 4:55 AM, Muthusamy, Sivaraman wrote:
> Hi Group,
>
> Facing a problem where pg_catalog.pg_largeobject has been growing fast
> recently, in the last two weeks. The actual data itself, in user tables,
> is about 60GB, but the pg_catalog.pg_largeobject table is 200GB plus.
> Please let me know how to clean/truncate this table without losing any
> user data in other tables.

Autovacuum should be taking care of it for you, though you could also
try a manual vacuum (VACUUM pg_largeobject;).
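
For example (a minimal sketch; note that plain VACUUM only marks dead
rows as reusable, while VACUUM FULL is what actually shrinks the file,
at the price of an exclusive lock on the table):

  -- How big is pg_largeobject on disk right now?
  SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

  -- Reclaim dead rows without blocking other sessions
  VACUUM VERBOSE pg_largeobject;

  -- Return space to the OS; locks out all access while it runs
  -- VACUUM FULL pg_largeobject;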

> With regards to this pg_largeobject, I have the following questions:
>
> -What is this pg_largeobject?

It stores large objects:
http://www.postgresql.org/docs/9.4/static/lo-interfaces.html
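
For example (a quick sketch using the server-side functions; reading
pg_largeobject directly requires superuser):

  -- Create a ~5KB large object; the returned OID is its loid
  SELECT lo_from_bytea(0, convert_to(repeat('x', 5000), 'UTF8')) AS loid;

  -- Its contents land in pg_largeobject as ~2KB chunks under that loid
  SELECT loid, pageno, length(data)
  FROM pg_largeobject
  ORDER BY loid, pageno;

  -- lo_unlink(<loid>) drops it, leaving dead rows for vacuum to reclaim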

> -What does it contain? I tried the PostgreSQL documentation and lists,
> but could not get much from them.
>
> -Why does it grow?
>
> -Was there any configuration change that may have triggered this to
> grow? For the last year or so there was no problem, but it started
> growing all of a sudden in the last two weeks. The only changes we made
> in that time were scheduling a nightly base-backup and enabling the
> autovacuum feature.

Changes to autovacuum settings could certainly cause this.
Long-running transactions would prevent cleanup, as would any prepared
transactions (which should really be disabled unless you explicitly need
them).
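
Something along these lines will show whether either is in play (a
sketch; adjust the age cutoff to taste):

  -- Transactions that have been open a long time keep dead rows pinned
  SELECT pid, state, xact_start, query
  FROM pg_stat_activity
  WHERE xact_start < now() - interval '1 hour'
  ORDER BY xact_start;

  -- Forgotten prepared transactions pin old row versions the same way
  SELECT gid, prepared, owner, database FROM pg_prepared_xacts;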

> -pg_largeobject contains so many duplicate rows (loid). There are only
> about 0.6 million distinct LOIDs, but the total number of rows
> including duplicates is about 59 million. What are all these?

Each row can hold only ~2KB of data (LOBLKSIZE), so any large object
larger than that is split across multiple rows. Rows sharing the same
loid are chunks of a single large object, numbered by pageno; they are
not duplicates.
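
You can see that breakdown directly (again superuser-only; one row per
chunk, nothing duplicated):

  -- Largest objects by chunk count; rows sharing a loid are one object
  SELECT loid, count(*) AS chunks,
         pg_size_pretty(sum(length(data))::bigint) AS total_size
  FROM pg_largeobject
  GROUP BY loid
  ORDER BY chunks DESC
  LIMIT 10;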
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

