Re: XID wraparound with huge pg_largeobject - Mailing list pgsql-general

From: David Kensiski
Subject: Re: XID wraparound with huge pg_largeobject
Msg-id: CAGTbF5WbLa0M_wkj48qPhb7unV04S-5ca_e6kJY+r8nq+2jZ6A@mail.gmail.com
In response to: Re: XID wraparound with huge pg_largeobject (Roxanne Reid-Bennett <rox@tara-lu.com>)
List: pgsql-general
On Tue, Dec 1, 2015 at 1:48 PM, Roxanne Reid-Bennett <rox@tara-lu.com> wrote:

> On 11/30/2015 9:58 AM, David Kensiski wrote:
>> I am working with a client who has a 9.1 database rapidly approaching XID wraparound.  They also have an exceedingly large pg_largeobject table (4217 GB) that has never been vacuumed.  An attempt to vacuum this on a replica has run for days and never succeeded.  (Or more accurately, never been allowed to succeed because we needed to get the replica back on-line.)
>> ...
>> Any other ideas about how we can do this?
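(For anyone following along: the headroom involved can be watched with the standard age() queries; the database name below is just a placeholder.)

    $ psql -d mydb -c "SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;"
    $ psql -d mydb -c "SELECT age(relfrozenxid) FROM pg_class WHERE relname = 'pg_largeobject';"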

> David,
>
> My gut reaction was maybe dump/restore... but it's pg_largeobject.  I have read the list for years, and my memory tells me that it is a problem child in that arena (e.g., as you found out with Slony...), and at 4000 GB it's not something that can be sandboxed very well.

I attempted to dump the qa system and pipe it to restore on a test server, and it crashed qa.  Doesn't a dump run as a single transaction, so a long-running dump on a busy database holds its snapshot open and accrues resources that don't get released until it completes?  Or am I missing something?
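For reference, the pipe was essentially the following (host and database names are made up):

    # plain-format dump of the whole database piped straight into the target;
    # large objects are included by default in a whole-database dump
    $ pg_dump -h qa-host -U postgres qadb | psql -h test-host -U postgres testdb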

Either way, it means several days of downtime given the size of the database.  Not really practical.


> Because it's v9.1... and you hadn't gotten any responses (until Jeff)... and I had the time... I did some digging in the archives...

> The most promising alternate idea... last February Adam Hooper was migrating to SSD, and Bill Moran suggested trying to CLUSTER pg_largeobject instead of VACUUM FULL.  (Full topic: 2/3/2015, entitled "VACUUM FULL pg_largeobject without (much) downtime?")

> CLUSTER has been referenced on the list other times to collapse unused space... (sometime in 2010-2011):
>
>> As a last resort this week, I'm going to get 500+GB of extra file store
>> added, add a tablespace and move pg_largeobjects to this area. Then use
>> CLUSTER to rebuild pg_largeobjects back in the default tablespace. This
>> should fix things I hope, and if needed I'll use Cluster regularly.

This is intriguing and worth testing to see if it would work.  Getting the space to do it on production might be tough since all drive slots are full and we're already using 3 TB drives.  But it's worth trying in qa, and if it works, I can get creative for prod.
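If it does pan out, I'd expect the command itself to be something along these lines, assuming the server lets a superuser cluster that catalog (untested on our end):

    # rewrite pg_largeobject in index order, reclaiming the dead space;
    # note it takes an exclusive lock for the duration, like VACUUM FULL
    $ psql -U postgres -d mydb -c "CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;"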

> It's "an" other idea...  I've no idea whether it will work any better than biting the bullet and just running VACUUM FULL.

> other bits and pieces...

> In 2010, Tom suggested REINDEX then VACUUM on pg_largeobject for an 8.? system.  That piqued my interest because we found with 9.1 that weekly reindexing helped with performance.  However, the person who used it didn't find any performance improvement with his VACUUM.  I think index rebuilding became part of VACUUM FULL in the 9.0 release (but I would have to search the release notes to confirm).
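For concreteness, I read that suggestion as something like this (a plain VACUUM only marks space reusable, it doesn't shrink the files):

    # rebuild the indexes first, then do a plain (non-FULL) vacuum
    $ psql -U postgres -d mydb -c "REINDEX TABLE pg_largeobject;"
    $ psql -U postgres -d mydb -c "VACUUM VERBOSE pg_largeobject;"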

> I remember reading somewhere during this (but can't find the reference <sigh>) that an interrupted VACUUM FREEZE does capture "some" data, so multiple runs actually incrementally improve the speed of the next, but again I can't find the reference, so I've no idea who, when, which version, or whether my memory is faulty or misapplied.

I've seen this as well, and Jeff alluded to it in his post.  Subsequent vacuums will run faster after a partial vacuum since some of the data has already been processed.
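So the plan would be to run the freeze in whatever maintenance windows we can get and cancel it when time runs out, something like:

    # start a freeze pass; tuples already frozen before a cancel
    # won't need to be rewritten on the next attempt
    $ psql -U postgres -d mydb -c "VACUUM FREEZE VERBOSE pg_largeobject;"
    # to stop it from another session (column names are the 9.1 ones):
    $ psql -U postgres -d mydb -c "SELECT pg_cancel_backend(procpid) FROM pg_stat_activity WHERE current_query LIKE 'VACUUM%';"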

> There are miscellaneous improvements to the actual running of VACUUM FULL (and, more often, autovacuum) suggested through tweaking the vacuum parameters, with "vacuum_cost_delay" being a high-priority target.  Jeff's questions all point at identifying any limitations that are costing you time due to configuration.

We'll definitely be tweaking the parameters to get the best performance if we have to go the vacuum route.
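For a dedicated window I'd expect it to look something like this, i.e. turn off the throttling and give it memory (values are just starting points, not tested here):

    $ psql -U postgres -d mydb <<'SQL'
    -- 0 disables cost-based throttling (already the default for manual VACUUM)
    SET vacuum_cost_delay = 0;
    -- more room for the dead-tuple list; vacuum can't use much beyond 1GB
    SET maintenance_work_mem = '1GB';
    VACUUM FREEZE VERBOSE pg_largeobject;
    SQL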

> Totally not part of this specific problem... Have you run, or do you know of, vacuumlo for deleting orphaned LOs?  Might be worth running it before you collect your free space.  [Just in case you didn't or hadn't... twice the bang, half the pain, but only if you do it before collecting your free space.]

I don't think anyone has tried vacuumlo, but that's another one I can test in qa.  If it improves VACUUM FULL performance, it might be worth doing it in two phases.
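A dry run first seems prudent, something like (database name is a placeholder):

    # -n reports what would be removed without deleting anything; drop it
    # for the real cleanup, then vacuum so the freed rows become reusable
    $ vacuumlo -n -v -U postgres mydb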

Thanks to both of you for your input!  (I forgot to thank Jeff in my reply to him.)

--Dave

