Re: vacuumlo issue - Mailing list pgsql-hackers

From Tom Lane
Subject Re: vacuumlo issue
Date
Msg-id 25231.1332255187@sss.pgh.pa.us
In response to vacuumlo issue  (MUHAMMAD ASIF <anaeem.it@hotmail.com>)
Responses Re: vacuumlo issue  (Josh Kupershmidt <schmiddy@gmail.com>)
List pgsql-hackers
MUHAMMAD ASIF <anaeem.it@hotmail.com> writes:
> We have noticed the following issue with vacuumlo on databases that have
> millions of records in pg_largeobject, i.e.:
>    WARNING:  out of shared memory
>    Failed to remove lo 155987: ERROR:  out of shared memory
>    HINT:  You might need to increase max_locks_per_transaction.

> Why do we need to increase max_locks_per_transaction/shared memory for a
> cleanup operation?

This seems to be a consequence of the 9.0-era decision to fold large
objects into the standard dependency-deletion algorithm and hence
take out locks on them individually.
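
For scale, the shared lock table holds roughly max_locks_per_transaction *
(max_connections + max_prepared_transactions) entries, 6400 with the default
settings, so a single transaction that unlinks millions of large objects will
overflow it no matter how far the setting is raised.  A minimal libpq sketch
of the failing pattern (the connection and OID list are assumed placeholders,
not vacuumlo's actual code):

    #include <libpq-fe.h>

    /*
     * Sketch of the pattern that fails: each lo_unlink() takes a lock on
     * that large object which is held until COMMIT, so the shared lock
     * table fills up once enough objects have been unlinked.
     */
    static void
    unlink_all_in_one_xact(PGconn *conn, const Oid *oids, size_t n)
    {
        size_t      i;

        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < n; i++)
            (void) lo_unlink(conn, oids[i]); /* one lock table entry each */
        PQclear(PQexec(conn, "COMMIT"));     /* locks released only here */
    }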

I'm not entirely convinced that that was a good idea.  However, so far
as vacuumlo is concerned, the only reason this is a problem is that
vacuumlo goes out of its way to do all the large-object deletions in a
single transaction.  What's the point of that?  It'd be useful to batch
them, probably, rather than commit each deletion individually.  But the
objects being deleted are by assumption unreferenced, so I see no
correctness argument why they should need to go away all at once.
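
A rough sketch of what batching could look like, again using libpq directly;
this is not vacuumlo's actual code, and BATCH_SIZE, unlink_in_batches, and
the OID list are illustrative placeholders:

    #include <stdio.h>
    #include <libpq-fe.h>

    #define BATCH_SIZE 1000  /* illustrative; bounds locks held per xact */

    /*
     * Remove the given (presumed-orphaned) large objects in batches,
     * committing after each batch so that no transaction ever holds more
     * than BATCH_SIZE large-object locks.
     */
    static void
    unlink_in_batches(PGconn *conn, const Oid *oids, size_t n)
    {
        size_t      i;

        for (i = 0; i < n; i++)
        {
            if (i % BATCH_SIZE == 0)
                PQclear(PQexec(conn, "BEGIN"));

            if (lo_unlink(conn, oids[i]) < 0)
                fprintf(stderr, "Failed to remove lo %u: %s",
                        oids[i], PQerrorMessage(conn));

            if ((i + 1) % BATCH_SIZE == 0 || i + 1 == n)
                PQclear(PQexec(conn, "COMMIT"));
        }
    }

Since the objects are unreferenced by assumption, a failure partway through
simply leaves the remainder for the next run.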
        regards, tom lane

