Re: vacuumlo issue - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: vacuumlo issue
Date:
Msg-id: 26230.1332258653@sss.pgh.pa.us
In response to: Re: vacuumlo issue (Josh Kupershmidt <schmiddy@gmail.com>)
Responses: Re: vacuumlo issue (Robert Haas <robertmhaas@gmail.com>)
           Re: vacuumlo issue (MUHAMMAD ASIF <anaeem.it@hotmail.com>)
List: pgsql-hackers
Josh Kupershmidt <schmiddy@gmail.com> writes:
> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm not entirely convinced that that was a good idea. However, so far
>> as vacuumlo is concerned, the only reason this is a problem is that
>> vacuumlo goes out of its way to do all the large-object deletions in a
>> single transaction. What's the point of that? It'd be useful to batch
>> them, probably, rather than commit each deletion individually.  But the
>> objects being deleted are by assumption unreferenced, so I see no
>> correctness argument why they should need to go away all at once.

> I think you are asking for this option:
>   -l LIMIT     stop after removing LIMIT large objects
> which was added in b69f2e36402aaa.

Uh, no, actually that flag seems utterly brain-dead.  Who'd want to
abandon the run after removing some arbitrary subset of the
known-unreferenced large objects?  You'd just have to do all the search
work over again.  What I'm thinking about is doing a COMMIT after every
N large objects.

I see that patch has not made it to any released versions yet.
Is it too late to rethink the design?  I propose (a) redefining it
as committing after every N objects, and (b) having a limit of 1000
or so objects by default.
        regards, tom lane
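
[Editor's note: for readers unfamiliar with what "committing after every N objects" would look like, here is a minimal sketch in C using libpq. It is not the actual vacuumlo source; the function name, the lo_oids array, and the transaction_limit parameter are illustrative assumptions, though lo_unlink(), PQexec(), PQclear(), and PQerrorMessage() are the standard libpq calls a client like vacuumlo relies on.]

    /*
     * Sketch only: remove known-unreferenced large objects in batches,
     * issuing a COMMIT after every transaction_limit deletions instead
     * of doing the whole run in a single transaction.  "conn" is an
     * open libpq connection; "lo_oids", "n_oids" and "transaction_limit"
     * are illustrative names, not vacuumlo's own.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    remove_los_in_batches(PGconn *conn, const Oid *lo_oids, int n_oids,
                          int transaction_limit)
    {
        int     deleted = 0;
        int     in_batch = 0;

        PQclear(PQexec(conn, "BEGIN"));

        for (int i = 0; i < n_oids; i++)
        {
            if (lo_unlink(conn, lo_oids[i]) < 0)
            {
                /*
                 * Report and keep going.  Note that a failure aborts the
                 * current transaction; real code would handle that (e.g.
                 * roll back and restart the batch) rather than ignore it.
                 */
                fprintf(stderr, "failed to remove lo %u: %s",
                        lo_oids[i], PQerrorMessage(conn));
            }
            else
                deleted++;

            /* commit this batch and open a new transaction */
            if (++in_batch >= transaction_limit)
            {
                PQclear(PQexec(conn, "COMMIT"));
                PQclear(PQexec(conn, "BEGIN"));
                in_batch = 0;
            }
        }

        /* commit whatever remains in the final, partial batch */
        PQclear(PQexec(conn, "COMMIT"));

        return deleted;
    }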

