Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From: Claudio Freire
Subject: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Date:
Msg-id: CAGTBQpbhTBKE7bCfessQuUwZm6SBdNYV6hq0GrEu3g+oWXAuMw@mail.gmail.com
In response to: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (Claudio Freire <klaussfreire@gmail.com>)
Responses: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (David Steele <david@pgmasters.net>)
List: pgsql-hackers
On Fri, Apr 7, 2017 at 10:06 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
>>> >> +             if (seg->num_dead_tuples >= seg->max_dead_tuples)
>>> >> +             {
>>> >> +                     /*
>>> >> +                      * The segment is overflowing, so we must allocate a new segment.
>>> >> +                      * We could have a preallocated segment descriptor already, in
>>> >> +                      * which case we just reinitialize it, or we may need to repalloc
>>> >> +                      * the vacrelstats->dead_tuples array. In that case, seg will no
>>> >> +                      * longer be valid, so we must be careful about that. In any case,
>>> >> +                      * we must update the last_dead_tuple copy in the overflowing
>>> >> +                      * segment descriptor.
>>> >> +                      */
>>> >> +                     Assert(seg->num_dead_tuples == seg->max_dead_tuples);
>>> >> +                     seg->last_dead_tuple = seg->dt_tids[seg->num_dead_tuples - 1];
>>> >> +                     if (vacrelstats->dead_tuples.last_seg + 1 >= vacrelstats->dead_tuples.num_segs)
>>> >> +                     {
>>> >> +                             int                     new_num_segs = vacrelstats->dead_tuples.num_segs * 2;
>>> >> +
>>> >> +                             vacrelstats->dead_tuples.dt_segments = (DeadTuplesSegment *) repalloc(
>>> >> +                                                        (void *) vacrelstats->dead_tuples.dt_segments,
>>> >> +                                                                new_num_segs * sizeof(DeadTuplesSegment));
>>> >
>>> > Might be worth breaking this into some sub-statements, it's quite hard
>>> > to read.
>>>
>>> Breaking what precisely? The comment?
>>
>> No, the three-line statement computing the new value of
>> dead_tuples.dt_segments.  I'd at least assign dead_tuples to a local
>> variable, to cut the length of the statement down.
>
> Ah, alright. Will try to do that.
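
For reference, the reworked allocation could look roughly like this
(untested sketch; "dt" and the DeadTuplesMultiArray type name are
placeholders, the attached patch may spell it differently):

    DeadTuplesMultiArray *dt = &vacrelstats->dead_tuples;

    if (dt->last_seg + 1 >= dt->num_segs)
    {
        int     new_num_segs = dt->num_segs * 2;

        /* grow the segment descriptor array */
        dt->dt_segments = (DeadTuplesSegment *)
            repalloc(dt->dt_segments, new_num_segs * sizeof(DeadTuplesSegment));
    }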

Attached is an updated patch set with the requested changes.

Segment allocation still follows the exponential strategy, and segment
lookup is still linear.
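
That is, lookup amounts to a scan over the segment array, something like
the sketch below (simplified, not the exact patch code; it reuses the
field names from the quoted excerpt plus a placeholder
DeadTuplesMultiArray type name, and assumes last_dead_tuple is kept up
to date for every segment):

    static DeadTuplesSegment *
    find_segment_linear(DeadTuplesMultiArray *dt, ItemPointer itemptr)
    {
        int     i;

        for (i = 0; i <= dt->last_seg; i++)
        {
            DeadTuplesSegment *seg = &dt->dt_segments[i];

            /* last_dead_tuple is the highest TID stored in the segment */
            if (ItemPointerCompare(itemptr, &seg->last_dead_tuple) <= 0)
                return seg;
        }
        return NULL;            /* past the last segment */
    }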

I rebased the early free patch (patch 3) to apply on top of the v9
patch 2 (it needed some changes). I recognize the early free patch
didn't get nearly as much scrutiny, so I'm fine with committing only patch 2
if that one's ready to go but patch 3 isn't.

If it's decided to go for fixed 128M segments and a binary search of
segments, I don't think I can get that ready and tested before the
commitfest ends.
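
For comparison, a binary search over the per-segment upper bounds would
be along these lines (untested sketch, same placeholder names and caveats
as above):

    static DeadTuplesSegment *
    find_segment_binary(DeadTuplesMultiArray *dt, ItemPointer itemptr)
    {
        int     lo = 0;
        int     hi = dt->last_seg;

        /* find the first segment whose last_dead_tuple >= itemptr */
        while (lo < hi)
        {
            int     mid = lo + (hi - lo) / 2;

            if (ItemPointerCompare(&dt->dt_segments[mid].last_dead_tuple,
                                   itemptr) < 0)
                lo = mid + 1;   /* itemptr is past this segment */
            else
                hi = mid;       /* itemptr may be in this segment */
        }

        /* may still miss if itemptr is past all segments; caller verifies */
        return &dt->dt_segments[lo];
    }

The search itself is simple enough; switching to fixed-size segments and
testing the result is the part I can't fit in before the deadline.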


