Re: Fwd: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From Claudio Freire
Subject Re: Fwd: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Msg-id CAGTBQpamp2Nr8hVPk6PV6Txz+j=X+dk7TbCYzWkat+kQouijCQ@mail.gmail.com
In response to Re: Fwd: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (Alexey Chernyshov <a.chernyshov@postgrespro.ru>)
Responses Re: Fwd: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
List pgsql-hackers
On Wed, Jul 12, 2017 at 11:48 AM, Alexey Chernyshov
<a.chernyshov@postgrespro.ru> wrote:
> Thank you for the patch and benchmark results, I have a couple remarks.
> Firstly, padding in DeadTuplesSegment
>
> typedef struct DeadTuplesSegment
> {
>     ItemPointerData last_dead_tuple;    /* Copy of the last dead tuple
>                                          * (unset until the segment is fully
>                                          * populated). Keep it first to
>                                          * simplify binary searches */
>     unsigned short padding;             /* Align dt_tids to 32-bits,
>                                          * sizeof(ItemPointerData) is aligned
>                                          * to short, so add a padding short,
>                                          * to make the size of
>                                          * DeadTuplesSegment a multiple of
>                                          * 32-bits and align integer
>                                          * components for better performance
>                                          * during lookups into the
>                                          * multiarray */
>     int            num_dead_tuples;     /* # of entries in the segment */
>     int            max_dead_tuples;     /* # of entries allocated in the
>                                          * segment */
>     ItemPointer    dt_tids;             /* Array of dead tuples */
> }    DeadTuplesSegment;
>
> In the comments to ItemPointerData it is written that it is 6 bytes long,
> but it can be padded to 8 bytes by some compilers, so if we add padding the
> current way, there is no guarantee that it will be done as expected. The
> other way is to do it with pg_attribute_aligned. But in my opinion, there
> is no need to do it manually, because the compiler will do this
> optimization itself.

I'll look into it. But in my experience compilers won't align the struct
size like this, only individual attributes, and since this attribute is
composed of 16-bit members it doesn't get aligned by default.
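
For illustration, here is a minimal standalone program (FakeItemPointerData,
SegmentNoPad and SegmentPadded are made-up stand-ins, not the actual
PostgreSQL definitions) that prints the layout a given compiler produces
with and without the explicit padding member; running it on the platforms
we care about shows whether the member actually changes offsets or the
total size:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Mimics ItemPointerData: three uint16 fields, 6 bytes, 2-byte alignment */
typedef struct FakeItemPointerData
{
    uint16_t    ip_blkid_hi;
    uint16_t    ip_blkid_lo;
    uint16_t    ip_posid;
} FakeItemPointerData;

typedef struct SegmentNoPad
{
    FakeItemPointerData last_dead_tuple;
    int         num_dead_tuples;
    int         max_dead_tuples;
    FakeItemPointerData *dt_tids;
} SegmentNoPad;

typedef struct SegmentPadded
{
    FakeItemPointerData last_dead_tuple;
    unsigned short padding;     /* explicit padding, as in the patch */
    int         num_dead_tuples;
    int         max_dead_tuples;
    FakeItemPointerData *dt_tids;
} SegmentPadded;

int
main(void)
{
    printf("no pad: size=%zu, num_dead_tuples at offset %zu\n",
           sizeof(SegmentNoPad), offsetof(SegmentNoPad, num_dead_tuples));
    printf("padded: size=%zu, num_dead_tuples at offset %zu\n",
           sizeof(SegmentPadded), offsetof(SegmentPadded, num_dead_tuples));
    return 0;
}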

> On 11.07.2017 19:51, Claudio Freire wrote:
>>>
>>> -----
>>>
>>> +       /* Search for the segment likely to contain the item pointer */
>>> +       iseg = vac_itemptr_binsrch(
>>> +               (void *) itemptr,
>>> +               (void *) &(vacrelstats->dead_tuples.dt_segments->last_dead_tuple),
>>> +               vacrelstats->dead_tuples.last_seg + 1,
>>> +               sizeof(DeadTuplesSegment));
>>> +
>>> +
>>>
>>> I think that we can change the above to;
>>>
>>> +       /* Search for the segment likely to contain the item pointer */
>>> +       iseg = vac_itemptr_binsrch(
>>> +               (void *) itemptr,
>>> +               (void *) &(seg->last_dead_tuple),
>>> +               vacrelstats->dead_tuples.last_seg + 1,
>>> +               sizeof(DeadTuplesSegment));
>>>
>>> We set "seg = vacrelstats->dead_tuples.dt_segments" just before this.
>>
>> Right
>
> In my mind, if you replace vacrelstats->dead_tuples.last_seg + 1 with
> GetNumDeadTuplesSegments(vacrelstats), it would be more meaningful.

It's not the same thing. On the first run they might match, but after a
reset of the multiarray the number of segments is the allocated size, while
last_seg is the last segment actually filled with data.
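
To make the distinction concrete, a rough sketch of what a reset does
(hypothetical code, not the patch itself; num_segs is just my shorthand for
whatever GetNumDeadTuplesSegments returns):

typedef struct DeadTuplesMultiArray
{
    int         num_segs;       /* segments currently allocated */
    int         last_seg;       /* last segment holding valid data */
    DeadTuplesSegment *dt_segments;
} DeadTuplesMultiArray;

static void
dead_tuples_reset(DeadTuplesMultiArray *arr)
{
    int         i;

    /* Keep the allocated segments around for reuse, just mark them empty. */
    for (i = 0; i <= arr->last_seg; i++)
        arr->dt_segments[i].num_dead_tuples = 0;

    arr->last_seg = 0;          /* back to the first segment, but num_segs
                                 * stays at the allocated count, so the two
                                 * values diverge from here on */
}

So after a reset, a search bounded by the allocated segment count could look
at segments that no longer hold valid data, which is why the lookup is
bounded by last_seg + 1 instead.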

> Besides, you can replace the vac_itemptr_binsrch within the segment with
> the stdlib bsearch, like:
>
>     res = (ItemPointer) bsearch((void *) itemptr,
>                                 (void *) seg->dt_tids,
>                                 seg->num_dead_tuples,
>                                 sizeof(ItemPointerData),
>                                 vac_cmp_itemptr);
>
>     return (res != NULL);

The stdlib's bsearch is considerably slower. The custom bsearch inlines the
comparison, which lets it factor quite a bit of logic out of the loop and,
in general, generate far more specialized assembly.

For the compiler to optimize the stdlib's bsearch call, whole-program
optimization would have to be enabled, which is unlikely. Even then, it may
not be able to, due to aliasing rules.

This is what I came up with to make the new approach's performance on par
with or better than the old one, in CPU cycles. In fact, benchmarks show
that time spent on the CPU is lower now, in large part due to this.

It's not the first custom binary search in postgres, either.
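
To show the kind of specialization I mean, here is a sketch of a strided
binary search with the ItemPointer comparison written inline (illustrative
only, not the patch's actual vac_itemptr_binsrch; the return convention
there may differ):

/* Assumes postgres.h and storage/itemptr.h for the ItemPointer accessors.
 * Searches an array whose elements are "stride" bytes apart and start with
 * an ItemPointerData key, returning the index of the first element whose
 * key is >= *refitem. */
static int
itemptr_binsrch_sketch(const ItemPointerData *refitem,
                       const char *base, int nelem, size_t stride)
{
    BlockNumber refblk = ItemPointerGetBlockNumber(refitem);
    OffsetNumber refoff = ItemPointerGetOffsetNumber(refitem);
    int         lo = 0,
                hi = nelem;

    while (lo < hi)
    {
        int         mid = lo + (hi - lo) / 2;
        const ItemPointerData *key =
            (const ItemPointerData *) (base + (size_t) mid * stride);
        BlockNumber blk = ItemPointerGetBlockNumber(key);

        if (blk < refblk ||
            (blk == refblk && ItemPointerGetOffsetNumber(key) < refoff))
            lo = mid + 1;       /* key < *refitem: search upper half */
        else
            hi = mid;           /* key >= *refitem: search lower half */
    }
    return lo;
}

Because the comparison is not hidden behind a function pointer, the compiler
can hoist the block/offset extraction of the reference item out of the loop
and keep everything in registers, which is where most of the win over the
generic bsearch comes from.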


