Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From Claudio Freire
Subject Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Date
Msg-id CAGTBQpaQ1r+jo3akdSQ6vdt8aeO6oBn37KGVRjQ4KPBLqGuV3w@mail.gmail.com
In response to Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem  (Masahiko Sawada <sawada.mshk@gmail.com>)
List pgsql-hackers
On Fri, Apr 7, 2017 at 7:43 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
>>> + * Lookup in that structure proceeds sequentially in the list of segments,
>>> + * and with a binary search within each segment. Since segment's size grows
>>> + * exponentially, this retains O(N log N) lookup complexity.
>>
>> N log N is a horrible lookup complexity.  That's the complexity of
>> *sorting* an entire array.  I think you might be trying to argue that
>> it's log(N) * log(N)? Once log(n) for the exponentially growing size of
>> segments, one for the binary search?
>>
>> Afaics you could quite easily make it O(2 log(N)) by simply also doing
>> binary search over the segments.  Might not be worth it due to the small
>> constant involved normally.
>
> It's a typo, yes, I meant O(log N) (which is equivalent to O(2 log N))


To clarify, lookup over the segments is linear, so it's O(M) with M
the number of segments, and then the binary search within a segment is
O(log N) with N the number of dead tuples.

So lookup is O(M + log N), but M <= log N because of the segments'
exponential growth, therefore the lookup is O(2 log N) = O(log N).


