Re: Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: Vacuum: allow usage of more than 1GB of work mem
Date
Msg-id CANP8+jJ4M95Mwm3xnFSJYLCV-5Wp2H8322MJoiCh3fCrA-oW_w@mail.gmail.com
In response to Re: Vacuum: allow usage of more than 1GB of work mem  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Vacuum: allow usage of more than 1GB of work mem  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 6 September 2016 at 19:09, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Sep 6, 2016 at 2:06 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 6 September 2016 at 19:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> Yeah, but I've seen actual breakage from exactly this issue on
>>>> customer systems even with the 1GB limit, and when we start allowing
>>>> 100GB it's going to get a whole lot worse.
>>>
>>> While it's not necessarily a bad idea to consider these things,
>>> I think people are greatly overestimating the consequences of the
>>> patch-as-proposed.  AFAICS, it does *not* let you tell VACUUM to
>>> eat 100GB of workspace.  Note the line right in front of the one
>>> being changed:
>>>
>>>          maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
>>>          maxtuples = Min(maxtuples, INT_MAX);
>>> -        maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
>>> +        maxtuples = Min(maxtuples, MaxAllocHugeSize / sizeof(ItemPointerData));
>>>
>>> Regardless of what vac_work_mem is, we aren't gonna let you have more
>>> than INT_MAX ItemPointers, hence 12GB at the most.  So the worst-case
>>> increase from the patch as given is 12X.  Maybe that's enough to cause
>>> bad consequences on some systems, but it's not the sort of disaster
>>> Robert posits above.
>>
>> Is there a reason we can't use repalloc here?
>
> There are two possible problems, neither of which is necessarily fatal:
>
> 1. I expect repalloc probably works by allocating the new space,
> copying from old to new, and freeing the old.  That could work out
> badly if we are near the edge of the system's allocation limit.
>
> 2. It's slower than the approach proposed upthread of allocating the
> array in segments.  With that approach, we never need to memcpy()
> anything.
>
> On the plus side, it's probably less code.

Hmm, OK.

What occurs to me is that we can predict exactly how many tuples we
are going to get when we autovacuum, since we measure that and know
the number at the moment we trigger it.

So there doesn't need to be any guessing going on at all, nor do we
need it to be flexible.

My proposal now is to pass in the number of rows changed since last
vacuum and use that (+10% to be safe) as the size of the array, up to
the defined limit.

Manual VACUUM still needs to guess, so we might need a flexible
solution there, but generally we don't. We could probably estimate it
from the visibility map.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


