Re: Vacuum: allow usage of more than 1GB of work mem - Mailing list pgsql-hackers

From Claudio Freire
Subject Re: Vacuum: allow usage of more than 1GB of work mem
Date
Msg-id CAGTBQpY9eOdukxZjQwpuf0A1hbumZ6MKFhBW1gvSo4pjVYyKGQ@mail.gmail.com
In response to Re: Vacuum: allow usage of more than 1GB of work mem  (Greg Stark <stark@mit.edu>)
Responses Re: Vacuum: allow usage of more than 1GB of work mem  (Pavan Deolasee <pavan.deolasee@gmail.com>)
List pgsql-hackers
On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark <stark@mit.edu> wrote:
> On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 6 September 2016 at 19:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>>> The idea of looking to the stats to *guess* about how many tuples are
>>> removable doesn't seem bad at all.  But imagining that that's going to be
>>> exact is folly of the first magnitude.
>>
>> Yes.  Bear in mind I had already referred to allowing +10% to be safe,
>> so I think we agree that a reasonably accurate, yet imprecise
>> calculation is possible in most cases.
>
> That would all be well and good if it weren't trivial to do what
> Robert suggested. This is just a large unsorted list that we need to
> iterate through. Just allocate chunks of a few megabytes and when
> it's full allocate a new chunk and keep going. There's no need to get
> tricky with estimates and resizing and whatever.

I agree. While the idea of estimating the right size sounds promising
a priori, the estimate can go wrong and over- or under-allocate quite
severely, so the risks outweigh the benefits once you consider the
alternative of a dynamic allocation strategy.

Unless the dynamic strategy has a bigger CPU impact than expected, I
believe it's a superior approach.
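
To make the chunked approach concrete, here's a minimal sketch in C.
Everything in it is hypothetical (TidChunk, TidList, tidlist_append,
the 4 MB chunk size, and using uint64_t instead of ItemPointerData to
keep it self-contained), not code from any actual patch:

#include <stdlib.h>
#include <stdint.h>

#define CHUNK_BYTES (4 * 1024 * 1024)   /* "a few megabytes" per chunk */

typedef struct TidChunk
{
    struct TidChunk *next;      /* next chunk in the list */
    size_t           used;      /* entries filled so far */
    uint64_t         tids[];    /* encoded TIDs; real code would use
                                 * ItemPointerData */
} TidChunk;

#define CHUNK_CAPACITY \
    ((CHUNK_BYTES - sizeof(TidChunk)) / sizeof(uint64_t))

typedef struct
{
    TidChunk *head;             /* first chunk, for iteration */
    TidChunk *tail;             /* chunk currently being filled */
} TidList;

/* Allocate one fixed-size chunk. */
static TidChunk *
chunk_alloc(void)
{
    TidChunk *c = malloc(CHUNK_BYTES);

    if (c == NULL)
        abort();                /* real code would ereport() */
    c->next = NULL;
    c->used = 0;
    return c;
}

/* Append a TID, grabbing a fresh chunk only when the current one is full. */
static void
tidlist_append(TidList *list, uint64_t tid)
{
    if (list->tail == NULL || list->tail->used == CHUNK_CAPACITY)
    {
        TidChunk *c = chunk_alloc();

        if (list->tail)
            list->tail->next = c;
        else
            list->head = c;
        list->tail = c;
    }
    list->tail->tids[list->tail->used++] = tid;
}

The index-cleanup pass then just walks the chunk list from head to
tail. Memory grows in fixed-size steps as dead tuples are found, so
there is nothing to estimate up front and nothing to resize or copy.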


