Thread: Re: 10.1: hash index size exploding on vacuum full analyze

Re: 10.1: hash index size exploding on vacuum full analyze

From: Teodor Sigaev <teodor@sigaev.ru>
Date: Tue, 26 Dec 2017 19:18:48 +0300
> Initially, I had also thought of doing it in swap_relation_files, but
> we don't have the stats values there.  We might be able to pass them,
> but I'm not sure there is any need for that.  As far as the TOAST
> table's case is concerned, I don't see a problem, because we copy the
> data row by row only for the heap, where the values of num_tuples and
> num_pages could differ.  See copy_heap_data.

OK, agreed. AP (sorry, I don't see your name), could you check that the 
patch fixes your issue?
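
For reference, here is a minimal way to exercise the reported symptom (a 
rough sketch only; the table and index names are made up and the row 
count is arbitrary):

    CREATE TABLE hashtest (k int);
    INSERT INTO hashtest SELECT generate_series(1, 1000000);
    CREATE INDEX hashtest_k_idx ON hashtest USING hash (k);

    -- note the index size before the rewrite...
    SELECT pg_size_pretty(pg_relation_size('hashtest_k_idx'));

    VACUUM FULL ANALYZE hashtest;

    -- ...and after: without the patch the rebuilt hash index can come
    -- out many times larger; with the patch it should stay comparable.
    SELECT pg_size_pretty(pg_relation_size('hashtest_k_idx'));

    -- The num_pages/num_tuples discussed above surface in pg_class as
    -- relpages/reltuples; the patch refreshes them during the rewrite,
    -- before the indexes are rebuilt.
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'hashtest';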

Nevertheless, I'm going to push this patch in any case and, I suppose, it 
should be backpatched to version 10 too, although the bug does not involve 
data loss or corruption. The patch looks rather straightforward and carries 
a low risk of introducing new bugs.
-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Re: 10.1: hash index size exploding on vacuum full analyze

From: Amit Kapila
On Tue, Dec 26, 2017 at 9:48 PM, Teodor Sigaev <teodor@sigaev.ru> wrote:
>> Initially, I had also thought of doing it in swap_relation_files, but
>> we don't have the stats values there.  We might be able to pass them,
>> but I'm not sure there is any need for that.  As far as the TOAST
>> table's case is concerned, I don't see a problem, because we copy the
>> data row by row only for the heap, where the values of num_tuples and
>> num_pages could differ.  See copy_heap_data.
>
>
> OK, agreed. AP (sorry, I don't see your name), could you check that the
> patch fixes your issue?
>
> Nevertheless, I'm going to push this patch in any case and, I suppose,
> it should be backpatched to version 10 too, although the bug does not
> involve data loss or corruption. The patch looks rather straightforward
> and carries a low risk of introducing new bugs.
>

Ideally, we could backpatch this to prior versions as well, but I think
users will mostly hit this problem from v10 onwards (hash indexes have
only seen serious use since v10, when they became crash-safe), so it
seems okay to backpatch only as far as 10.  If we see any other symptom
in prior branches in the future, we can always backpatch it further.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: 10.1: hash index size exploding on vacuum full analyze

From: AP
On Tue, Dec 26, 2017 at 07:18:48PM +0300, Teodor Sigaev wrote:
> > Initially, I had also thought of doing it in swap_relation_files, but
> > we don't have the stats values there.  We might be able to pass them,
> > but I'm not sure there is any need for that.  As far as the TOAST
> > table's case is concerned, I don't see a problem, because we copy the
> > data row by row only for the heap, where the values of num_tuples and
> > num_pages could differ.  See copy_heap_data.
> 
> OK, agreed. AP (sorry, I don't see your name), could you check that the
> patch fixes your issue?

I shall. I think (hope, grr) the last on-fire issue has been squashed, so 
things are looking good for me to poke at this next week.

Andrew.


Re: 10.1: hash index size exploding on vacuum full analyze

From: Teodor Sigaev <teodor@sigaev.ru>
Thank you, pushed.

Amit Kapila wrote:
> On Tue, Dec 26, 2017 at 9:48 PM, Teodor Sigaev <teodor@sigaev.ru> wrote:
>>> Initially, I had also thought of doing it in swap_relation_files, but
>>> we don't have the stats values there.  We might be able to pass them,
>>> but I'm not sure there is any need for that.  As far as the TOAST
>>> table's case is concerned, I don't see a problem, because we copy the
>>> data row by row only for the heap, where the values of num_tuples and
>>> num_pages could differ.  See copy_heap_data.
>>
>>
>> OK, agreed. AP (sorry, I don't see your name), could you check that
>> the patch fixes your issue?
>>
>> Nevertheless, I'm going to push this patch in any case and, I suppose,
>> it should be backpatched to version 10 too, although the bug does not
>> involve data loss or corruption. The patch looks rather straightforward
>> and carries a low risk of introducing new bugs.
>>
> 
> Ideally, we could backpatch this to prior versions as well, but I think
> users will mostly hit this problem from v10 onwards (hash indexes have
> only seen serious use since v10, when they became crash-safe), so it
> seems okay to backpatch only as far as 10.  If we see any other symptom
> in prior branches in the future, we can always backpatch it further.
> 
> 

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/