Thread: Bug in huge simplehash

Bug in huge simplehash

From: Yura Sokolov
Good day, hackers.

Our client caught a process stuck in tuplehash_grow. There was a query like
`select ts, count(*) from really_huge_partitioned_table group by ts`,
and the planner decided to use hash aggregation.

The backtrace shows that oldsize was 2147483648 at that moment. newsize
was optimized out, but it looks like it was SH_MAX_SIZE.

#0  0x0000000000603d0c in tuplehash_grow (tb=0x7f18c3c284c8, 
newsize=<optimized out>) at ../../../src/include/lib/simplehash.h:457
         hash = 2147483654
         startelem = 1
         curelem = 1
         oldentry = 0x7f00c299e0d8
         oldsize = 2147483648
         olddata = 0x7f00c299e048
         newdata = 0x32e0448
         i = 6
         copyelem = 6

EXPLAIN shows that there are 2604186278 rows in all partitions, but the
planner thinks there will be only 200 unique rows after the group by.
Looks like that estimate is mistaken.

  Finalize GroupAggregate  (cost=154211885.42..154211936.09 rows=200 width=16)
     Group Key: really_huge_partitioned_table.ts
     ->  Gather Merge  (cost=154211885.42..154211932.09 rows=400 width=16)
           Workers Planned: 2
           ->  Sort  (cost=154210885.39..154210885.89 rows=200 width=16)
                 Sort Key: really_huge_partitioned_table.ts
                 ->  Partial HashAggregate  (cost=154210875.75..154210877.75 rows=200 width=16)
                       Group Key: really_huge_partitioned_table.ts
                       ->  Append  (cost=0.43..141189944.36 rows=2604186278 width=8)
                             ->  Parallel Index Only Scan using really_huge_partitioned_table_001_idx2 on really_huge_partitioned_table_001  (cost=0.43..108117.92 rows=2236977 width=8)
                             ->  Parallel Index Only Scan using really_huge_partitioned_table_002_idx2 on really_huge_partitioned_table_002  (cost=0.43..114928.19 rows=2377989 width=8)
                             .... and more than 400 partitions more

After some investigation I found a bug that has been present in simplehash
from its very beginning:

- sizemask is set only in SH_COMPUTE_PARAMETERS, and it is set this way:

         /* now set size */
         tb->size = size;

         if (tb->size == SH_MAX_SIZE)
                 tb->sizemask = 0;
         else
                 tb->sizemask = tb->size - 1;

   that means that when we resize to SH_MAX_SIZE, sizemask becomes zero.

- then sizemask is used in SH_INITIAL_BUCKET and SH_NEXT to compute the
   initial and next positions:

   SH_INITIAL_BUCKET(SH_TYPE * tb, uint32 hash)
     return hash & tb->sizemask;
   SH_NEXT(SH_TYPE * tb, uint32 curelem, uint32 startelem)
     curelem = (curelem + 1) & tb->sizemask;

- and then SH_GROW gets stuck in the element-placing loop (a sketch of the
   failure follows below):

   startelem = SH_INITIAL_BUCKET(tb, hash);
   curelem = startelem;
   while (true)
     curelem = SH_NEXT(tb, curelem, startelem);

There is an Assert(curelem != startelem) in SH_NEXT, but since no one had
tested it with 2 billion elements, it was never triggered. And Asserts are
not compiled into production code.

The attached patch fixes it by removing the condition and adding a cast:

         /* now set size */
         tb->size = size;
         tb->sizemask = (uint32)(size - 1);
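
As a quick sanity check (a standalone sketch, assuming SH_MAX_SIZE is
((uint64) PG_UINT32_MAX) + 1 as in simplehash.h), the cast gives the same
mask as before for ordinary sizes and all bits set at the maximum:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t size = 1024;                       /* ordinary power-of-two size */
        assert((uint32_t)(size - 1) == 1023);       /* mask unchanged */

        size = ((uint64_t) UINT32_MAX) + 1;         /* SH_MAX_SIZE, i.e. 2^32 */
        assert((uint32_t)(size - 1) == UINT32_MAX); /* all bits set instead of 0 */
        return 0;
    }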


OOOOOOPS

While writing this letter, I looked at newdata in the tuplehash_grow frame:

     newdata = 0x32e0448

It is below the 4GB boundary. The allocator does not allocate many-gigabyte
chunks (and we certainly need 96GB in this case) in the sub-4GB address
space, because mmap doesn't do that.

I went to check SH_GROW and.... It is `SH_GROW(SH_TYPE *tb, uint32 newsize)`
:-(((
Therefore when `tb->size == SH_MAX_SIZE/2` and we call
`SH_GROW(tb, tb->size * 2)`,
then SH_GROW(tb, 0) is called due to truncation.
And SH_COMPUTE_PARAMETERS also accepts `uint32 newsize`.
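
Here is a minimal standalone sketch of that truncation (the hypothetical
grow() stands in for SH_GROW):

    #include <stdint.h>
    #include <stdio.h>

    static void grow(uint32_t newsize)      /* the old prototype took uint32 */
    {
        printf("grow called with newsize = %u\n", newsize);
    }

    int main(void)
    {
        uint64_t size = UINT64_C(1) << 31;  /* SH_MAX_SIZE / 2, the oldsize above */

        grow(size * 2);                     /* 2^32 is silently truncated to 0 */
        return 0;
    }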

Ahh... ok, the patch is updated to fix this as well.


regards.

-----

Yura Sokolov
y.sokolov@postgrespro.ru
funny.falcon@gmail.com

Re: Bug in huge simplehash

From: David Rowley
On Tue, 10 Aug 2021 at 20:53, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:
> EXPLAIN shows that there are 2604186278 rows in all partitions, but the
> planner thinks there will be only 200 unique rows after the group by.
> Looks like that estimate is mistaken.

This looks unrelated.  Looks like the planner used DEFAULT_NUM_DISTINCT.

>          /* now set size */
>          tb->size = size;
>
>          if (tb->size == SH_MAX_SIZE)
>                  tb->sizemask = 0;
>          else
>                  tb->sizemask = tb->size - 1;

Ouch.  That's not great.

>          /* now set size */
>          tb->size = size;
>          tb->sizemask = (uint32)(size - 1);

That fix seems fine.

> I went to check SH_GROW and.... It is `SH_GROW(SH_TYPE *tb, uint32
> newsize)`
> :-(((
> Therefore when `tb->size == SH_MAX_SIZE/2` and we call `SH_GROW(tb,
> tb->size * 2)`,
> then SH_GROW(tb, 0) is called due to truncation.
> And SH_COMPUTE_PARAMETERS also accepts `uint32 newsize`.

Yeah. Agreed. I don't see anything wrong with your fix for that.

I'm surprised nobody has hit this before. I guess having that many
groups is not common.

Annoyingly this just missed the window for being fixed in the minor
releases going out soon. We'll need to wait a few days before
patching.

David



Re: Bug in huge simplehash

From: Ranier Vilela
On Tue, 10 Aug 2021 at 05:53, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:
> I went to check SH_GROW and.... It is `SH_GROW(SH_TYPE *tb, uint32
> newsize)`
> :-(((
> Therefore when `tb->size == SH_MAX_SIZE/2` and we call `SH_GROW(tb,
> tb->size * 2)`,
> then SH_GROW(tb, 0) is called due to truncation.
> And SH_COMPUTE_PARAMETERS also accepts `uint32 newsize`.
> 
> Ahh... ok, the patch is updated to fix this as well.

It seems that we need to fix the function prototype too.

/* void <prefix>_grow(<prefix>_hash *tb) */
-SH_SCOPE void SH_GROW(SH_TYPE * tb, uint32 newsize);
+SH_SCOPE void SH_GROW(SH_TYPE * tb, uint64 newsize);
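
The internal call site that overflows is the growth check in SH_INSERT
(quoted from memory, so the exact surrounding code may differ slightly):

    /* tb->size is uint64, so tb->size * 2 is 2^32 here, which the old
     * uint32 parameter silently truncated to 0 */
    SH_GROW(tb, tb->size * 2);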

regards,
Ranier Vilela

Re: Bug in huge simplehash

From: Yura Sokolov
Ranier Vilela wrote on 2021-08-10 14:21:
> On Tue, 10 Aug 2021 at 05:53, Yura Sokolov
> <y.sokolov@postgrespro.ru> wrote:
> 
>> 
>> I went to check SH_GROW and.... It is `SH_GROW(SH_TYPE *tb, uint32
>> newsize)`
>> :-(((
>> Therefore when `tb->size == SH_MAX_SIZE/2` and we call `SH_GROW(tb,
>> tb->size * 2)`,
>> then SH_GROW(tb, 0) is called due to truncation.
>> And SH_COMPUTE_PARAMETERS also accepts `uint32 newsize`.
>> 
>> Ahh... ok, the patch is updated to fix this as well.
> 
> It seems that we need to fix the function prototype too.
> 
> /* void <prefix>_grow(<prefix>_hash *tb) */
> -SH_SCOPE void SH_GROW(SH_TYPE * tb, uint32 newsize);
> +SH_SCOPE void SH_GROW(SH_TYPE * tb, uint64 newsize);

Ahh... Thank you, Ranier.
Attached v2.

regards,

-----

Yura Sokolov

Re: Bug in huge simplehash

From: David Rowley
On Wed, 11 Aug 2021 at 00:10, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:
> Attached v2.

Eyeballing this it looks fine, but I was a little nervous about
backpatching it without testing properly, so I ended up provisioning a
machine with 256GB and doing a round of testing.

I just created the simplest table I could:

create table a (a bigserial, b int);

and inserted 2^31 rows:

insert into a (b) values(1);
insert into a (b) select b from a; -- repeated until I got 2^31 rows.

set work_mem = '256GB';
set max_parallel_workers_per_gather = 0;

I could recreate the issue described with the following query:

explain (analyze , timing off) select a from a group by a;

After watching perf top for a while, the profile switched to:

  98.90%  postgres            [.] tuplehash_grow
   0.36%  [kernel]            [k] change_p4d_range
   0.24%  postgres            [.] LookupTupleHashEntry
   0.09%  postgres            [.] tts_minimal_store_tuple
   0.07%  [kernel]            [k] vm_normal_page
   0.02%  [kernel]            [k] __softirqentry_text_start
   0.02%  postgres            [.] heap_fill_tuple
   0.02%  postgres            [.] AllocSetAlloc

After patching I got:

explain (analyze , timing off) select a from a group by a;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=35149810.71..53983243.28 rows=1883343257 width=8) (actual rows=2147483648 loops=1)
   Group Key: a
   Batches: 1  Memory Usage: 201334801kB
   ->  Seq Scan on a  (cost=0.00..30441452.57 rows=1883343257 width=8) (actual rows=2147483648 loops=1)
 Planning Time: 0.105 ms
 Execution Time: 2173480.905 ms
(6 rows)
Time: 2173482.166 ms (36:13.482)

And, since I only had 256GB of memory on this machine and couldn't
really do 2^32 groups, I dropped SH_FILLFACTOR to 0.4 and
SH_MAX_FILLFACTOR to 0.48 and tried again to ensure I got the hash
table full message:

postgres=# explain (analyze on , timing off) select a from a group by a;
ERROR:  hash table size exceeded
Time: 1148554.672 ms (19:08.555)

After doing that, I felt a bit better about back-patching it, so I did.

Thanks for the patch.

David



Re: Bug in huge simplehash

From: Andres Freund
Hi,

On 2021-08-10 11:52:59 +0300, Yura Sokolov wrote:
> - sizemask is set only in SH_COMPUTE_PARAMETERS, and it is set this way:
> 
>         /* now set size */
>         tb->size = size;
> 
>         if (tb->size == SH_MAX_SIZE)
>                 tb->sizemask = 0;
>         else
>                 tb->sizemask = tb->size - 1;
> 
>   that means that when we resize to SH_MAX_SIZE, sizemask becomes zero.

I think that was intended to be ~0.


> Ahh... ok, patch is updated to fix this as well.

Any chance you'd write a test for simplehash with such huge amount of
values? It'd require a small bit of trickery to be practical. On systems
with MAP_NORESERVE it should be feasible.
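
The trickery could look roughly like this (a Linux-specific standalone
sketch with hypothetical sizes, not proposed test code): reserve address
space for the worst-case table without committing memory up front:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* address space for 2^32 buckets of 8 bytes each */
        size_t bytes = (size_t) 1 << 35;

        void *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED)
        {
            perror("mmap");
            return 1;
        }
        /* anonymous pages are materialized lazily, so only buckets the
         * test actually touches consume real memory */
        return 0;
    }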


>  static inline void
> -SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 newsize)
> +SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint64 newsize)
>  {
>      uint64        size;
>  
> @@ -322,11 +322,7 @@ SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 newsize)
>  
>      /* now set size */
>      tb->size = size;
> -
> -    if (tb->size == SH_MAX_SIZE)
> -        tb->sizemask = 0;
> -    else
> -        tb->sizemask = tb->size - 1;
> +    tb->sizemask = (uint32)(size - 1);

ISTM using ~0 would be nicer here?

Greetings,

Andres Freund



Re: Bug in huge simplehash

From: Yura Sokolov
Andres Freund wrote on 2021-08-13 12:21:
> Hi,
> 
> On 2021-08-10 11:52:59 +0300, Yura Sokolov wrote:
>> - sizemask is set only in SH_COMPUTE_PARAMETERS, and it is set this
>> way:
>> 
>>         /* now set size */
>>         tb->size = size;
>> 
>>         if (tb->size == SH_MAX_SIZE)
>>                 tb->sizemask = 0;
>>         else
>>                 tb->sizemask = tb->size - 1;
>> 
>>   that means that when we resize to SH_MAX_SIZE, sizemask becomes
>> zero.
> 
> I think that was intended to be ~0.

I believe so.

>> Ahh... ok, patch is updated to fix this as well.
> 
> Any chance you'd write a test for simplehash with such huge amount of
> values? It'd require a small bit of trickery to be practical. On 
> systems
> with MAP_NORESERVE it should be feasible.

How should C structures be tested in postgres?
dynahash/simplehash: do they have any tests currently?
I'll do it if a hint is given.

>>  static inline void
>> -SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 newsize)
>> +SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint64 newsize)
>>  {
>>      uint64        size;
>> 
>> @@ -322,11 +322,7 @@ SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 
>> newsize)
>> 
>>      /* now set size */
>>      tb->size = size;
>> -
>> -    if (tb->size == SH_MAX_SIZE)
>> -        tb->sizemask = 0;
>> -    else
>> -        tb->sizemask = tb->size - 1;
>> +    tb->sizemask = (uint32)(size - 1);
> 
> ISTM using ~0 would be nicer here?

I don't think so.
To be strict it should be `~(uint32)0`.
But I believe it doesn't differ from the `tb->sizemask = (uint32)(size - 1)`
that lands with the patch, so why is the `if` needed?

> 
> Greetings,
> 
> Andres Freund



Re: Bug in huge simplehash

From: Andres Freund
Hi,

On 2021-08-13 12:44:17 +0300, Yura Sokolov wrote:
> Andres Freund wrote on 2021-08-13 12:21:
> > Any chance you'd write a test for simplehash with such huge amount of
> > values? It'd require a small bit of trickery to be practical. On systems
> > with MAP_NORESERVE it should be feasible.
> 
> How should C structures be tested in postgres?
> dynahash/simplehash: do they have any tests currently?
> I'll do it if a hint is given.

We don't have a great way right now :(. I think the best is to have a
SQL callable function that uses some API. See e.g. test_atomic_ops() et
al in src/test/regress/regress.c
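
A skeleton of that approach might look like this (hypothetical function
name, following the test_atomic_ops() pattern; only the scaffolding, not
working test code):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(test_simplehash_grow);

    Datum
    test_simplehash_grow(PG_FUNCTION_ARGS)
    {
        /* build a simplehash table here and insert enough entries to
         * force growth past SH_MAX_SIZE / 2, elog(ERROR) on failure */
        PG_RETURN_VOID();
    }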


> > >  static inline void
> > > -SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32 newsize)
> > > +SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint64 newsize)
> > >  {
> > >      uint64        size;
> > > 
> > > @@ -322,11 +322,7 @@ SH_COMPUTE_PARAMETERS(SH_TYPE * tb, uint32
> > > newsize)
> > > 
> > >      /* now set size */
> > >      tb->size = size;
> > > -
> > > -    if (tb->size == SH_MAX_SIZE)
> > > -        tb->sizemask = 0;
> > > -    else
> > > -        tb->sizemask = tb->size - 1;
> > > +    tb->sizemask = (uint32)(size - 1);
> > 
> > ISTM using ~0 would be nicer here?
> 
> I don't think so.
> To be strict it should be `~(uint32)0`.
> But I believe it doesn't differ from the `tb->sizemask = (uint32)(size - 1)`
> that lands with the patch, so why is the `if` needed?

Personally I find it more obvious to understand the intended behaviour
with ~0 (i.e. all bits set) than with a width truncation.
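
For illustration, the two spellings produce the same mask at the maximum
(a standalone sketch, assuming SH_MAX_SIZE is 2^32):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t size = ((uint64_t) UINT32_MAX) + 1;    /* SH_MAX_SIZE */

        uint32_t truncated = (uint32_t)(size - 1);      /* the landed spelling */
        uint32_t allbits   = ~(uint32_t) 0;             /* all-bits-set spelling */

        assert(truncated == allbits);                   /* identical result */
        return 0;
    }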

Greetings,

Andres Freund



Re: Bug in huge simplehash

From: Ranier Vilela
On Fri, 13 Aug 2021 at 07:15, Andres Freund <andres@anarazel.de> wrote:
> Personally I find it more obvious to understand the intended behaviour
> with ~0 (i.e. all bits set) than with a width truncation.

https://godbolt.org/z/57WcjKqMj
The generated code is identical.

regards,
Ranier Vilela

Re: Bug in huge simplehash

From: Yura Sokolov
Ranier Vilela wrote on 2021-08-13 14:12:
>> Personally I find it more obvious to understand the intended
>> behaviour with ~0 (i.e. all bits set) than with a width truncation.
> 
> https://godbolt.org/z/57WcjKqMj
> The generated code is identical.

I believe you mean https://godbolt.org/z/qWzE1ePTE

> regards,
> Ranier Vilela



Re: Bug in huge simplehash

From: Andres Freund
On 2021-08-13 14:40:08 +0300, Yura Sokolov wrote:
> Ranier Vilela wrote on 2021-08-13 14:12:
> > On Fri, 13 Aug 2021 at 07:15, Andres Freund
> > <andres@anarazel.de> wrote:
> > > Personally I find it more obvious to understand the intended
> > > behaviour
> > > with ~0 (i.e. all bits set) than with a width truncation.
> > 
> > https://godbolt.org/z/57WcjKqMj
> > The generated code is identical.
> 
> I believe you mean https://godbolt.org/z/qWzE1ePTE

I don't think the choice of instructions matters. This is called around
creation and resizing - the other costs are several orders of magnitude
more expensive than determining the mask.