If the requested number of buckets is just over 2^30 (e.g. 2^30 + 1),
which can happen if work_mem is very high, then:

    nbuckets = 1 << my_log2(lnbuckets);

evaluates to 1 << 31, which is negative. On my machine that leads to a
SIGFPE, but I think it can also lead to an infinite loop or a crash
(after corrupting the HASHHDR).
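To make the overflow concrete, here is a minimal standalone sketch
(my_log2 is copied from its definition in dynahash.c; strictly
speaking 1 << 31 is undefined behavior in C, but on common platforms
with 32-bit int it wraps to INT_MIN):

    #include <stdio.h>

    /* ceil(log2(num)), as defined in dynahash.c */
    static int
    my_log2(long num)
    {
        int     i;
        long    limit;

        for (i = 0, limit = 1; limit < num; i++, limit <<= 1)
            ;
        return i;
    }

    int
    main(void)
    {
        long    lnbuckets = (1L << 30) + 1;     /* just over 2^30 */
        int     nbuckets;

        /* my_log2 returns 31 here, so this shifts into the sign bit */
        nbuckets = 1 << my_log2(lnbuckets);

        printf("nbuckets = %d\n", nbuckets);    /* typically -2147483648 */
        return 0;
    }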
The only simple way I can reproduce this is with gdb:
1. attach gdb to a session
2. set a breakpoint in ExecInitRecursiveUnion and continue
3. execute in session:

     set work_mem='100GB';
     with recursive r (i) as
       (select 1 union select i+1 from r where i < 10)
     select * from r;
4. (gdb) set node->numGroups = (1 << 30) + 1
5. (gdb) continue
I think we should just cap nbuckets at 1 << 30 in init_htab.
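Something along these lines, perhaps (an untested sketch; Min() is the
macro from c.h):

    /* sketch only: clamp the shift count so nbuckets stays positive */
    nbuckets = 1 << Min(my_log2(lnbuckets), 30);

That keeps nbuckets at or below 2^30, the largest power of two that
fits in a positive int.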
There was a previous fix here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=299d1716525c659f0e02840e31fbe4dea3
But that fix assumed init_htab could cope with a request as large as
INT_MAX. In practice, work_mem is usually the limiting factor anyway,
but not when it's set very high.
Regards,
Jeff Davis