Hi,
On 2024-11-05 16:59:56 -0800, Jeff Davis wrote:
> Fixing it seems fairly easy though: we just need to completely destroy
> the hash table each time and recreate it. Something close to the
> attached patch (rough).
That'll often be *way* slower though, both because acquiring and faulting-in
memory is far from free and because it'd often mean growing the hashtable
from a small initial size all over again.
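
To illustrate, here's a minimal, self-contained sketch in plain C (this is
deliberately not PostgreSQL's simplehash or dynahash, and every name in it
is made up): the destroy-and-recreate pattern pays a fresh allocation,
first-touch page faults, and a whole ladder of doublings and rehashes from
the initial size on every round, whereas a reset just clears the
already-grown bucket array.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
	uint64_t   *keys;		/* 0 = empty slot; open addressing */
	size_t		nbuckets;	/* always a power of two */
	size_t		nentries;
} HashTab;

static void hash_insert(HashTab *ht, uint64_t key);

static HashTab *
hash_create(void)
{
	HashTab    *ht = malloc(sizeof(HashTab));

	ht->nbuckets = 16;
	ht->nentries = 0;
	ht->keys = calloc(ht->nbuckets, sizeof(uint64_t));	/* freshly faulted */
	return ht;
}

static void
hash_grow(HashTab *ht)
{
	uint64_t   *oldkeys = ht->keys;
	size_t		oldn = ht->nbuckets;

	ht->nbuckets *= 2;
	ht->nentries = 0;
	ht->keys = calloc(ht->nbuckets, sizeof(uint64_t));
	for (size_t i = 0; i < oldn; i++)	/* rehash every old entry */
	{
		if (oldkeys[i] != 0)
			hash_insert(ht, oldkeys[i]);
	}
	free(oldkeys);
}

static void
hash_insert(HashTab *ht, uint64_t key)
{
	if (ht->nentries * 10 >= ht->nbuckets * 9)	/* grow at ~0.9 fill */
		hash_grow(ht);
	for (size_t i = key & (ht->nbuckets - 1);; i = (i + 1) & (ht->nbuckets - 1))
	{
		if (ht->keys[i] == key)
			return;
		if (ht->keys[i] == 0)
		{
			ht->keys[i] = key;
			ht->nentries++;
			return;
		}
	}
}

/* Cheap path: keep the (possibly large) bucket array, just empty it. */
static void
hash_reset(HashTab *ht)
{
	memset(ht->keys, 0, ht->nbuckets * sizeof(uint64_t));
	ht->nentries = 0;
}

int
main(void)
{
	HashTab    *ht = hash_create();

	for (int batch = 0; batch < 1000; batch++)
	{
		for (uint64_t k = 1; k <= 10000; k++)
			hash_insert(ht, k * 2654435761u);

		hash_reset(ht);

		/*
		 * The destroy-and-recreate variant would instead free ht->keys and
		 * call hash_create() again here, paying a fresh allocation plus
		 * roughly ten doublings (16 -> 16384 buckets) on every batch.
		 */
	}
	free(ht->keys);
	free(ht);
	return 0;
}
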
I think this patch would lead to way bigger regressions than the occasionally
too-large hashtable does. I'm not saying that we shouldn't do something about
that, but I don't think it can be this.
Greetings,
Andres Freund