On 2017-12-06 21:38:42 +0100, Tomas Vondra wrote:
> It's one thing when the hash table takes longer to lookup something or
> when it consumes a bit more memory. Say, ~2x more than needed, give or
> take. I'm perfectly fine with that, particularly when it's a worst-case
> evil data set like this one.

longer aka "forever".

I think the way to prevent that kind of attack is to add randomization.
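A minimal sketch of what such randomization could look like: mix a per-table random seed into the hash before bucket selection, so an attacker cannot precompute a colliding key set offline. The struct, seed field, and mixing function here are illustrative assumptions, not simplehash's actual code; the mixer is the well-known splitmix64 finalizer.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the seed is chosen randomly at table creation
 * time, so the same keys land in different buckets per table instance. */
typedef struct HashTable
{
	uint64_t	seed;		/* random, per-table */
	/* ... buckets, size, etc. elided ... */
} HashTable;

static uint64_t
mix64(uint64_t x)
{
	/* splitmix64 finalizer: invertible, so distinct inputs stay distinct */
	x ^= x >> 30;
	x *= 0xbf58476d1ce4e5b9ULL;
	x ^= x >> 27;
	x *= 0x94d049bb133111ebULL;
	x ^= x >> 31;
	return x;
}

static uint64_t
seeded_hash(const HashTable *tb, uint64_t key)
{
	/* XOR the seed in before mixing: a key set that collides for one
	 * seed almost certainly does not collide for another */
	return mix64(key ^ tb->seed);
}
```

Because the mixer is a bijection on 64-bit values, two tables with different seeds map the same key to different hash values, which is exactly what defeats a precomputed evil data set.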
> FWIW I've constructed the data sets for two reasons - to convince myself
> that my understanding of the simplehash code is correct, and to provide
> a data set triggering the other growth condition in simplehash code. My
> understanding is that if we stop growing the table after the load factor
> drops below some threshold (as TL proposed earlier in this thread), it
> should address both of these cases.
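The proposed stopgap could be sketched roughly like this: chain-triggered growth is honored only while the load factor is still above some floor, so adversarial keys cannot force unbounded doubling of a nearly-empty table. The function name and both thresholds are made up for illustration, not the actual simplehash constants.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GROW_FILLFACTOR		0.9	/* always grow above this load factor */
#define GROW_MIN_FILLFACTOR	0.1	/* never chain-grow below this floor */

static bool
should_grow(uint64_t members, uint64_t size, bool long_chain_seen)
{
	double		fill = (double) members / (double) size;

	if (fill >= GROW_FILLFACTOR)
		return true;		/* normal growth: table nearly full */

	/*
	 * Growth requested because of a long collision chain: honor it only
	 * while the table is reasonably full.  Once the load factor drops
	 * below the floor, stop growing and live with the longer chain.
	 */
	if (long_chain_seen && fill >= GROW_MIN_FILLFACTOR)
		return true;

	return false;
}
```

With a check like this, both constructed data sets hit a bounded worst case: the table may still waste some memory or probe longer chains, but it can no longer be driven to grow without limit.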
Yea, I'm not averse to adding a few stopgaps that break in a less
annoying manner. All I'm saying is that I don't think we need to be
super concerned about this specific way of breaking things.
Greetings,
Andres Freund