On Mon, 11 Mar 2024 at 22:09, John Naylor <johncnaylorls@gmail.com> wrote:
> I ran the test function, but using 256kB and 3MB for the reset
> frequency, and with 8,16,24,32 byte sizes (patched against a commit
> after the recent hot/cold path separation). Images attached. I also
> get a decent speedup with the bump context, but not quite as dramatic
> as on your machine. It's worth noting that slab is the slowest for me.
> This is an Intel i7-10750H.

Thanks for trying this out. I didn't check how sensitive the
performance is to the amount of memory allocated before the reset. It
certainly would be once the allocations cross some critical threshold
of CPU cache size, but it's probably also affected to some extent by
the number of actual mallocs that are required underneath.

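For anyone following along, here's a minimal sketch (not PostgreSQL's
actual implementation; the names are hypothetical) of the idea behind a
bump context: allocations just advance a pointer within a block, with no
per-chunk header and no support for freeing individual chunks, so the
only way to reclaim memory is to reset the whole thing:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define BLOCK_SIZE (256 * 1024)     /* e.g. 256kB, as in the test above */

typedef struct BumpBlock
{
    char   *base;       /* start of the block */
    char   *freeptr;    /* next free byte */
    char   *endptr;     /* one past the end of the block */
} BumpBlock;

static void
bump_init(BumpBlock *blk, size_t size)
{
    blk->base = malloc(size);
    blk->freeptr = blk->base;
    blk->endptr = blk->base + size;
}

static void *
bump_alloc(BumpBlock *blk, size_t size)
{
    void   *ret;

    /* round the request up to maintain 8-byte alignment */
    size = (size + 7) & ~(size_t) 7;
    if (blk->freeptr + size > blk->endptr)
        return NULL;    /* a real allocator would grab a new block here */
    ret = blk->freeptr;
    blk->freeptr += size;
    return ret;
}

static void
bump_reset(BumpBlock *blk)
{
    /* "frees" everything at once; the one underlying malloc is kept */
    blk->freeptr = blk->base;
}
```

This is why the reset frequency matters in the benchmark: a reset is
just a pointer rewind, so the cost per allocation is dominated by how
much of the block stays hot in cache and how often a real malloc is
needed for fresh blocks.
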
I see there's some discussion of bump in [1]. Do you still have a
valid use case for bump for performance/memory usage reasons?

The reason I ask is due to what Tom mentioned in [2] ("It's not
apparent to me that the "bump context" idea is valuable enough to
foreclose ever adding more context types"). So, I'm just probing to
find other possible use cases that reinforce the usefulness of bump.

It would be interesting to try it in a few places to see what
performance gains could be had. I've not done much scouting around
the codebase for uses other than non-bounded tuplesorts.

David

[1] https://postgr.es/m/CANWCAZbxxhysYtrPYZ-wZbDtvRPWoeTe7RQM1g_+4CB8Z6KYSQ@mail.gmail.com
[2] https://postgr.es/m/3537323.1708125284@sss.pgh.pa.us