> Another thing that came to mind is that we don't track the memory for
> the cache key. So that could account for some additional memory usage
> with Memoize. I have a patch locally to fix that. Likely that would be
> a master-only fix, however. I doubt that's accounting for much of the
> extra memory you're reporting anyway. In hindsight, we should be
> tracking that, but I think at the time I was writing this code, I had
> thoughts that it wasn't much memory compared to storing the cached
> tuples. I now think differently.
I've also attached have_memoize_track_cachekey_memory.patch to address this. I intend this one for master only. I considered whether the executor changes could be backpatched without the planner changes, but I don't think that's a good idea. It wouldn't cause plan stability problems, but it could change executor performance if we start evicting more cache entries due to memory pressure.
It seems that the memory used by the key is already accounted for in EMPTY_ENTRY_MEMORY_BYTES. I wonder if this change is needed.
Also, I'm somewhat confused about using MinimalTuple->t_len versus GetMemoryChunkSpace(MinimalTuple). Why do we choose t_len rather than GetMemoryChunkSpace in EMPTY_ENTRY_MEMORY_BYTES and CACHE_TUPLE_BYTES?