On Wed, 2020-03-25 at 12:42 -0700, Andres Freund wrote:
> As mentioned on im/call: I think this is mainly an argument for
> combining the group tuple & per-group state allocations. I'm kind of
> fine with afterwards taking the allocator overhead into account.
The overhead comes from two places: (a) the chunk header; and (b) the
round-up-to-nearest-power-of-two behavior.
Combining the group tuple and per-group states only saves the overhead
from (a); it does nothing to help (b), which is often bigger. And it
only saves that overhead when there *is* a per-group state (i.e. not
for a DISTINCT query).
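
To make the arithmetic concrete, here's a rough standalone sketch (not
the actual aset.c code; the constants and the helper name are invented
for illustration, and requests past the allocator's chunk limit get
their own block and skip the rounding) of how a request turns into real
space, and why combining the two allocations only removes one header:

#include <stddef.h>

#define CHUNK_HDR_SZ   16   /* (a) per-chunk header; size assumed here */
#define MIN_CHUNK_SZ    8   /* smallest chunk the allocator hands out */

static size_t
estimate_chunk_space(size_t request)
{
    size_t      chunk = MIN_CHUNK_SZ;

    /* (b) round the request up to the next power of two */
    while (chunk < request)
        chunk <<= 1;

    return CHUNK_HDR_SZ + chunk;
}

/*
 * e.g. a 40-byte group tuple plus a 24-byte per-group state:
 *   separate:  (16 + 64) + (16 + 32) = 128 bytes
 *   combined:   16 + 64             =  80 bytes
 * Combining removes one header (a), but the round-up (b) remains.
 */
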
> I still don't buy that it's useful to estimate the by-ref transition
> value overhead. We just don't have anything even close enough to a
> meaningful value to base this on.
By-ref transition values aren't a primary motivation for me. I'm fine
leaving that discussion separate if that's a sticking point. But if we
do have a way to measure chunk overhead, I don't really see a reason
not to use it for by-ref as well.
> -1 to [AllocSet-specific] approach. I think it's architecturally the
> wrong direction to add more direct calls to functions within specific
> contexts.
OK.
> Yea, the "context needs to exist" part sucks. I really don't want to
> add calls directly into AllocSet from more places though. And just
> ignoring the parameters to the context seems wrong too.
So do you generally favor the EstimateMemoryChunkSpace() patch (the one
that works for all context types)? Or do you have another suggestion? Or
do you think we should just do nothing?
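
To illustrate what "works for all context types" means, here is a toy,
self-contained sketch of the dispatch shape such an API could take: the
caller asks the context, and each allocator (AllocSet, Slab, Generation,
...) answers using its own header size and rounding rules. The struct
and function names below are illustrative stand-ins, not necessarily
what the actual patch does.

#include <stddef.h>

/* stand-in for the per-context-type methods table */
typedef struct EstimatorMethods
{
    /* hypothetical callback: how much space would a request this size consume? */
    size_t      (*estimate_chunk_space) (size_t request);
} EstimatorMethods;

/* stand-in for a memory context header */
typedef struct EstimatorContext
{
    const EstimatorMethods *methods;
} EstimatorContext;

/* context-agnostic entry point: no direct calls into any specific allocator */
static size_t
EstimateChunkSpaceSketch(EstimatorContext *cxt, size_t request)
{
    return cxt->methods->estimate_chunk_space(request);
}

The point is that a caller like hash aggregation never needs to know
which allocator backs the context; an allocator with different rounding
behavior just supplies a different callback.
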
Regards,
Jeff Davis