On Tue, 18 May 2021 at 01:31, Amit Langote <amitlangote09@gmail.com> wrote:
> Hmm, does this seem common enough for the added complexity to be worthwhile?
I'd also like to know if there's a genuine use case for this. Testing
purposes alone doesn't seem like quite a good enough reason.
A slightly different optimization that I have considered, and even
written patches for before, was to have ExecFindPartition() cache the
partition that the previous tuple was routed to and check whether the
new row can go into that same partition on the next call. I imagined
there might be a use case for speeding that up for RANGE partitioned
tables, since it seems fairly likely that many workloads, at least
time series ones, will hit the same partition most of the time. Since
RANGE routing requires a binary search, there could be some savings
there. I imagine that optimisation would never be useful for HASH
partitioning, since it seems most likely that we'll be routing to a
different partition each time, and there wouldn't be much to save
anyway since routing to hash partitions is cheaper than for the other
strategies. LIST partitioning I'm not so sure about. It seems much
less likely than RANGE to hit the same partition twice in a row.
IIRC, the patch did something like call ExecPartitionCheck() on the
new tuple using the ResultRelInfo of the previously routed-to
partition. I think the last used partition was cached somewhere like
the relcache (which seems a bit questionable). That would likely
speed up the example case here a bit. Not as much as the proposed
patch, but it would likely apply in many more cases.
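
Just to illustrate the idea (this is not the original patch and not
PostgreSQL code; all of the names below are made up for the example),
here's a minimal standalone sketch: remember which partition the
previous row went to, recheck that partition's bounds first, and only
fall back to the binary search when the cached partition doesn't
match. When consecutive rows land in the same partition, the recheck
is O(1) rather than the O(log n) search.

/*
 * Illustrative sketch only -- not the original patch and not
 * PostgreSQL internals.  Simulates routing keys into RANGE
 * partitions defined by their upper bounds, rechecking the most
 * recently used partition before falling back to the binary search.
 */
#include <stdio.h>

#define NPARTS 8

/* Upper bounds (exclusive) of each RANGE partition; the lower bound
 * of partition i is bounds[i - 1], or 0 for the first partition. */
static const int bounds[NPARTS] = {100, 200, 300, 400, 500, 600, 700, 800};

/* Cached index of the partition the previous row was routed to. */
static int last_part = -1;

/* Cheap recheck, analogous in spirit to ExecPartitionCheck(): does
 * the key still fall inside the cached partition's bounds? */
static int
key_in_partition(int key, int part)
{
    int lower = (part == 0) ? 0 : bounds[part - 1];

    return key >= lower && key < bounds[part];
}

/* Binary search over the partition upper bounds, standing in for the
 * usual RANGE partition lookup. */
static int
binary_search_partition(int key)
{
    int lo = 0, hi = NPARTS - 1;

    while (lo < hi)
    {
        int mid = (lo + hi) / 2;

        if (key < bounds[mid])
            hi = mid;
        else
            lo = mid + 1;
    }
    return lo;
}

/* Route one key: try the cached partition first, else do the search. */
static int
route_tuple(int key)
{
    if (last_part >= 0 && key_in_partition(key, last_part))
        return last_part;   /* fast path: same partition as last time */

    last_part = binary_search_partition(key);
    return last_part;
}

int
main(void)
{
    /* Time-series-like keys mostly land in the same partition. */
    int keys[] = {110, 115, 120, 350, 355, 130};
    int i;

    for (i = 0; i < 6; i++)
        printf("key %d -> partition %d\n", keys[i], route_tuple(keys[i]));

    return 0;
}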
I don't think I ever posted the patch to the list, and in any case I
no longer have access to it, so it would need to be written again.
David