Hi Lepikhov,
Thanks for using my patch and I am glad that you found it useful.
On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei
<a.lepikhov@postgrespro.ru> wrote:
>
> Hi, hackers,
>
> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory
> consumption by the optimizer significantly. See init.sql in attachment.
> The point here is that the planner does small memory allocations for each element during estimation. As a result, it
> looks like the planner consumes about 250 bytes for each integer element.
I guess the numbers you mentioned in init.sql are the total memory used
by the planner (as reported by the patch in that thread) when planning
that query, and not the memory consumed by the Const nodes themselves.
Am I right? I think the measurements need to be explained better, along
with the realistic scenario you are trying to optimize.
I guess the reason you think that partitioning will increase the memory
consumed is that the clause is translated for each partition, so
selectivity estimation for each partition creates that many Const nodes
and hence consumes memory. Am I right? Can you please measure the
memory consumed with and without your patch?
>
> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each
> partition. Such a corner case looks weird, but the fix is simple. So, why not?
With vectorized operations becoming the norm these days, it's possible
to have thousands of elements in the array of an ANY or IN clause. It
will also be common to have thousands of partitions. But I think what
we need to do here is write a selectivity estimation function which
takes a Const array and returns a selectivity without requiring a
Const node to be created for each element.
>
> The diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I
> didn't find any overhead.
>
You might want to include your benchmarking results as well.
--
Best Wishes,
Ashutosh Bapat