On 22.08.2018 07:54, Yamaji, Ryo wrote:
> On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:
>
>> I have registered the patch for next commitfest.
>> For some reason it doesn't find the latest autoprepare-10.patch and still
>> refers to autoprepare-6.patch as the latest attachment.
> I am sorry for the long delay in my response.
> I have confirmed it and have become the reviewer of the patch.
> I am going to review it.
>
>
>> Right now each prepared statement has two memory contexts: one for the raw
>> parse tree used as the hash table key and another for the cached plan itself.
>> Maybe it will be possible to combine them. To calculate the memory consumed
>> by cached plans, it would be necessary to compute memory usage statistics
>> for all these contexts (which requires traversal of all of a context's chunks)
>> and sum them. That is much more complex and expensive than the current check
>> (++autoprepare_cached_plans > autoprepare_limit), although I do not think
>> it would have a measurable impact on performance...
>> Maybe there is some faster way to estimate the memory consumed by
>> prepared statements.
>>
>> So, the current autoprepare_limit allows limiting the number of autoprepared
>> statements and prevents memory overflow caused by execution of a large
>> number of different statements.
>> The question is whether we need a more precise mechanism that takes into
>> account the difference between small and large queries. A simple query can
>> certainly require 10-100 times less memory than a complex one. But memory
>> contexts themselves (even with a small block size) somewhat reduce the
>> difference in memory footprint between queries because of chunked
>> allocation.
> Thank you for explaining the implementation and its problems.
> In many cases, when actually designing a system, we estimate the amount of memory
> to use and provision memory accordingly. Therefore, as long as we can estimate
> memory usage under either scheme (limiting the amount of memory used by
> autoprepared queries, or limiting the number of prepared statements), I think
> there is no problem with the current implementation.
>
> Apart from the patch, if there were a tool that output an estimate of memory
> usage as a query is entered, autoprepare might be more comfortable to use.
> But that sounds difficult..
I have added an autoprepare_memory_limit parameter which allows limiting the
memory used by autoprepared statements rather than the number of such
statements.
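
For reference, here is a minimal sketch of how such a GUC could be declared
in guc.c (this is not the patch's actual code; the group, unit, descriptions
and bounds are my assumptions):

    /*
     * Hypothetical sketch of an entry added to guc.c's ConfigureNamesInt
     * array; the patch's real group, unit and bounds may differ.
     */
    {
        {"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
            gettext_noop("Limits the total memory used by autoprepared statements."),
            gettext_noop("Zero disables the memory-based limit."),
            GUC_UNIT_KB
        },
        &autoprepare_memory_limit,      /* the int variable backing the GUC */
        0, 0, INT_MAX,                  /* boot value, min, max */
        NULL, NULL, NULL                /* check/assign/show hooks */
    },

With GUC_UNIT_KB the limit can then be set in ordinary memory units, e.g.
SET autoprepare_memory_limit = '8MB'.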
If the value of this parameter is non-zero, then the total size of all memory
contexts used for autoprepared statements is maintained (it can be
inspected in a debugger via the autoprepare_used_memory variable).
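
To illustrate the bookkeeping, here is a minimal sketch under stated
assumptions: the firstchild/nextchild links of MemoryContextData are real,
but ContextTotalSpace() is a hypothetical stand-in for the per-context size
calculation (walking the context's chunks), and the eviction function is
likewise made up:

    /*
     * Sketch only.  ContextTotalSpace() is a hypothetical helper that
     * walks one context's blocks and returns their total size; the
     * recursion follows the real firstchild/nextchild links.
     */
    static Size
    SumContextSpace(MemoryContext context)
    {
        Size        total = ContextTotalSpace(context); /* hypothetical */
        MemoryContext child;

        for (child = context->firstchild; child != NULL; child = child->nextchild)
            total += SumContextSpace(child);

        return total;
    }

    /* After caching a new plan (entry->context is an assumed field): */
    autoprepare_used_memory += SumContextSpace(entry->context);

    /* A hypothetical eviction loop mirroring the count-based limit: */
    while (autoprepare_memory_limit > 0 &&
           autoprepare_used_memory > (Size) autoprepare_memory_limit * 1024)
        autoprepare_evict_lru_plan();   /* hypothetical; subtracts the evicted
                                         * plan's size from autoprepare_used_memory */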
As I already noted, calculating memory context sizes adds extra
overhead, but I did not notice any influence on performance. A new version
of the patch is attached.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company