On 26.04.2017 04:00, Tsunakawa, Takayuki wrote:
> Are you considering some upper limit on the number of prepared statements? In this case we need some kind of LRU for maintaining the cache of autoprepared statements.

I think it is a good idea to have such a limited cache: it can avoid the memory overflow problem. I will try to implement it.
I attach a new patch which allows limiting the number of autoprepared statements (the autoprepare_limit GUC variable). I also did more measurements, this time with several concurrent connections and read-only statements. Results of pgbench with 10 connections, scale 10, and read-only statements are below:
Protocol             TPS
------------------   ----
extended             87k
prepared             209k
simple+autoprepare   206k
As you can see, autoprepare provides a more than 2x speed improvement.
I also tried to measure the overhead of parsing (needed to be able to substitute all literals, not only string literals). I just added an extra call of pg_parse_query. Speed is reduced to 181k TPS, so the overhead is noticeable, but the optimization is still useful. This is why I want to ask: is it better to implement the slower but safer and more universal solution?
An unsafe solution makes no sense, and it is dangerous (80% of database users do not have the necessary knowledge). If somebody needs the maximum possible performance, he will use explicit prepared statements.