I was looking into using prepared statements with the auto plan_cache_mode. The issue is that the sample of data collected to determine whether to use custom plans or the generic one is very small (the first five executions) and is susceptible to a bad set of queries that picks a suboptimal choice.
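For context, here is a minimal way to watch that behaviour happen, assuming a hypothetical table t with an indexed, heavily skewed status column (all the names below are made up for illustration):

CREATE TABLE t (id bigint PRIMARY KEY, status text);
INSERT INTO t
SELECT g, CASE WHEN g % 1000 = 0 THEN 'rare' ELSE 'common' END
FROM generate_series(1, 1000000) g;
CREATE INDEX ON t (status);
ANALYZE t;

PREPARE q(text) AS SELECT count(*) FROM t WHERE status = $1;

-- the first five executions are planned against the actual parameter value
EXPLAIN EXECUTE q('rare');

-- after a few more executions, re-check: once the generic plan's estimated
-- cost looks competitive with the average custom-plan cost, the plan shown
-- references $1 instead of the literal, i.e. the generic plan has been picked
EXPLAIN EXECUTE q('rare');

If the first five parameter values happen to be unrepresentative, the averaged custom-plan cost that the generic plan gets compared against is skewed accordingly.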
It seems unreasonable to expect the live PostgreSQL instance to figure this out on the fly. A more useful approach would seem to be: use "always custom" for inexpensive plans over simple queries until you hit a point of pain, then test whether switching to "always generic" gives a better outcome for the specific situation you encountered.
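In terms of the existing knobs, that workflow is just the two non-auto values of plan_cache_mode, scoped however narrowly you can manage (the role name below is an assumption):

-- default posture: replan with the real parameter values every time
SET plan_cache_mode = force_custom_plan;

-- later, when planning overhead has become the measured pain point,
-- test whether the one-size-fits-all plan actually does better
SET plan_cache_mode = force_generic_plan;

-- or pin it for the application's role rather than per session
ALTER ROLE app_user SET plan_cache_mode = force_generic_plan;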
The choice of 5 is basically a trade-off between two concerns: people who use prepared statements solely for their SQL injection protection benefits without actually reusing them, and statements that never reach the point of switching to a generic plan because the number of executions needed to be confident (a statistically significant sample) is too far out. How would you go about deciding what an appropriate value for this setting would be if you did have control of it?
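One way to see where a given workload actually lands on that trade-off (on PostgreSQL 14 or later, reusing the prepared statement q from the sketch above) is the per-statement plan counters:

SELECT name, generic_plans, custom_plans
FROM pg_prepared_statements
WHERE name = 'q';

-- a statement prepared once and executed once never gets past custom_plans = 1,
-- so the threshold of 5 is irrelevant to it; a genuinely hot statement racks up
-- generic_plans quickly once the switch happens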
If anything, removing the arbitrary number and basing the decision on, say, whether or not the incoming parameter falls within the MCV array would seem to be a way to expend a bit more effort on a cheap lookup that is likely to save the need for a custom plan. I would think some plans, say ones related to PK lookups, could simply be assumed to be better off with a generic plan involving an index scan (maybe based upon a table size check) derived from the initial custom plan.
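As a rough illustration of how cheap that check is from the outside (the planner would consult pg_statistic directly, but the same information is exposed through pg_stats), testing a hypothetical parameter value against the MCV list for t.status from the earlier sketch is a single small-array scan:

SELECT 'rare' = ANY (most_common_vals::text::text[]) AS param_is_mcv
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 't' AND attname = 'status';
-- NULL here means the column has no MCV list at all, which is itself a hint
-- that a generic plan is probably fine

For the PK-lookup case the check is even simpler: the generic plan is the same index scan regardless of which key value comes in, so there is nothing parameter-sensitive to protect against.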