To be honest, I am a bit surprised that we decided to enable this by default. It's not obvious to me that statistics should be regarded as part of the database in the same way that table definitions or table data are. That said, I'm not overwhelmingly opposed to that choice. However, even if it's the right choice in theory, we may need to rethink it if it turns out to be too slow or to use too much memory.
I'm strongly in favor of making this the default. It reduces the impact of a well-known customer footgun: heavy workloads hitting a freshly upgraded database before analyze/vacuumdb have had a chance to do their magic [1].
It seems to me that we're fretting over seconds when the feature is potentially saving the customer hours of reduced availability if not outright downtime.
[1] In that situation, the workload queries have no statistics, so they get terrible plans and nearly everything degrades to a sequential scan. Those sequential scans swamp the system, starving the ANALYZE commands of the I/O they need to collect the badly needed statistics. Even after the stats are in place, the system is still swamped with queries that were already in flight before the stats arrived. Even well-intentioned customers [2] can fall prey to this when their microservices detect that the database is online again and automatically resume work. (A rough sketch of the usual stopgap is at the end of this message.)
[2] This exact situation happened at a place where I was consulting: the microservices all resumed work automatically, despite assurances that they would not. That bad experience was my primary motivator for implementing this feature.
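For anyone who hasn't been through this: the stopgap in the scenario of [1] is usually something like "vacuumdb --all --analyze-in-stages", or a quick check followed by a manual ANALYZE of the hottest tables. A minimal sketch of that check (the table name below is only a placeholder):

    -- Empty or near-empty output here is the "no stats" state described in [1].
    SELECT schemaname, tablename, attname
      FROM pg_stats
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
     LIMIT 10;

    -- Stopgap until vacuumdb/autovacuum catches up: collect statistics for the
    -- hottest tables first so their queries stop degrading to sequential scans.
    -- "orders" is a placeholder table name.
    ANALYZE orders;

Carrying the statistics across the upgrade means this manual step is no longer on the critical path to restoring service.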