From that commit message:

> Historically, we've considered the state with relpages and reltuples
> both zero as indicating that we do not know the table's tuple density.
> This is problematic because it's impossible to distinguish "never yet
> vacuumed" from "vacuumed and seen to be empty". In particular, a user
> cannot use VACUUM or ANALYZE to override the planner's normal heuristic
> that an empty table should not be believed to be empty because it is
> probably about to get populated. That heuristic is a good safety
> measure, so I don't care to abandon it, but there should be a way to
> override it if the table is indeed intended to stay empty.
So that implicitly provides our reasoning for not analyzing up-front on table creation.
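To illustrate the heuristic the commit message describes, here is a minimal sketch against a recent PostgreSQL instance (the exact estimated row count depends on version, column widths, and block size, so treat it as an assumption rather than a fixed number):

    CREATE TABLE foo (id int);

    -- Freshly created, never vacuumed or analyzed: pg_class still shows
    -- the "unknown tuple density" state the commit message talks about.
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'foo';

    -- Because of that, the planner assumes the table is probably about to
    -- be populated and produces a sizeable row estimate instead of ~0.
    EXPLAIN SELECT * FROM foo;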
I haven't thought about this too deeply yet, but it seems plausible to me that the danger of overestimating the row count here (at minimum in queries like the ones I described, with lots of joins) is higher than the danger of underestimating, which is what we would do if we believed the table was empty. One critical question is how quickly we can assume the table would be auto-analyzed (i.e., how quickly the underestimate would be corrected).
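As a rough data point on that question (these are just the documented defaults, nothing specific to this thread): with stock autovacuum settings, the table should be picked up for auto-analyze shortly after enough rows have changed.

    -- Settings that govern when autoanalyze fires for a table:
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('autovacuum_naptime',
                   'autovacuum_analyze_threshold',
                   'autovacuum_analyze_scale_factor');

    -- With the usual defaults (1min / 50 / 0.1), a freshly populated small
    -- table gets analyzed within a minute or two once ~50+ rows have
    -- changed, so the underestimate should not persist for long unless the
    -- table stays tiny.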
I ran into this issue a few years ago. That application had roughly 40% of its tables with one or zero rows, 30% of usual size, and 30% that were sometimes really big. This can be relatively common in OLAP applications.
The estimates were terrible. I don't think there is a much better heuristic available. Maybe we could introduce a table option, like an expected size, to be used when real statistics are not available.
Something like:
CREATE TABLE foo(...) WITH (default_relpages = x)
It is not a perfect solution, but it would allow fixing this issue with a single command.
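For the other half of the problem, a table that is genuinely intended to stay empty, the commit quoted upthread already provides a one-command override on versions that include it; a minimal sketch (the table name is just an example):

    -- After an explicit VACUUM (or ANALYZE), an empty table is recorded as
    -- "vacuumed and seen to be empty", and the planner trusts the result
    -- instead of assuming the table is about to be populated.
    VACUUM foo;
    EXPLAIN SELECT * FROM foo;   -- now estimates roughly one row

A default_relpages-style option would cover the complementary case: tables that are expected to become big before their first VACUUM or ANALYZE.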