I've heard of use cases where dumping stats without data would help with
debugging production planner problems on a non-prod system.
So far, I'm seeing these use cases:
1. Binary upgrade. (schema: on, data: off, stats: on)
2. Dump to file/dir and restore elsewhere. (schema: on, data: on, stats: on)
3. Dump stats for one or more objects, either to directly apply those stats to a remote database, or to allow a developer to edit/experiment with those stats. (schema: off, data: off, stats: on)
4. Restore situations where stats are not wanted and/or not trusted. (whatever: on, stats: off)
Case #1 is handled via pg_upgrade and special-case flags in pg_dump.
Case #2 uses the default pg_dump options, so that's covered.
Case #3 would require a --statistics-only option, mutually exclusive with --data-only and --schema-only. Alternatively, I could reanimate the pg_export_statistics script, but we'd end up duplicating a lot of filtering options that pg_dump has already solved. Similarly, we may want server-side functions that generate the statements for us (a pg_get_*_stats function paired with each pg_set_*_stats).
Case #4 is handled via --no-statistics.
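To make the mapping concrete, here is a rough sketch of what invocations for cases #2 through #4 might look like. The database and object names are made up, and --statistics-only / --no-statistics are only the option names proposed here, not options pg_dump has today; case #1 goes through pg_upgrade, which drives pg_dump --binary-upgrade itself.

    # Case #2: default dump carries schema, data, and stats.
    pg_dump -Fd -j 4 -f /backups/proddb proddb

    # Case #3: stats only, e.g. to apply on another server or to hand-edit.
    #          (--statistics-only is the proposed option, not an existing one)
    pg_dump --statistics-only -t public.orders proddb > orders_stats.sql

    # Case #4: dump for a restore target where stats are unwanted or untrusted.
    #          (--no-statistics is the proposed option, not an existing one)
    pg_dump --no-statistics proddb > proddb_nostats.sql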
Attached is v19, which attempts to put table stats in SECTION_DATA and matview/index stats in SECTION_POST_DATA. It's still failing one TAP test (004_pg_dump_parallel: parallel restore as inserts). I'm still unclear why using SECTION_NONE is a bad idea, but I'm willing to go along with DATA/POST_DATA, assuming we can make it work.