Having looked through this thread and discussed a bit with Corey off-line, the approach that Tom laid out up-thread seems like it would make the most sense overall. That is: eliminate the JSON bits and the SPI, and instead export the stats data by running queries from the new version of pg_dump (or the new server, in the FDW case) against the old server, with the new version having the intelligence to transform the data into the format the current pg_dump/server expects. The import side would be function calls that map roughly onto the rows being updated: one call per relation to update its pg_class information, and then one call per attribute to update its pg_statistic information.
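To make that concrete, the calls pg_dump emits might look roughly like this (the function names and signatures here are only illustrative, not an actual proposal):

    -- hypothetical: one call per relation to set its pg_class counters
    SELECT pg_set_relation_stats('public.mytable'::regclass,
                                 relpages => 42,
                                 reltuples => 10000,
                                 relallvisible => 40);

    -- hypothetical: one call per attribute to set its pg_statistic row
    SELECT pg_set_attribute_stats('public.mytable'::regclass,
                                  attname => 'id',
                                  null_frac => 0.0,
                                  avg_width => 4,
                                  n_distinct => -1.0);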
Thanks for the excellent summary of our conversation, though I would add that we discussed a problem with per-attribute functions: each call would acquire locks on both the relation (so it doesn't go away) and pg_statistic, and that lock thrashing would add up. Whether that overhead is significant is up for discussion. If it is, it makes sense to package up all the attributes into one call, passing in an array of some new pg_statistic-esque special type... which is the very issue that sent me down the JSON path.
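For illustration, the packaged-up form could look something like this (again, the type and function names are purely hypothetical, just to show the shape):

    -- hypothetical composite type carrying the per-attribute statistics
    CREATE TYPE attr_stats AS (
        attname    name,
        null_frac  real,
        avg_width  integer,
        n_distinct real
    );

    -- hypothetical: a single call per relation, so the relation and
    -- pg_statistic locks are taken once rather than once per attribute
    SELECT pg_set_table_stats('public.mytable'::regclass,
                              ARRAY[ROW('id',   0.0,  4, -1.0)::attr_stats,
                                    ROW('name', 0.01, 12, 5000)::attr_stats]);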
I certainly see the flexibility in having per-attribute functions, but I am concerned about non-binary-upgrade situations where the attnums won't line up; and if we pass attributes by name instead, then the function has to dig around looking for the matching attnum on every call, which is overhead too. In the whole-table approach, we just iterate over the attributes that exist and find the matching parameter row.
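To spell out that lookup cost: a per-attribute call keyed on attribute name has to resolve the name to the attnum in the target cluster on every invocation, roughly the equivalent of this query (in practice it would presumably be a syscache lookup rather than literal SQL):

    -- resolve the attribute name to the attnum in the target cluster,
    -- since the source cluster's attnum can't be trusted here
    SELECT attnum
      FROM pg_attribute
     WHERE attrelid = 'public.mytable'::regclass
       AND attname = 'id'
       AND NOT attisdropped;

whereas the whole-table form only needs to walk the relation's attributes once and match each incoming element against them.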