That seems like a pretty poor solution. It will cause pg_stat_statements to fail altogether as soon as the stats file exceeds 1GB. (Admittedly, failing is better than crashing, but not by that much.) Worse, it causes that to happen on EVERY platform, not only on Windows, where the problem actually is.
I don't think it is a Windows-only problem; even on POSIX platforms it might not be safe to read() more than 2GB in a single call.
I think instead, we need to turn the subsequent one-off read() call into a loop that reads no more than INT_MAX bytes at a time. It'd be possible to restrict that to Windows, but there's probably no harm in doing it the same way everywhere. Something like the sketch below.
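Here's a minimal sketch of the kind of loop I have in mind; read_in_chunks and its arguments are illustrative names, not the actual patch:

    #include <errno.h>
    #include <limits.h>
    #include <unistd.h>

    /*
     * Read up to "nbytes" bytes from "fd" into "buf", issuing read()
     * calls of at most INT_MAX bytes each, so that no single request
     * exceeds what Windows (or a less forgiving POSIX implementation)
     * can handle.  Returns the number of bytes actually read, or -1
     * on error.
     */
    static ssize_t
    read_in_chunks(int fd, void *buf, size_t nbytes)
    {
        char       *p = (char *) buf;
        size_t      remaining = nbytes;

        while (remaining > 0)
        {
            size_t      request = (remaining > INT_MAX) ? INT_MAX : remaining;
            ssize_t     nread = read(fd, p, request);

            if (nread < 0)
            {
                if (errno == EINTR)
                    continue;   /* interrupted, just retry */
                return -1;      /* hard error */
            }
            if (nread == 0)
                break;          /* premature EOF; caller checks the total */
            p += nread;
            remaining -= (size_t) nread;
        }
        return (ssize_t) (nbytes - remaining);
    }

The caller can then compare the return value against the expected file size and treat a short read as corruption, exactly as the existing single read() path does.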
Seems reasonable to me. Can such a change be back-patched?
A different line of thought is that maybe we shouldn't be letting the file get so big in the first place. Letting every backend have its own copy of a multi-gigabyte stats file is going to be problematic, and not only on Windows. It looks like the existing logic just considers the number of hash table entries, not their size ... should we rearrange things to keep a running count of the space used?
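To make that concrete, here is one way the bookkeeping could look. The struct and function names are made up for illustration; the real shared state in pg_stat_statements is organized differently:

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Illustrative only: track the cumulative size of stored query texts
     * alongside the entry count, so that entries can be refused or evicted
     * once a byte budget is exceeded, rather than limiting only on the
     * number of hash table entries.
     */
    typedef struct StatsSpaceAccounting
    {
        size_t      total_bytes;    /* running total of query-text bytes */
        size_t      max_bytes;      /* configured ceiling on stats size */
    } StatsSpaceAccounting;

    /* Would the stats data stay under budget after adding this entry? */
    static bool
    space_available(const StatsSpaceAccounting *acct, size_t entry_len)
    {
        return acct->total_bytes + entry_len <= acct->max_bytes;
    }

    /* Update the running count when an entry is stored. */
    static void
    space_add(StatsSpaceAccounting *acct, size_t entry_len)
    {
        acct->total_bytes += entry_len;
    }

    /* Update the running count when an entry is evicted. */
    static void
    space_remove(StatsSpaceAccounting *acct, size_t entry_len)
    {
        acct->total_bytes -= entry_len;
    }

The eviction path could then keep discarding the least valuable entries until space_available() succeeds, which would bound the size of any backend's private copy of the file.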
+1. There should be a mechanism to limit the total memory the stats can consume, not just the number of entries.