If your entire database can comfortably fit in RAM, I would make shared_buffers large enough to hold the entire database. If not, I would set the value small (say, 8GB) and let the OS do the heavy lifting of deciding what to keep in cache. If you go with the first option, you probably want to use pg_prewarm after each restart to get the data into cache as fast as you can, rather than letting it load in naturally as you run queries. You would also probably want to set random_page_cost and seq_page_cost quite low, like maybe 0.1 and 0.05.
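To make that concrete, here is a minimal sketch of the first option (the 64GB figure and the table name are placeholders; size to your own hardware):

    -- Illustrative settings for a database that fits entirely in RAM.
    ALTER SYSTEM SET shared_buffers = '64GB';   -- takes effect after a restart
    ALTER SYSTEM SET random_page_cost = 0.1;    -- planner costs for cached reads
    ALTER SYSTEM SET seq_page_cost = 0.05;

    -- After each restart, warm the cache explicitly (pg_prewarm ships in contrib).
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('my_big_table');          -- repeat per table/index as needed

If you instead load pg_prewarm via shared_preload_libraries, its autoprewarm background worker can record what was in shared_buffers and reload it automatically after a restart, which saves scripting the warm-up by hand.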
In all deference to your status as a contributor, what are these recommendations based on? Would you share the rationale? I'd just like to better understand. I have never heard a recommendation to set random and seq page cost below 1 before, for instance.
If the entire database were, say, 1 to 1.5 TB and RAM were on the order of 96 to 128 GB, but some of the data is (almost) never accessed, would the recommendation still be to rely more on the OS caching? Do you target a particular cache hit rate as reported by the Postgres stats?
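(For reference, the hit rate I have in mind is the usual ratio from pg_stat_database; roughly this query:)

    -- Shared-buffers hit ratio per database, from the cumulative stats.
    SELECT datname,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
      FROM pg_stat_database;

One caveat I'm aware of: blks_read counts blocks read from outside shared_buffers, including blocks served from the OS page cache, so this ratio only measures shared_buffers hits, not total caching effectiveness.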