SELECT count(*) AS pages_read
FROM (SELECT c.oid::regclass::text AS rel,
             f.fork,
             ser.i AS blocknr,
             page_header(get_raw_page(c.oid::regclass::text,
                                      f.fork,
                                      ser.i))
      FROM pg_class c
           CROSS JOIN (VALUES ('main'::text),
                              ('fsm'::text),
                              ('vm'::text)) f(fork)
           CROSS JOIN pg_relation_size(c.oid::regclass, f.fork) sz(sz)
           CROSS JOIN generate_series(0, sz.sz / 8192 - 1) ser(i)
     ) t1;
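For the query to work, the pageinspect extension must be installed, and the exercise is only meaningful if data checksums were enabled when the cluster was initialized (or added later with pg_checksums). You can verify both up front:

-- pageinspect ships with the standard contrib modules and
-- provides get_raw_page() and page_header(); needs superuser
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- must show "on", otherwise there are no checksums to verify
SHOW data_checksums;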
The idea is simply to read everything. Since a SELECT can only see the current database, the query covers only that database; if your cluster contains multiple databases, you have to run it in each of them.
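A wrapper script can get the list of databases to iterate over from the catalog; for example (datallowconn filters out databases such as template0 that refuse connections):

SELECT datname
FROM pg_database
WHERE datallowconn;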
Note that this only works if your page size is the usual 8 kB. If you compiled PostgreSQL with a different block size, change 8192 accordingly.
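You can check the block size of a running server without digging up the build options:

-- read-only setting fixed at compile time; 8192 unless the
-- server was built with a non-default block size
SHOW block_size;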
Also, PostgreSQL verifies the checksum only when it reads a page from storage, so the query will not re-verify pages that are already in shared_buffers. But since those pages came from storage (and were verified then) in the first place, that should be good enough.
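On PostgreSQL 12 and later, checksum failures are also counted per database, so after the run you can cross-check whether anything tripped:

-- cumulative counters since the last stats reset
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE datname IS NOT NULL;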
An alternative is something like pg_dumpall > /dev/null. That reads all table data, but it will probably not detect corruption in indexes. Still, it is a good idea to run it once in a while, for instance to check TOASTed data that is rarely accessed otherwise.
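For indexes specifically, the amcheck extension (or the pg_amcheck command-line wrapper on PostgreSQL 14 and later) is a better fit than either approach. A sketch that checks every B-tree index in the current database; note it may error out on invalid or other sessions' temporary indexes, so treat it as a starting point:

CREATE EXTENSION IF NOT EXISTS amcheck;

-- bt_index_check() verifies structural invariants of a B-tree
SELECT c.oid::regclass AS index, bt_index_check(c.oid)
FROM pg_class c
     JOIN pg_am am ON am.oid = c.relam
WHERE am.amname = 'btree'
  AND c.relkind = 'i';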