A thought: commit 8fcb32db prevented us from logging messages that are
too big to be decoded, but it wasn't back-patched. I think that means
that in older branches, there is a behaviour change unrelated to the
"garbage bytes" problem discussed in this thread, and separate also
from the out-of-memory problem. If someone generates a record too big
to decode, say with pg_logical_emit_message(), we will fail
differently. Before this patch set, we'd bogusly detect end-of-WAL;
after it, the palloc would fail and recovery would bogusly fail.
Which outcome is more bogus is hard to answer, and clearly we should
prevent this upstream, but we didn't, for technical reasons. Do you
agree that that is a separate topic that doesn't prevent us from
committing this fix?
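
(For anyone who wants to see it concretely: on an unpatched older
branch, something along these lines ought to do it. Untested sketch;
the exact payload length needed to push the record's total size past
the 1GB allocation limit is a guess that depends on the header and
prefix overhead:

    -- payload just under the 1GB datum limit, so the resulting WAL
    -- record's total length should land just over the palloc limit
    SELECT pg_logical_emit_message(false, 'big',
                                   repeat('x', 1073741800));

then crash the server and see what replay does with that record.)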