On Mon, Oct 20, 2025 at 09:12:52AM +0200, Filip Janus wrote:
> The problem is caused by a difference between the currently used algorithms
> and post-quantum ones. For example, commonly used algorithms like RSA have
> a defined digest algorithm, but ML-DSA does not.
Looking more carefully, ML-DSA uses two hash functions internally,
SHAKE128 and SHAKE256, though not to digest the to-be-signed data, so
this falls into the sub-case of the certificate's signatureAlgorithm
using multiple hash functions in RFC 5929, section 4.1, third item, and
so indeed we can't define tls-server-end-point.
Perhaps the fix for this is for signing algorithms to specify what hash
or "hash" function to use for tls-server-end-point channel bindings
(possibly the identity function).
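To make the gap concrete, here is a minimal Python sketch of how RFC
5929 derives the tls-server-end-point data: hash the server
certificate's DER encoding with the hash function implied by the
certificate's signatureAlgorithm. The algorithm names in the table are
illustrative only, and the ML-DSA entry just demonstrates the
undefined case described above, not any standardized behavior:

```python
import hashlib

# Illustrative mapping from signatureAlgorithm to the hash used for
# tls-server-end-point (RFC 5929, section 4.1).  Names are examples,
# not an exhaustive or normative list.
SIG_ALG_HASH = {
    "sha256WithRSAEncryption": hashlib.sha256,
    "ecdsa-with-SHA384": hashlib.sha384,
    # ML-DSA uses SHAKE128 *and* SHAKE256 internally, so RFC 5929's
    # third sub-case applies and the binding is undefined:
    "id-ml-dsa-65": None,
}

def tls_server_end_point(cert_der: bytes, sig_alg: str) -> bytes:
    """Return the channel-binding data for a certificate, or raise if
    the signatureAlgorithm does not map to a single hash function."""
    h = SIG_ALG_HASH.get(sig_alg)
    if h is None:
        raise ValueError(f"tls-server-end-point undefined for {sig_alg}")
    return h(cert_der).digest()
```

Under the fix floated above, the ML-DSA entry would simply become the
identity function (or some designated hash), resolving the ambiguity.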
I will post as much to the TLS mailing list, but since ML-DSA is
specified by NIST, any change to ML-DSA to say this would have to go
through them, and likewise for other algorithms, so we might be best
off instead altering RFC 5929 and maybe setting up an IANA registry.
Fun stuff.
Nico
--