On Tue, Jul 22, 2025 at 4:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> The trend of the results was similar:
>
> getrandom: 497.061 ms
> RAND_bytes: 1152.260 ms
> /dev/urandom: 1696.065 ms
>
> Please let me know if I'm missing configurations or settings to
> measure this workload properly.
I don't think you're missing anything, or else I'm missing something
too. If I modify pg_strong_random() to call getentropy() in addition
to the existing RAND_bytes() code, `perf` shows RAND_bytes() taking up
2.4x the samples that getentropy() does. That's very similar to your
results.
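For anyone who wants a quick sanity check without patching pg_strong_random(), here's a rough standalone sketch of the same three-way comparison. It's in Python purely for convenience (the stdlib exposes all three sources); the iteration count and 16-byte request size are arbitrary choices of mine, not the parameters from the benchmark above, and os.getrandom() is Linux-only, so it falls back to os.urandom() elsewhere:

```python
import os
import ssl
import time

# os.getrandom() wraps the getrandom(2) syscall but only exists on
# Linux; fall back to os.urandom() on other platforms.
getrandom = getattr(os, "getrandom", os.urandom)

def bench(label, fn, nbytes=16, iters=100_000):
    """Time `iters` calls of fn(nbytes) and print the total in ms."""
    start = time.perf_counter()
    for _ in range(iters):
        fn(nbytes)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.3f} ms")
    return elapsed_ms

# /dev/urandom read via an unbuffered file object, so each call is a
# real read(2) rather than a hit on Python's buffering layer.
urandom = open("/dev/urandom", "rb", buffering=0)

bench("getrandom", getrandom)
bench("RAND_bytes", ssl.RAND_bytes)   # OpenSSL's userspace DRBG
bench("/dev/urandom", urandom.read)

urandom.close()
```

The relative ordering it shows will of course depend on the OpenSSL version and kernel in play, which is rather the point of the thread.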
> On Tue, Jul 22, 2025 at 11:46 AM Jacob Champion
> <jacob.champion@enterprisedb.com> wrote:
> > That is _really_ surprising to me at first glance. I thought
> > RAND_bytes() was supposed to be a userspace PRNG, which I would
> > naively expect to take much less time than pulling data from Linux.
So my expectation was naive for sure. This has sent me down a bit of a
rabbit hole, starting with Adam Langley's BoringSSL post [1] which led
to a post/rant on urandom [2]. I don't think an API that advertises
"strong randomness" should ever prioritize performance over strength.
But maybe the pendulum has swung far enough that we can expect any
kernel supporting getentropy() to be able to do the job just as well
as OpenSSL does in userspace, except also faster? I think it might be
worth a discussion.
Thanks,
--Jacob
[1] https://www.imperialviolet.org/2015/10/17/boringssl.html
[2] https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/