Re: Random not so random - Mailing list pgsql-general

From Bruno Wolff III
Subject Re: Random not so random
Msg-id 20041006171031.GC17331@wolff.to
In response to Re: Random not so random  ("Arnau Rebassa" <arebassa@hotmail.com>)
Responses Re: Random not so random
List pgsql-general
I am going to keep this on general for now, since it seems like other people
might be interested even though it is straying somewhat off topic.

On Wed, Oct 06, 2004 at 18:02:39 +0200,
  Marco Colombo <marco@esi.it> wrote:
>
> It depends. What's wrong with a SQL function taking long to
> complete? It could be a long computation, maybe days long. As far as

Days-long SQL queries are hardly normal. Common things you might want
to generate secure random numbers for aren't going to be queries you
want to run a long time. For example you might want to generate
session ids to store in a cookie handed out for web requests.
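As a concrete sketch of that use case (my own illustration, not something from the thread): a web application can draw its session ids from the kernel's non-blocking pool, which is exactly the situation where waiting on /dev/random would be unacceptable.

```python
import binascii
import os

def make_session_id(nbytes: int = 16) -> str:
    """Generate a session id from the kernel CSPRNG.

    os.urandom() reads from the non-blocking pool (/dev/urandom on
    Linux), so a web request never stalls waiting for entropy.
    """
    return binascii.hexlify(os.urandom(nbytes)).decode("ascii")

sid = make_session_id()
print(sid)  # 32 hex characters, different on every call
```

The point is only the source of the bytes; the hex encoding and the 128-bit size are arbitrary choices for the example.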

> the client is concerned, there's no difference. Of course you'll have
> to use it with care, just as you would do with any potentially long
> function (that is, don't lock everything and then sit down waiting).

This might be reasonable if there were a significant benefit to doing so.
My argument is that /dev/random does not produce significantly more
secure random numbers under normal circumstances.

> >SHA1 is pretty safe for this purpose. The recent weaknesses of related
> >hashes isn't going to be of much help in predicting the output of
> >/dev/urandom. If SHA1 were to be broken that badly /dev/random would
> >also be broken.
>
> No, because /dev/random returns only after the internal state has been
> perturbed "enough". An observer may (in theory) get to know all
> bits of the internal pool right _after_ a read() from /dev/random,
> but still isn't able to guess the value of all of them right before the
> _next_ read() returns, exactly because the kernel waits until the return
> is safe in this regard.

OK, I'll buy that for cases where the estimates of entropy acquired are
reasonable. This may still be a problem for some uses though. It may allow
an attacker to figure out other data returned by /dev/random that they
weren't able to observe.

> Now, if the attacker gets the output, and breaks SHA1, he'll know the
> internal state again. But if the output goes elsewhere, he won't be
> able to guess it. That's why /dev/random is 'safe'. It won't return
> until the output is random enough.

You don't necessarily need to break SHA1 to be able to track the internal
state.

> Now, my esteem is 0 entropy bits for my pool, since you drained them all.
>
> "Secure Application X" needs to generate a session key (128bits),
> so reads them from /dev/urandom. Let's assume the entropy count is still 0.
> I (the kernel) provide it with 128bits that come from my PRNG + SHA1 engine.
> Now, you can predict those 128 bits easily, and thus you know the session
> key. The attack succeeds.
>
> "Really Secure Application Y" needs 128 bits, too. But this time it
> reads from /dev/random (assume entropy count is still 0).
> Now, I can't fulfill the request. So have Y wait on read().
>
> As time passes, I start collecting entropy bits, from IRQ timings,
> or a hardware RNG. These bits change the internal pool in a way you
> don't know. Eventually I get the count up to 128 bits.
> Now, I run my PRNG (from the now-changed state) + SHA1 engine, and return
> 128 bits to Y. Can you guess the session key? Of course. You know
> how internal state was before, you can "go through all possible entropy
> values". How many of them? 2^128. That's, no wonder, the same of trying
> to guess the key directly. Now you're telling me if you had the key,
> you could know the internal state again? Man, if you had the key, you
> already broke application Y, and the attack already succeeded by other
> means!

Assuming that all 128 bits were grabbed in one call to /dev/random, you
should be safe. However, if the bits were returned a byte at a time,
with intervening bytes returned via /dev/urandom to the attacker, the
key could be vulnerable.
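To make the two access patterns concrete (a sketch of the read pattern only, not attack code; /dev/urandom stands in for both devices here so the example never blocks):

```python
import os

fd = os.open("/dev/urandom", os.O_RDONLY)
try:
    # Safe pattern: the whole 128-bit key comes from one read, so no
    # attacker-visible pool output is emitted between the key's bytes.
    key_atomic = os.read(fd, 16)

    # Risky pattern (per the discussion above): the key is assembled a
    # byte at a time, with intervening pool output going to an observer.
    key_parts = []
    for _ in range(16):
        key_parts.append(os.read(fd, 1))
        _observable = os.read(fd, 1)  # output an attacker could see
    key_interleaved = b"".join(key_parts)
finally:
    os.close(fd)

assert len(key_atomic) == 16
assert len(key_interleaved) == 16
```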

> >My memory of looking at the /dev/[u]random code is that
> >there is just one entropy pool and entropy is added to it as it is
> >obtained. So that if values are obtained from /dev/[u]random at a high
> >enough rate the above attack is practical.
> >
> >So the only case where you might want to use /dev/random over /dev/urandom
> >is where the internal state is vunerable, the attacker has access to a
> >large
> >fraction of the output values and where there are at least some gaps
> >between
> >samples where large amounts of entropy are collected.
>
> No. SHA1 protects the internal pool from an attacker who knows all the
> output. That's easy, just read from /dev/random until it blocks. If you're
> fast enough, you can assume no one else read from /dev/random. Now,
> _if you can break SHA1_, you have enough cleartext output of the internal
> PRNG to guess it's state. You may have used /dev/urandom as well, there's
> no difference.

I think you misunderstood the context. The attacker is assumed to have
initially gotten the state through some means. By watching the output
from /dev/urandom it is possible to figure out what the current state is
if not too much entropy has been added since the attacker knew the state.
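A toy model of that argument (my own illustration, deliberately much simpler than the real kernel pool): if the output is derived deterministically from the state, an attacker who once learned the state can replay every update, and only entropy mixed in out of the attacker's view breaks the tracking.

```python
import hashlib

def next_output(state: bytes) -> tuple:
    """Toy pool step: output = SHA1(state), new state = SHA1(output + state).

    This is NOT the actual /dev/urandom algorithm, just a stand-in to
    show how a known state lets an observer track all later output.
    """
    out = hashlib.sha1(state).digest()
    return out, hashlib.sha1(out + state).digest()

# Suppose the attacker somehow learned the pool state at some point.
kernel_state = b"initial pool contents"
attacker_state = kernel_state

# With no new entropy mixed in, every output is predictable.
for _ in range(3):
    out, kernel_state = next_output(kernel_state)
    guess, attacker_state = next_output(attacker_state)
    assert guess == out  # attacker tracks the state perfectly

# Entropy the attacker cannot observe desynchronizes the prediction.
kernel_state = hashlib.sha1(kernel_state + b"fresh interrupt timings").digest()
out, kernel_state = next_output(kernel_state)
guess, attacker_state = next_output(attacker_state)
assert guess != out
```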

> The purpose of /dev/random blocking is not protecting the internal pool
> state, that's SHA1 job. /dev/random blocks in order to protect other
> users (application Y in my example) from you.

SHA1 only protects you from getting the state solely by viewing the output.
It doesn't protect against the state being obtained through other means.

I think /dev/random output would be better protected if it didn't share
a pool with /dev/urandom. Doing that is what makes tracking the internal
state possible.

Another issue with using /dev/random instead of /dev/urandom is that it
adds another way to do denial of service attacks. That may or may not
matter for a particular use, since there may be more effective ways of
mounting a denial of service attack.
