prng change feedback

ringsignature at riseup.net
Thu Oct 5 12:40:11 UTC 2017


On 2017-10-05 11:37, Max wrote:
> I feel an urge to re-iterate few points - see below.
> N. B: I'm talking about concrete code available in gerrit.
> 
> On 05.10.2017 13:09, Neels Hofmeyr wrote:
>> On Thu, Oct 05, 2017 at 09:56:51AM +0000, ringsignature at riseup.net wrote:
>>> system RNG is properly seeded at boot, getrandom() should not block at
>>> any later point in the system's lifetime.
>> What about an attacker firing endless events that cause us to generate new
>> TMSIs and ends up exhausting the entropy pool?
> 
> It still won't block. See https://gerrit.osmocom.org/#/c/1526/
> 

As I understand the getrandom() interface on modern Linux systems, it
is documented to block until the kernel's entropy pool is initialized
and then never block again - is that an incorrect understanding of the
interface?

>> AFAIK that's a real possibility?
> 
> Not sure either way: the attacker should deplete entropy faster than
> kernel gathers it.

Gathering entropy is orthogonal and also important. The core issue as I
understand it is *expanding* available entropy with a PRNG construction
from an original seed at boot time. Is an attacker really able to
exhaust the PRNG's theoretical output limit? Even if we are extremely
conservative and say the limit is 2^64 outputs, would the system really
not receive another 128 bits of entropy in some manner before that
limit is reached? From the network card interrupts and timers involved
in fetching the TMSI alone, I would expect the /dev/random interface to
be reseeding the internal PRNG pool. An attacker could easily gather
128 bits of TMSI data, but that expenditure does not directly
correspond to the number of input bits at seed time. That's one nice
property of a PRNG over a direct RNG interface.
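
To illustrate that expansion property with a toy sketch (counter-mode
over SHA-256; the kernel's real construction is different - this only
shows that handing out outputs does not consume seed bits):

```python
import hashlib

def prg_block(seed: bytes, counter: int) -> bytes:
    """Toy counter-mode PRG: output block i is SHA-256(seed || i).

    A 16-byte (128-bit) seed expands into up to 2**64 distinct 32-byte
    blocks; an attacker who collects blocks learns outputs, not the
    seed, so fetching many TMSIs does not "spend" the input entropy."""
    return hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

# 128 bits of observed output cost the attacker nothing against the seed:
seed = bytes(16)  # illustrative all-zero seed
stream = prg_block(seed, 0) + prg_block(seed, 1)
```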

> Also, it's irrelevant for RAND_bytes() vs getrandom() discussion: both
> use the same entropy pool, both will generate "not good enough" random
> bytes if out of entropy.

Agreed. The main reason to use getrandom() is that it is simpler and
ultimately what most projects need to use unless they directly read from
a device such as /dev/random, /dev/urandom, or another hardware device
of some kind.
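
For comparison, here is the device-file route that getrandom() lets us
drop (a hypothetical read_urandom() helper; note the failure modes -
missing /dev in a chroot, fd exhaustion, short reads - that the fd-less
syscall avoids entirely):

```python
def read_urandom(n: int) -> bytes:
    """Pre-getrandom() style: read n bytes from /dev/urandom, looping
    on short reads.  Can fail with ENOENT in a chroot without /dev, or
    EMFILE when the process is out of file descriptors."""
    buf = b""
    with open("/dev/urandom", "rb", buffering=0) as f:
        while len(buf) < n:
            chunk = f.read(n - len(buf))
            if not chunk:
                raise OSError("unexpected EOF on /dev/urandom")
            buf += chunk
    return buf
```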

> 
> So the context for this part of the discussion is in:
> https://gerrit.osmocom.org/#/c/3819/
> https://gerrit.osmocom.org/#/c/3820/
> https://gerrit.osmocom.org/#/c/3821/
> Meaning: what do we do if we don't get "random enough" data?
> 
> So far we've just logged it and carried on with insecure rand().
> Could/should we do better? Should we fail? Should we let user decide
> via config option?

Yes, I think getrandom() is a better default and in fact, the only safe
interface. I suggest failing the build absent a getrandom() system
call/glibc interface. Additionally, it would be good to ensure that any
system running OpenBSC has some source of entropy beyond interrupts and
timing - is that already the case?
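
As a sketch of the fail-hard policy I mean (in the C code this would be
a configure-time check; the runtime check below is just the analogous
idea, with a hypothetical helper name):

```python
import os
import sys

def require_getrandom():
    """Refuse to run at all if getrandom() is unavailable, instead of
    silently degrading to an insecure rand()-style fallback."""
    if not hasattr(os, "getrandom"):
        sys.exit("getrandom() not available; refusing insecure fallback")

require_getrandom()  # e.g. at startup, before any TMSI is issued
```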

> 
>> Would be interesting to run a few tests on how quickly we can end up in entropy
>> exhaustion.
> 
> I think it's more of an academic exercise at this point: even if our
> tests would show that we can't deplete it, it doesn't mean that
> attacker couldn't come up with better ways. So we should decide what
> should be done when this happens. So far the decision was to "log and
> forget". Should we change it?

It would be good to know the theoretical limits of /dev/urandom from a
given random seed absent any other influences. I was not able to find a
clear explanation and used a simple rule of thumb - 2^(n/2) outputs for
an n-bit seed - to come up with 2^64. It seems reasonable to log a low
entropy situation - but what exactly are the conditions for that
situation?
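
Even taking the conservative 2^64 figure at face value, back-of-envelope
arithmetic suggests depletion is out of reach (the 10^6 requests per
second attacker rate is my assumption, and a generous one):

```python
# Assumed conservative output limit of the PRNG from one 128-bit seed:
outputs = 2 ** 64
rate = 10 ** 6  # assumed attacker TMSI requests per second
years = outputs / rate / (365 * 24 * 3600)
# roughly 5.8 * 10**5 years before the 2**64 output limit is reached
```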

>>  But if we practically block because of too little entropy
> 
> No, we don't. See https://gerrit.osmocom.org/#/c/1526/
> 
> The patch was initiated by the need to fix licensing issue, so it
> preserves all the properties of original code:
> - it does not block
> - it uses insecure random
> 

Understood.

> Since we're touching this part of the code anyway, we might also change
> the way we treat entropy depletion.

Is there a system wide entropy depletion monitor in place?
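
If not, the kernel at least exposes its running estimate via procfs,
which such a monitor could poll (a sketch; the 100-bit alert threshold
is arbitrary):

```python
def entropy_estimate() -> int:
    """Read the kernel's current entropy estimate in bits - the same
    counter that throttles the legacy /dev/random device."""
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

if entropy_estimate() < 100:  # arbitrary alert threshold
    print("warning: low kernel entropy estimate")
```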

> But, it's somewhat orthogonal to the use of getrandom(): we can change
> the way we deal with RAND_bytes() failure in another unrelated patch
> series.
> 
> Using getrandom() is not introducing any problems with the random data
> which are not already there. It also does not fix any of them. It's
> good that discussion around it attracted our attention to those
> problems but I think we should keep in mind that those are related but
> different issues.

Using getrandom() is strictly better based on my understanding of the
interface.

Happy Hacking,
RS


