prng change feedback

This is merely a historical archive of years 2008-2021, before the migration to mailman3.

A maintained and still updated list archive can be found at https://lists.osmocom.org/hyperkitty/list/OpenBSC@lists.osmocom.org/.

ringsignature at riseup.net ringsignature at riseup.net
Thu Oct 5 12:20:21 UTC 2017


On 2017-10-05 11:09, Neels Hofmeyr wrote:
> Hey RS,
> 
> thanks for your excellent input on P/RNG!
> 

I'm glad it may be useful.

> There's a lot in it, let me first off sprinkle a few remarks...
> 
> On Thu, Oct 05, 2017 at 09:56:51AM +0000, ringsignature at riseup.net wrote:
>> system RNG is properly seeded at boot, getrandom() should not block at
>> any later point in the system's lifetime.
> 
> What about an attacker firing endless events that cause us to generate new
> TMSIs and ends up exhausting the entropy pool? AFAIK that's a real possibility?
> 

Could you quantify that? What is the process by which an attacker would
be able to cause new TMSI generation? Does it require interactivity from
them, or can they simply flood OpenBSC with a packet that triggers the
creation of a TMSI? If such a flood is possible, that suggests at the
least a security problem, and possibly an entropy problem.

On the topic of exhausting the entropy pool: as long as a modern
GNU/Linux system is properly seeded at boot, /dev/urandom is guaranteed
not to block and will continue to emit cryptographically secure random
numbers. The entropy post by djb discusses the trade-offs of seed-once
versus seed-many-times.

I think it is reasonable to seed once at system boot and to simply
always use getrandom(), on the assumption that you cannot exhaust the
entropy pool. The getrandom() system call was introduced in Linux 3.17,
which is when the kernel's random interface was greatly improved. On
many modern GNU/Linux systems (the 2017-03-13 RANDOM(4) man page says
so), getrandom() will block until the pool is initialized:

       The /dev/random interface is considered a legacy interface, and
       /dev/urandom is preferred and sufficient in all use cases, with
       the exception of applications which require randomness during
       early boot time; for these applications, getrandom(2) must be
       used instead, because it will block until the entropy pool is
       initialized.
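A minimal sketch of that behavior from userspace (Python's
os.getrandom() wraps the Linux syscall; the fallback and the buffer
size are assumptions for illustration, not anything OpenBSC does):

```python
import os

def secure_bytes(n: int) -> bytes:
    """Read n bytes from the kernel CSPRNG.

    os.getrandom() with the default flags reads the urandom source and
    blocks only until the entropy pool has been initialized once at
    boot - matching the RANDOM(4) text quoted above. On platforms
    without getrandom(), fall back to os.urandom(), i.e. /dev/urandom.
    """
    if hasattr(os, "getrandom"):
        return os.getrandom(n)
    return os.urandom(n)

print(len(secure_bytes(16)))
```

After the one-time initialization, repeated calls never block, which
is why "always use getrandom()" is a safe default.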

In fact, internally in the kernel, it was once and should still be the
case that the entropy pool is constantly updated with timer and
interrupt information, in addition to other sources. In a sense, there
is no seed-once option - only seed once with a CSPRNG source, then seed
many times with other sources, including possible CSPRNG sources. Some
of those inputs may be controlled or influenced by an attacker, but such
an attack would be against the entire /dev/random interface. That is
definitely a good research project, and I'm sure Professor Heninger is
already working on it. With regard to the entire PRNG being broken: this
is also why I suggested hashing or otherwise transforming the output
bytes as a defense-in-depth strategy, in case something is broken. If an
attacker can break the PRNG and also predict the preimage of
H(RandomBytes), then they probably don't need the RandomBytes in the
first place, and the RNG is almost certainly beyond salvation. This
would have stopped the *practical* exploitation of DualEC in TLS, but it
almost certainly wouldn't have stopped another Debian OpenSSL fiasco.
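The hash-the-output suggestion can be sketched as below; the
"tmsi-rand" domain-separation label and the sizes are hypothetical
choices for illustration:

```python
import hashlib
import os

def hardened_rand(n: int = 16) -> bytes:
    """Defense-in-depth random generation, per the suggestion above.

    Instead of handing kernel CSPRNG output straight to the caller,
    pass it through a hash. If the PRNG output were ever predictable,
    an attacker would additionally have to work out the preimage
    relationship H(label || raw) to exploit it.
    """
    raw = os.urandom(32)  # 256 bits of kernel CSPRNG output
    digest = hashlib.sha256(b"tmsi-rand" + raw).digest()
    return digest[:n]     # truncate to the n bytes the caller needs

print(len(hardened_rand()))
```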

If one is not convinced by the notion that one 128-bit or 256-bit seed
at boot is good for, say, 2^64 outputs even with these internal
reseeding operations, and you expect to use that many outputs, reseeding
could be done with a userspace RNG-daemon-to-hardware bridge like
rng-tools.

There are a few free and open hardware designs, such as the Gnuk (in the
NeuG configuration [0]), that can be re-purposed as a hardware RNG. I
recall that the device specifically uses a trick with an ADC that can
fail very badly. It seems to me that a properly designed RNG that is
cheap to build and easy to buy would also be very nice as a project.

Now down to the brass tacks:

How much random data does OpenBSC currently use in a given period of
time, say one day?
Is it really larger than the theoretical limits of the outputs for
/dev/urandom?
What are the system requirements for OpenBSC with regard to a hardware
rng or manual seeding?
Does OpenBSC only run on Linux or BSD systems with the getrandom()
interface?
Does OpenBSC store random seed data between boots?
Could someone share the value of /proc/sys/kernel/random/entropy_avail
from a typical OpenBSC machine?
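For anyone answering that last question, reading the kernel's entropy
estimate is a one-liner; this small helper (names are mine, not an
OpenBSC API) just makes the non-Linux case explicit:

```python
def read_entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate in bits, or None
    if the proc file does not exist (e.g. on non-Linux systems)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None

print(read_entropy_avail())
```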

>> As an aside: If for some reason there is no cryptographically secure
>> hardware RNG on the OpenBSC system, one wonders if it might be of
>> interest to use the available RF interfaces as part of a design for such
>> an RNG. There would be concerns about adversarial control of inputs, of
>> course.
> 
> It might well be possible to use RF noise as entropy, but the HLR that is
> generating the RAND bytes is layers away from the BTS and its antenna. If all
> run on the same small integrated box, somehow getting RF noise from the DSP and
> feeding entropy to the system which in turn gets used on the other end by the
> HLR is a possibility, but not really when the HLR is, as by the usual GSM
> design, in a central location serving numerous cells.

Understood.

As a slight digression: there was once an Ubuntu cloud project [1]
called Pollinate [2], intended to solve this for virtual machines and
other systems with no direct hardware entropy source. While that seems
excessive for this situation, it's a reasonably good overview of how
another project shares entropy over the network. Over HTTP, raw bytes
travel on the network; over HTTPS, there is the bootstrapping problem of
needing entropy to start a secure TLS session. One option might be to
ensure that the devices are seeded at install time; another might be to
extend the already existing protocols to share entropy data from a
system which has some; and another might simply be to raise the default
system requirements to include one or two RNG devices. There are many
strategies that seem promising, and all of them require around 128 or
256 bits of entropy at boot time, every boot.
 
> Would be interesting to run a few tests on how quickly we can end up in entropy
> exhaustion. Using getrandom() always would be the easiest and safest. Saying
> that we prefer to be quick rather than secure sounds wrong indeed. But if we
> practically block because of too little entropy, then we annoy lab / testing
> setups that don't care about security, and we provide a DoS attack vector.

It would be surprising if one were able to exhaust it. I would expect an
RF denial of service against local clients before I would expect the
Linux /dev/urandom PRNG to output predictable sequences because it was
asked for random TMSI data.

Happy Hacking,
RS

[0] https://www.gniibe.org/memo/development/gnuk/rng/neug.html
[1]
http://people.canonical.com/~kirkland/Random%20Seeds%20in%20Ubuntu%2014.04%20LTS%20Cloud%20Instances.pdf
[2] https://launchpad.net/pollen
