Hi.
Microbenchmarking is nice, but I'd like to see the bigger picture: why do we need to
optimize the current implementation further? Is it just for the sake of aesthetics and
the fun of it? Do we have some profiling result showing that in scenario A the Viterbi
decoder accounts for XX% of the time?
Don't get me wrong - I'm not against optimizations; I'd just like to know more
about the general context.
On 15.06.2017 23:43, Vadim Yanitskiy wrote:
So, I am open for your ideas, opinions and remarks.
--
Max Suraev <msuraev(a)sysmocom.de>
http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Alt-Moabit 93
* 10559 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschaeftsfuehrer / Managing Director: Harald Welte