Hi.
Microbenchmarking is nice, but I'd like to see the bigger picture: why do we need to optimize the current implementation further? Is it just for aesthetics and the fun of it? Do we have any profiling results showing that in scenario A the Viterbi decoder accounts for XX% of the run time?
Don't get me wrong - I'm not against optimizations; I'd just like to know more about the general context.
On 15.06.2017 23:43, Vadim Yanitskiy wrote:
> So, I am open for your ideas, opinions and remarks.