Hello guys,
I am looking for a way to disable the vocoder for voice calls, i.e.,
supply my own 260 bits every 20 ms to be sent in the uplink TCH
instead of the output from the vocoder in the Calypso DSP, and on the
downlink, receive the bits which would otherwise go into the vocoder.
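(For reference, a back-of-envelope sketch of the numbers involved - the 260 bits per 20 ms is the standard GSM full-rate (GSM 06.10) speech frame, and the PCM figure below is the usual 8 kHz / 16-bit assumption for the linear path:)

```python
# GSM full-rate speech frame parameters as described above:
# 260 bits every 20 ms on the TCH.
FRAME_BITS = 260        # one FR speech frame
FRAME_PERIOD_S = 0.020  # 20 ms

bitrate = FRAME_BITS / FRAME_PERIOD_S
print(f"net TCH/FS speech rate: {bitrate / 1000} kbit/s")  # 13.0 kbit/s

# For comparison, intercepting at the linear-PCM point instead would
# mean moving 8000 samples/s x 16 bits = 128 kbit/s per direction
# (assuming the usual 8 kHz, 16-bit representation).
pcm_rate = 8000 * 16
print(f"linear PCM rate: {pcm_rate / 1000} kbit/s")  # 128.0 kbit/s
```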
My question is: is it possible to do what I seek using the Calypso
DSP know-how ("black magic") already amassed by the OsmocomBB
project? I am thinking of the following three possible starting points:
1. LCR integration: the mobile app can be configured to route voice
   call audio to the Linux host instead of the phone's earpiece and
   mic, right? At which point does it intercept the standard voice
   path? Does it intercept right where I want, passing raw
   over-the-air TCH bits to the external host (so that Asterisk or
   whatever has to run the GSM codec), or does the intercept happen
   at the level of linear PCM samples, with the uplink TCH bits
   still generated by the DSP black box?
2. The burst_ind branch lets the Linux host see every burst that is
received on the downlink, right? It would therefore include TCH
bursts during voice calls, right? This way I should be able to
capture all of the raw TCH bits on the downlink - but what about
the uplink?
3. I've also read about the Calypso-as-BTS hack - way cool! In order
   to work, this hack must support both receiving and transmitting
   arbitrary bursts, right? If neither option 1 nor option 2 would
   work, do you guys think the Calypso-as-BTS implementation code
   could serve as a starting point for what I seek? I still need to
   run the OsmocomBB phone in the standard MS role (not the BTS
   role) and place voice calls on the network in the standard
   manner, but with the TCH rerouted to my own source and sink for
   raw over-the-air bits.
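(For anyone thinking about the burst-level options above, here is a quick sketch of the standard TCH/FS channel-coding arithmetic from GSM 05.03, showing how the 260 vocoder bits relate to the bursts you would see on a burst-capturing branch - just the bit accounting, not an implementation:)

```python
# TCH/FS channel coding per GSM 05.03: the 260 speech bits are split
# into protection classes before convolutional coding.
CLASS_1A = 50   # most sensitive bits, protected by a 3-bit CRC
CLASS_1B = 132  # coded but not CRC-protected
CLASS_2 = 78    # sent uncoded
CRC_BITS = 3
TAIL_BITS = 4

assert CLASS_1A + CLASS_1B + CLASS_2 == 260

# Class 1 bits + CRC + tail go through the rate-1/2 convolutional
# coder; class 2 bits are appended uncoded.
coded = 2 * (CLASS_1A + CRC_BITS + CLASS_1B + TAIL_BITS) + CLASS_2
print(coded)  # 456 coded bits per 20 ms speech frame

# The 456 bits are block-diagonally interleaved over 8 bursts, 57 bits
# per burst half; each normal burst carries halves of two consecutive
# speech frames, so raw bursts must be deinterleaved to recover frames.
assert coded == 8 * 57
```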
TIA for any guidance!