Hi,
- Should I use the master branch in osmocom-bb git, or one of the
other branches? The Branches page in the wiki is blank.
Currently, master.
If you have trouble establishing voice calls, you can try to cherry-pick df6269173cbc72e02964159cb60d2db712eb0f07 over master.
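For reference, cherry-picking it onto your local master would look roughly like this (assuming a normal clone with origin pointing at the osmocom-bb repository):

    git checkout master
    git pull origin master
    git cherry-pick df6269173cbc72e02964159cb60d2db712eb0f07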
- Am I correct in my understanding that, at least in some branch (see
above), osmocom-bb does support this phone well enough to actually make a call? (Yes, I realize that layers 2&3 run on an attached host, hence the phone has to be tethered to a laptop running osmocon and mobile the whole time, and I do know how to enable Tx in the target build. :)
Voice & SMS should work fine from the master branch.
Some small code changes might be needed for the DP-L10 though; I'm not sure, as I never really used this phone myself, so I can't say. The wiki might have more info, and if it doesn't, feel free to update it once you've figured it out.
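For reference, running it on the DP-L10 should look roughly like this (loader mode and firmware path quoted from memory, so double-check them against the wiki page for that phone):

    # load the layer1 firmware through the Calypso ROM loader
    ./osmocon -p /dev/ttyUSB0 -m romload \
        ../../target/firmware/board/pirelli_dpl10/layer1.highram.bin

    # in a second terminal, start the layer2/3 side on the host
    ./mobile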
- How is the voice routing implemented? Do I need to use lcr
integration on the host, or will the audio come out of the phone's own speaker even though all higher layers of the stack are running on the PC? If the voice call audio does go through the phone's own speaker and mic, which ones? This phone model has both an earpiece speaker and a loudspeaker, plus the usual analog headset jack. Which of these audio routing options are supported, if any?
By default the audio is routed to/from the phone's default speaker/microphone. That's because during normal phone operation, the ARM and the higher layers never even see the audio ... the audio frames are received, demodulated, decoded, decompressed and fed directly to the speaker, all inside the DSP. The upper layers just configure the audio mode.
We also added an option in the firmware to route the audio frames to/from the PC using a special DSP mode, but if you want to get the audio out externally, this is currently only possible with the LCR integration (or you have to modify the code ...).
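If you go the LCR route, the mobile app has to hand call control over to LCR through its MNCC socket instead of using the built-in handler; from memory the option is -m / --mncc-sock (check mobile --help):

    # offer the MNCC socket so LCR can attach to it
    ./mobile -m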
Cheers,
Sylvain