On Mon, Jun 13, 2016 at 7:04 PM, Harald Welte <laforge(a)gnumonks.org> wrote:
On Mon, Apr 11, 2016 at 05:04:13PM -0700, Tom Tsou wrote:
Related is the question of when osmo-trx should send CLOCK
indications. Right now a CLOCK indication is sent on arrival of
commands on the control interface. After starting, CLOCK indications
are sent at a one-second interval (216 frames). The indications sent
from the control interface are why osmo-bts is receiving CLOCK so
early.
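For reference, on the OpenBTS-style TRX control interface these clock
indications are plain-text UDP datagrams. A minimal parser sketch
(assuming the "IND CLOCK <fn>" wire format and the 2715648-frame GSM
hyperframe; the function name is mine, not from the codebase):

```c
#include <stdio.h>

/* The GSM frame number wraps at the hyperframe boundary. */
#define GSM_HYPERFRAME 2715648

/* Parse a clock indication of the form "IND CLOCK <fn>".
 * Returns the frame number (modulo the hyperframe), or -1 if the
 * message does not match the expected format. */
static int parse_clock_ind(const char *msg)
{
	unsigned int fn;

	if (sscanf(msg, "IND CLOCK %u", &fn) != 1)
		return -1;
	return (int)(fn % GSM_HYPERFRAME);
}
```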
I don't know, to be honest. I didn't write osmo-bts-trx. Other PHY
layers we interact with send us information on the GSM frame number
with every frame number increment.
We also receive PH-ReadyToSend.ind in line with GSM PHY layer
specifications for each frame to be sent. osmo-bts simply responds to
those and all clock handling is entirely in the PHY.
As osmo-trx doesn't do that (it's only half of a PHY layer), the
missing part (the scheduler) is implemented inside osmo-bts-trx. This
scheduler then generates the equivalent of the PH-ReadyToSend.ind
towards L1SAP and the common part of OsmoBTS.
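The core of that ready-to-send generation is just frame-number
arithmetic: the scheduler asks the upper layers for burst data some
frames ahead of the TRX clock so the data arrives in time. A sketch
under those assumptions (helper name and the "advance" parameter are
illustrative, not the actual osmo-bts-trx API):

```c
#define GSM_HYPERFRAME 2715648

/* Compute the frame number for which a PH-RTS.ind-equivalent should
 * be issued, given the current TRX clock frame number and a
 * configurable fn-advance (e.g. 20 frames of lead time). */
static unsigned int rts_fn(unsigned int clock_fn, unsigned int advance)
{
	return (clock_fn + advance) % GSM_HYPERFRAME;
}
```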
So in osmo-bts-trx there seems to be code in trx_sched_clock() that
generates the frame numbers locally, by starting an osmo_timer, during
periods with no "CLOCK IND" from osmo-trx. This seems a bit ugly, but
of course it reduces the number of UDP packets we need to process.
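The local interpolation between CLOCK INDs boils down to converting
elapsed wall-clock time into elapsed frames; the GSM frame period is
exactly 120/26 ms (~4.615 ms), so 26 frames pass every 120 ms. A
sketch of that conversion (helper name is mine):

```c
/* How many whole GSM frames fit into 'elapsed_ms' milliseconds,
 * using the exact frame period of 120/26 ms. Computed with integer
 * arithmetic to avoid floating-point drift. */
static unsigned int frames_elapsed(unsigned int elapsed_ms)
{
	/* 26 frames per 120 ms; truncate to whole frames. */
	return (elapsed_ms * 26) / 120;
}
```

This also explains the "(216 frames)" figure above: one second of
elapsed time corresponds to 216 whole frames.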
If osmo-bts-trx users have not experienced any timing-related issues,
I think there is no reason to introduce any changes into this part,
i.e. keep the frequency of the "CLOCK IND" frames as-is, to also
remain compatible with other OpenBTS-like transceiver implementations.
I don't remember issues with this part, but looking into the code I
don't see much log printing there, so even if we encountered them,
they probably went unnoticed. Which is not good behavior.
I personally think that sending CLOCK IND every frame is a good idea.
If we do this, we only need to check for lost CLOCK INDs, and the code
becomes much simpler. We're already sending 8 times more UDP packets
even in idle mode (8 downlink bursts per frame), and 16 times more
when fully loaded (downlink + uplink); if we're running a multi-TRX
system, the proportion is even higher. We can do more 'perf'
monitoring, but my feeling is that the impact will be minor. If we
find that UDP adds significant overhead, we can switch to a more
efficient IPC (UNIX sockets?), but I seriously doubt that this will be
needed.
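With a CLOCK IND per frame, the "check for lost CLOCK INDs" really is
trivial: consecutive indications must differ by exactly one frame,
modulo the hyperframe wrap. A sketch of that gap check (hypothetical
helper, not existing code):

```c
#define GSM_HYPERFRAME 2715648

/* Returns non-zero if one or more CLOCK INDs were lost between two
 * consecutive indications, i.e. if the frame numbers do not differ
 * by exactly 1 (taking the hyperframe wrap into account). */
static int clock_ind_lost(unsigned int prev_fn, unsigned int cur_fn)
{
	unsigned int diff = (cur_fn + GSM_HYPERFRAME - prev_fn) % GSM_HYPERFRAME;

	return diff != 1;
}
```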
As a side note, we (Fairwaves) will be able to look into these issues
deeply only in a few months, in the best case. So if there are any
volunteers who want to get all these issues fixed before that, don't
hold your breath.
--
Regards,
Alexander Chemeris.
CEO, Fairwaves, Inc.
https://fairwaves.co