Hello GSM community,
I just put out a new release of the Themyscira GSM codec libraries and
utilities package:
ftp://ftp.freecalypso.org/pub/GSM/codecs/gsm-codec-lib-r2.tar.bz2
ftp://ftp.freecalypso.org/pub/GSM/codecs/gsm-codec-lib-latest.tar.bz2
(symlink)
The two libraries in this package (libgsmefr and libgsmfrp) are
intended for people who develop gateway software interconnecting
Osmocom-based GSM networks to PSTN or other networks, gateways which
include a speech transcoding function that terminates the GSM codec
leg.
If anyone is currently interconnecting an Osmocom GSM voice network to
the outside world using software which you did not write yourself
(Asterisk, FreeSWITCH, Kamailio, whatever), and you care about the plain
old FR codec and/or EFR, beyond just AMR, I encourage you to investigate
exactly how your current non-Osmocom gateway software implements FR and
EFR. Because no pre-existing FOSS libraries correctly implemented FR and
EFR decoding prior to my Themyscira gsm-codec-lib development, most
pre-existing gateway software probably implements these codecs in a
flawed manner:
FR codec: Everyone to my knowledge implements this codec using classic
libgsm, a library that dates back to the 1990s. It is a good library and
a fully correct implementation of the GSM 06.10 spec, and I use it too.
However, it implements _only_ a bare 06.10 encoder and a bare 06.10
decoder, without any of the DTX functions of GSM 06.31 and related specs.
In the encoder direction having no DTX isn't really a problem (you won't
be able to do DTXd anyway unless you have lots of spectrum and are
running multi-ARFCN cells), but the lack of an Rx DTX handler per GSM
06.31 *is* a real problem: if you feed the uplink from a GSM call (RTP
stream from a BTS) to a bare GSM 06.10 decoder such as the gsm_decode()
function in libgsm, you won't get correct handling of SID frames,
which every standard GSM MS will transmit, and you won't get correct
handling of BFI frame gaps, which will always occur. The correct
solution is to insert a call to a GSM 06.31 Rx DTX handler (it is more
than an ECU) just before the call to gsm_decode(), and my libgsmfrp
offering is that GSM 06.31 Rx DTX handler.
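To make the shape of that insertion concrete, here is a toy sketch of the decision logic a GSM 06.31 Rx DTX handler performs in front of gsm_decode(). The enum names, struct, and function are my illustrative inventions, not the actual libgsmfrp API, and a real handler additionally manages the hangover period and SID parameter aging that this toy omits:

```c
#include <stdbool.h>

/* Simplified sketch of GSM 06.31 Rx DTX decision logic.  All names
 * here are illustrative assumptions, NOT the libgsmfrp API. */

enum rx_frame_class {
	RX_SPEECH,	/* good speech frame (BFI=0, not SID) */
	RX_SID,		/* valid SID frame   (BFI=0, SID)     */
	RX_BFI		/* bad frame or no frame received     */
};

enum rx_action {
	ACT_DECODE_SPEECH,	/* pass the frame to the 06.10 decoder */
	ACT_COMFORT_NOISE,	/* synthesize CN from the last SID     */
	ACT_SUBST_MUTE		/* ECU substitution/muting, GSM 06.11  */
};

struct rx_dtx_state {
	bool have_sid;	/* a SID has been seen since the last speech frame */
};

/* One 20 ms step: classify the incoming frame, update state, and
 * tell the caller what to feed the speech decoder. */
enum rx_action rx_dtx_step(struct rx_dtx_state *st, enum rx_frame_class cl)
{
	switch (cl) {
	case RX_SPEECH:
		st->have_sid = false;
		return ACT_DECODE_SPEECH;
	case RX_SID:
		st->have_sid = true;
		return ACT_COMFORT_NOISE;
	case RX_BFI:
		/* BFI inside a DTX pause: keep generating comfort noise
		 * from stored SID parameters.  BFI with no prior SID is
		 * a genuine loss: substitute/mute per GSM 06.11. */
		return st->have_sid ? ACT_COMFORT_NOISE : ACT_SUBST_MUTE;
	}
	return ACT_SUBST_MUTE;
}
```

The point is that a bare gsm_decode() has no equivalent of this state: it happily decodes a SID codeword as if it were speech and has no notion of BFI at all.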
EFR codec: Everyone other than me implements EFR (if they support it
at all) using an AMR library such as libopencore-amrnb. I have seen
totally broken implementations that schlep 244-bit payloads directly
between supposed-to-be-EFR RTP and the AMR library, without reordering
those bits per gsm690_12_2_bitorder[] - those implementations have
exactly zero chance of ever actually working with a real GSM-EFR MS on
the other end - and I've also seen implementations that do perform this
bit reordering and are thus closer to correct. But even the latter
implementations are still wrong when it comes to SID handling: EFR is
equivalent to the highest MR122 mode of AMR only for regular speech
frames, but not for SID. There does exist a special encoding format for
representing GSM-EFR SID in AMR frame interfaces, but libopencore-amrnb
does not support GSM-EFR SID in any way at all. If you take the uplink
from a GSM-EFR call and feed it to libopencore-amrnb decoder, any time
the GSM MS emits a SID frame, strange noise sounds will appear at the
output of that decoder, instead of the correct comfort noise.
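Going back to the bit-ordering half of the problem: the reordering step those broken implementations skip is mechanically trivial, which makes skipping it all the more inexcusable. Here is a sketch, with a made-up 8-entry permutation standing in for the real 244-entry gsm690_12_2_bitorder[] table (one bit per byte, for clarity):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustration of the bit reordering an EFR-over-AMR implementation
 * must perform between the EFR RTP bit order and the MR122 codec
 * parameter order.  The real table has 244 entries; this 8-entry
 * toy_bitorder[] is a made-up stand-in just to show the mechanics. */

static const uint8_t toy_bitorder[8] = { 0, 4, 1, 5, 2, 6, 3, 7 };

/* Permute a frame of n bits (stored one bit per byte):
 * out[i] = in[order[i]]. */
void reorder_bits(uint8_t *out, const uint8_t *in,
		  const uint8_t *order, size_t n)
{
	for (size_t i = 0; i < n; i++)
		out[i] = in[order[i]];
}
```

The broken gateways are effectively feeding `in` straight to the AMR library instead of `out` - and with 244 bits scrambled, the decoder output is garbage every single time.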
Themyscira libgsmefr is a proper encoder and decoder library for EFR,
based on the EFR reference code from ETSI, in exactly the same way that
libopencore-amrnb is based on the AMR reference code from ETSI/3GPP.
It still has some performance problems which I will be working on later
(the goal of getting it to perform no worse than libopencore-amrnb has
not been achieved yet), but at least it is correct.
Hasta la Victoria, Siempre,
Mychaela aka The Mother
Dear Harald,
A long time has passed since I last worked with the Nokia Site family
and OpenBSC. I managed to save an UltraSite cabinet from scrap, and I am
now trying to revive it for a museum.
On the old NITB versions I managed to make this work once, now I am
trying with the new (at least to me) Osmo-BSC implementation.
To keep it simple, only one TRX is configured:
OML <--> E1 TS 1 (64kbit)
RSL <--> E1 TS 2 (64kbit)
TRXSIG <--> E1 TS 3 and 4
DAHDI is used with a Digium Wildcard TE110P T1/E1 Board.
Osmo-BSC is able to do the OML bootstrap, but the RSL waits for LAPD endlessly.
My first question is: should Osmo-BSC be able to bootstrap the BTS
fully (all the way to "on air" mode) if it is not (yet) connected to
any other core element (MGW, MSC, STP) ?
This is the Osmo-BSC log (after the NOKIA_BTS_RESET command + the
reset_wait_time passed):
DLLAPD input/lapd.c:245 (0:1-T1-S62): LAPD Allocating SAP for SAPI=62
/ TEI=1 (dl=0x56284cfbd220, sap=0x56284cfbd200)
DLLAPD input/lapd.c:255 (0:1-T1-S62): k=1 N200=3 N201=260 T200=1.0 T203=10.0
DLLAPD input/lapd.c:519 (0:1-T1-S62): LAPD DL-ESTABLISH request TEI=1 SAPI=62
DLLAPD input/lapd.c:654 (0:1-T1-S62) LAPD DL-ESTABLISH confirm TEI=1 SAPI=62
DNM bts_nokia_site.c:63 (bts=0) bootstrapping OML
DNM bts_nokia_site.c:1729 (bts=0) Rx ABIS_OM_MDISC_FOM
DNM bts_nokia_site.c:1573 (bts=0) Rx (0x82) NOKIA_BTS_OMU_STARTED
DNM bts_nokia_site.c:1583 (bts=0) Rx BTS type = 17 (UltraSite GSM 900)
DNM bts_nokia_site.c:1098 (bts=0) Sending NOKIA_BTS_START_DOWNLOAD_REQ
DNM bts_nokia_site.c:1729 (bts=0) Rx ABIS_OM_MDISC_FOM
DNM bts_nokia_site.c:1573 (bts=0) Rx (0x84) NOKIA_BTS_MF_REQ
DNM bts_nokia_site.c:1729 (bts=0) Rx ABIS_OM_MDISC_FOM
DNM bts_nokia_site.c:1573 (bts=0) Rx (0x88) NOKIA_BTS_CONF_REQ
DNM bts_nokia_site.c:1098 (bts=0) Sending NOKIA_BTS_ACK
DNM bts_nokia_site.c:1260 (bts=0) Sending multi-segment 0
DNM bts_nokia_site.c:1260 (bts=0) Sending multi-segment 1
DNM bts_nokia_site.c:1729 (bts=0) Rx ABIS_OM_MDISC_FOM
DNM bts_nokia_site.c:1573 (bts=0) Rx (0x81) NOKIA_BTS_ACK
DNM bts_nokia_site.c:1604 (bts=0) Rx ACK = 1
DLLAPD input/lapd.c:245 (0:2-T1-S0): LAPD Allocating SAP for SAPI=0 /
TEI=1 (dl= 0x56284d252a20, sap=0x56284d252a00)
DLLAPD input/lapd.c:255 (0:2-T1-S0): k=2 N200=3 N201=260 T200=1.0 T203=10.0
DLLAPD input/lapd.c:519 (0:2-T1-S0): LAPD DL-ESTABLISH request TEI=1 SAPI=0
DLLAPD lapd_core.c:421 (0:2-T1-S0) sending MDL-ERROR-IND cause 1 from
state LAPD_STATE_IDLE
DLLAPD input/lapd.c:658 (0:2-T1-S0) LAPD DL-RELEASE indication TEI=1 SAPI=0
DLLAPD input/lapd.c:282 (0:2-T1-S0): LAPD Freeing SAP for SAPI=0 /
TEI=1 (dl=0x56284d252a20, sap=0x56284d252a00)
DCHAN lchan_fsm.c:1779
lchan(0-0-0-CCCH_SDCCH4-0)[0x56284d251770]{UNUSED}: (type=NONE) lchan
allocation failed in state UNUSED: LCHAN_EV_TS_ERROR
DCHAN lchan_fsm.c:197
lchan(0-0-0-CCCH_SDCCH4-0)[0x56284d251770]{UNUSED}: (type=NONE) lchan
activation failed (lchan allocation failed in state UNUSED:
LCHAN_EV_TS_ERROR)
DCHAN lchan_fsm.c:1779
lchan(0-0-0-CCCH_SDCCH4-1)[0x56284d2519b0]{UNUSED}: (type=NONE) lchan
allocation failed in state UNUSED: LCHAN_EV_TS_ERROR
DCHAN lchan_fsm.c:197
lchan(0-0-0-CCCH_SDCCH4-1)[0x56284d2519b0]{UNUSED}: (type=NONE) lchan
activation failed (lchan allocation failed in state UNUSED:
LCHAN_EV_TS_ERROR)
DCHAN lchan_fsm.c:1779
lchan(0-0-0-CCCH_SDCCH4-2)[0x56284d251bf0]{UNUSED}: (type=NONE) lchan
allocation failed in state UNUSED: LCHAN_EV_TS_ERROR
DCHAN lchan_fsm.c:197
lchan(0-0-0-CCCH_SDCCH4-2)[0x56284d251bf0]{UNUSED}: (type=NONE) lchan
activation failed (lchan allocation failed in state UNUSED:
LCHAN_EV_TS_ERROR)
DCHAN lchan_fsm.c:1779
lchan(0-0-0-CCCH_SDCCH4-3)[0x56284d251e30]{UNUSED}: (type=NONE) lchan
allocation failed in state UNUSED: LCHAN_EV_TS_ERROR
DCHAN lchan_fsm.c:197
lchan(0-0-0-CCCH_SDCCH4-3)[0x56284d251e30]{UNUSED}: (type=NONE) lchan
activation failed (lchan allocation failed in state UNUSED:
LCHAN_EV_TS_ERROR)
It would be nice to get this old beast running again.
I would much appreciate any and all help.
Regards,
Csaba
Hello Osmocom,
I know a lot of people here have salvaged T1/E1 BTS equipment from
Nokia, Ericsson etc. But what about the next level up - has anyone
been able to salvage a classic T1/E1 BSC that goes with those BTSes?
And given the hardware, does anyone in our community know how to get
one of those beasts working?
I am interested in the TRAU component of the classic GSM BSS
architecture, and I would really love to lay my hands (remotely, via
OCTOI, would be just fine) on one of those beauties. Specifically, I
seek to feed custom-crafted bits to the TRAU's Abis input and capture
what it puts out on the A interface G.711 side, and vice-versa.
What can be learned from such experiments? Several things:
* I would love to play with TFO: see the TFO_REQ in-band signaling
messages the TRAU should put out on its own during the first 5 s or
so, then send our own TFO_REQ and TFO_ACK to the TRAU, do the whole
protocol, and get the TRAU to actually enter TFO mode. Reading the
spec is one thing, but seeing it in action would be so much more fun!
I've also been wanting to write my own FOSS implementation of in-band
TFO within G.711 RTP, but it would be an impractical task without
having some other existing implementation to test against.
* If we can get TFO to work, we'll be able to see exactly how real
TRAUs handled the onerous requirements of TS 28.062 section C.3.2.1.1.
Implementing those rules for FR1 would be quite easy, but try doing
the same for EFR or HR1 - *very* daunting! It would be lovely to see
exactly what actual historical implementations did here.
* Outside of TFO, we should be able to get the TRAU into a known state
by feeding it spec-defined encoder and decoder homing frames, and then
craft our own test sequences (beyond the standard ones it was surely
tested with by its designers) to exercise those parts of the codec
implementation where the specs allow implementors to innovate,
particularly everything to do with error concealment.
But doing all of the above requires access to some old-style T1/E1 BSC
that contains such a TRAU. Does anyone in our community have access
to such hardware?
M~
Should I change the way I do private branches in osmocom?
I push a lot of private branches everywhere. I was asked in PM if I could cut
down on branches a bit because it clutters other developers' view of the git
history. My immediate response was: the other developer should simply not fetch
my branches, or invoke tig or gitk in a way that shows only selected branches.
But I reflected a bit and would like to ask generally how we want to do it.
For osmocom it apparently is mostly me pushing private branches a lot. What if
we all did that...
In linux kernel development it seems to be more like each developer has her own
public repository to make a mess in.
So, I could make git clones of our main repositories in gitea and keep my
private branches there. It seems like maybe I should do that out of common
courtesy.
But it also adds a bunch of overhead for me, keeping separate repositories
synced. Having multiple remotes affects git command-line behavior. I used to
have separate fetch/push URLs for a while, but it was annoying in some ways.
I can change my ways, but only if I really have to.
Any opinions? Are my branches annoying?
Aspects:
- backup of my ongoing work. (daily)
- offering preliminary work to customers for manual build. (weekly)
- seeing what others are up to. (rare but happens)
- limiting branch clutter. (all the time for everyone)
thanks!
~N
libosmocore/include/osmocom/codec/ecu.h lines 32-44:
/* As the developer and copyright holder of the related code, I hereby
* state that any ECU implementation using 'struct osmo_ecu_ops' and
* registering with the 'osmo_ecu_register()' function shall not be
* considered as a derivative work under any applicable copyright law;
* the copyleft terms of GPLv2 shall hence not apply to any such ECU
* implementation.
*
* The intent of the above exception is to allow anyone to combine third
* party Error Concealment Unit implementations with libosmocodec.
* including but not limited to such published by ETSI.
*
* -- Harald Welte <laforge(a)gnumonks.org> on August 1, 2019.
*/
Question about "ECU implementations ... published by ETSI": what
exactly are they? To the best of my knowledge (I could be wrong, I am
a late joiner to this party), ETSI/3GPP never published any ECU
implementations - instead what they did publish are reference encoder
and decoder implementations for HR1, EFR and AMR codecs. In each of
those reference codec implementations, the error concealment function
is deeply intertwined with the guts of the decoder (the thing that puts
out a block of 160 linear PCM samples, *not* a corrected codec frame)
and does not exist as a separable piece - thus none of those reference
codec implementations can be meaningfully called an ECU implementation,
and there is no code in there that could be hooked up to the libosmocodec
ECU framework in a technically feasible way, no matter what license.
So what am I missing? Are there some other code publications from
ETSI or 3GPP which I am not aware of, ones that do implement a
separate or at least separable ECU, as opposed to a complete decoder
that takes potentially-errored codec frames as input and emits linear
PCM as output? If any such "true" ECU implementations do exist, I
would love to be pointed in the direction of one!
M~
I fixed something in SCCP_Emulation.ttcn (from Ericsson), and while
trying out what I am allowed to do, I ended up just pushing the fix onto
https://github.com/osmocom/titan.ProtocolEmulations.SCCP master
See https://github.com/osmocom/titan.ProtocolEmulations.SCCP/commit/17a894fc662…
The fix is trivial enough, but I'd like to note that I pushed something that
no-one reviewed, and hope that's ok.
My GitHub user is part of that Osmocom group and I set the remote to a
git@... URL, which is why I could just push onto master.
For the future, do we have a process to review fixes for Ericsson's ttcn code?
Send it to this ML?
~N
Hello GSM community,
I just pushed a private feature branch to osmo-bts, implementing a
feature of much interest to me:
https://cgit.osmocom.org/osmo-bts/log/?h=falconia/rtp_traulike
What is it all about? This document, contained inside the branch,
explains it all:
https://cgit.osmocom.org/osmo-bts/tree/doc/rtp_traulike.txt?h=falconia/rtp_…
Some day I would like to see this feature merged (it is enabled with a
vty option, and if you don't set that option, nothing changes), but
whether it will ever be merged or not, this feature is what I am going
to run with in my ThemWi operation, and themwi-mgw (my Osmocom CN to
G.711 PSTN transcoder) is going to depend on it.
Enjoy!
Hasta la Victoria, Siempre,
Mychaela aka The Mother
Hello GSM community,
I realize that most of you over in Osmocom land would much rather see
me submit Gerrit patches than write lengthy ML posts, but right now I
really need some help with the algorithmic logic of a feature before I
can develop patches implementing said feature - so please bear with
me.
The fundamental question is: what is the most correct way for a GSM
network (let's ignore divisions between network elements for the
moment) to construct the DL speech frame stream for call leg B if it
is coming from the UL of call leg A? I am talking about call scenarios
where call leg A and call leg B use the same codec, thus no transcoding
is done (TrFO), and let me also further restrict this question to
old-style FR/HR/EFR codecs, as opposed to AMR.
At first the answer may seem so obvious that many people will probably
wonder why I am asking such a silly question: just take the speech
frame stream from call leg A UL, feed it to call leg B DL and be done
with it, right? But the question is not so simple. What should the
UL-to-DL mapper do when the UL stream hits a BFI instead of a valid
speech frame? What should this mapper do if call leg A does DTXu but
there is no DTXd on call leg B?
The only place in 3GPP specs where I could find an answer to this
question is TS 28.062 section C.3.2.1.1. Yes, I know that it's the
spec for in-band TFO within G.711, a feature which I reason no one
other than me probably cares about, but that particular section - I am
talking about section C.3.2.1.1 specifically, you can ignore the rest
of TFO for the purpose of this question - seems to me like it should
apply to _any_ scenario where an FR/HR/EFR frame stream is directly
passed from call leg A to call leg B without transcoding, including
scenarios like a self-contained Osmocom network with OsmoMSC switching
from one MS to another without any external MNCC.
Let us first consider the case of FR1 codec, which is the simplest.
Suppose call leg A has DTXu but call leg B has no DTXd - one can't do
DTXd on C0, so if 200 kHz of spectrum is all you have, operating a BTS
with just C0, then no one can do DTXd. When Alice on call leg A is
silent, her MS will send a SID every 480 ms and have its Tx off the
rest of the time, and the frame stream from the BTS serving her call
leg will exhibit a SID frame in every 24th position and BFI placemarkers
in all other positions.
So what should the DL frame stream going to Bob look like in this
scenario? My reading of section C.3.2.1.1 (second paragraph from the
top is the one that covers this scenario) tells me that the *network*
(set aside the question of which element) is supposed to turn that
stream of BFIs with occasional interspersed SIDs into a stream of
valid *speech* frames going to Bob, a stream of valid speech frames
representing comfort noise as produced by a network-located CN
generator. The spec says in that paragraph: "The Downlink TRAU Frames
shall not contain the SID codeword, but parameters that allow a direct
decoding."
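For FR1 this requirement is tractable precisely because GSM 06.12 describes comfort noise generation in terms of the same parameter set a regular speech frame carries. Here is a rough sketch of how a network-located CN generator could fill such a "valid speech" frame: spectral parameters (LARs) and block amplitude (Xmax) taken from the received SID, grid position and RPE pulses randomized, LTP gain forced to zero. The struct layout and the rand()-based randomness are my illustrative assumptions, not spec-mandated encodings:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch: fill an FR1 "valid speech" frame with comfort-noise
 * parameters per my reading of GSM 06.12 + TS 28.062 C.3.2.1.1.
 * Field widths follow GSM 06.10; everything else is simplified. */

struct fr1_params {
	uint8_t lar[8];		/* log area ratios                    */
	uint8_t ltp_lag[4];	/* LTP lag per 5 ms subframe, 7 bits  */
	uint8_t ltp_gain[4];	/* LTP gain per subframe, 2 bits      */
	uint8_t grid[4];	/* RPE grid position, 2 bits          */
	uint8_t xmax[4];	/* block amplitude, 6 bits            */
	uint8_t pulse[4][13];	/* RPE pulses, 3 bits each            */
};

void fr1_make_cn_frame(struct fr1_params *out,
		       const uint8_t sid_lar[8], uint8_t sid_xmax)
{
	for (int i = 0; i < 8; i++)
		out->lar[i] = sid_lar[i];	/* spectrum from SID  */
	for (int sf = 0; sf < 4; sf++) {
		out->ltp_gain[sf] = 0;		/* no LTP feedback    */
		out->ltp_lag[sf] = 40 + rand() % 80; /* lag in 40..119 */
		out->grid[sf] = rand() % 4;	/* random grid        */
		out->xmax[sf] = sid_xmax;	/* amplitude from SID */
		for (int k = 0; k < 13; k++)
			out->pulse[sf][k] = rand() % 8;	/* random RPE */
	}
}
```

A frame built this way contains no SID codeword, so any standard FR1 decoder on the far side will decode it directly as (noise-like) speech - which is exactly the "parameters that allow a direct decoding" the spec calls for, as I read it.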
Needless to say, there is no code anywhere in Osmocom currently that
does the above, thus current Osmocom is not able to produce the fancy
TrFO behavior which the spec(s) seem to call for. (I said "spec(s)"
vaguely because I only found a spec for TFO, not for TrFO, but I don't
see any reason why this aspect of TFO spec shouldn't also apply to
TrFO when the actual problem at hand is exactly the same.)
But no no no guys, I am *not* bashing Osmocom here, I am seeking to
improve it! As it happens, fully implementing the complete set of
TS 28.062 section C.3.2.1.1 rules (I shall hereafter call them C3211
rules for short) for the original FR1 codec would be quite easy, and I
already have a code implementation which I am eyeing to integrate into
Osmocom. Themyscira libgsmfrp is a FLOSS library that implements a
complete, spec-compliant Rx DTX handler for FR1, and it is 100% my own
original work, not based on ETSI or TI or any other sources, thus no
silly license issues - and I am eyeing the idea of integrating the
same functions, appropriately renamed, repackaged and re-API-ed, into
libosmocodec, and then invoking that functionality in OsmoBTS, in the
code path that goes from RTP Rx to feeding TCH DL to PHY layers.
But while FR1 is easy, doing the same for EFR is where the real
difficulty lies, and this is the part where I come to the community
for help. The key diff between FR1 and EFR that matters here is how
their respective Rx DTX handlers are defined in the specs: for FR1 the
Rx DTX handler is a separate piece, with the interface from this Rx
DTX handler to the main body of the decoder being another 260-bit FR1
frame (this time without possibility of SID or BFI), and the specs for
DTX (06.31 plus 06.11 and 06.12) define and describe the needed Rx DTX
handler in terms of emitting that secondary 260-bit FR1 frame. Thus
implementing this functionality in Themyscira libgsmfrp was a simple
matter of taking the logic described in the specs and turning it into
code.
But for EFR the specs do not define the Rx DTX handler as a separate
piece, instead it is integrated into the guts of the full decoder.
There is a decoder, presented as published C source from ETSI, that
takes a 244-bit EFR frame, which can be either speech or SID, *plus* a
BFI flag as input, and emits a block of 160 PCM samples as output -
all Rx DTX logic is buried inside, intertwined with the actual speech
decoder operation, which is naturally quite complex.
I've already spent a lot of time looking at the reference C
implementation of EFR from ETSI - I kinda had to, as I did the rather
substantial work of turning it into a usable function library, with
state structures and a well-defined interface instead of global vars
and namespace pollution - the result is Themyscira libgsmefr - but I
am still nowhere closer to being able to implement C3211 functionality
for this codec.
The problem is this: starting with an EFR SID frame and previous history
of a few speech frames (the hangover period), how would one produce
output EFR speech frames (not SID) that represent comfort noise, as
C3211 says is required? We can all easily look at ETSI's original
code that generates CN as part of the standard decoder: but that code
generates linear PCM output, not secondary EFR speech frames that
represent CN. There is the main body of the speech decoder, and there
are conditions throughout that slightly modify this decoder logic in
subtle ways for CN generation and/or for ECU-style substitution/muting
- but no guidance for how one could construct "valid speech" EFR
frames that would produce a similar result when fed to the standard
decoder in the MS after crossing radio leg B.
This is where I could really use some input from more senior and more
knowledgeable GSM-ers: does anyone know how mainstream commercial GSM
infra vendors (particularly "ancient" ones of pure T1/E1 TDM kind)
have solved this problem? What do _they_ do in the scenario of call
leg A with DTXu turning into call leg B without DTXd?
Given that those specs were written in the happy and glorious days
when everyone used 2G, when GSM operators had lots of spectrum, and
when most networks operated large multi-ARFCN BTSes with frequency
hopping, I figure that almost everyone probably ran with DTXd enabled
when that spec section was written - hence I wonder whether the authors
of the TFO spec failed to appreciate the magnitude of what they were
asking implementors to do when they stipulated that a UL-to-DL mapping
from DTXu-on to DTXd-off "shall" emit no-SID speech frames that
represent TFO-TRAU-generated CN. And I wonder whether the actual
implementors ignored that stipulation even Back In The Day...
Here is one way we might be able to "cheat" - what if we implement
a sort of fake DTXd in OsmoBTS for times when real DTXd is not possible
because we only have C0? Here is what I mean: suppose the stream of
TCH frames about to be sent to the PHY layer (perhaps the output of my
proposed, to-be-implemented UL-to-DL mapper) is the kind that would be
intended for DTXd-enabled DL in the original GSM architecture, with
all speech pauses filled with repeated SIDs, every 20 ms without fail.
A traditional DTXd BTS is supposed to transmit only those SIDs that
either immediately follow a speech frame or fall in the SACCH-aligned
always-Tx position, and turn the Tx off at other times. We can't
actually turn off Tx at those "other" times when we are C0 - but what
if we create a "fake DTXd" effect by transmitting a dummy FACCH
containing an L2 fill frame at exactly the same times when we would do
real DTXd if we could? The end effect will be that the spec-based Rx
DTX handler in the MS will "see" the same "thing" as with real DTXd:
receiving FACCH in all those "empty" 20 ms frame windows will cause
that spec-based Rx DTX handler to get BFI=1, exactly the same as if
radio Tx were truly off and the MS were listening to radio noise.
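In code, the proposed selection rule might look like the sketch below. Treating position 0 of the 24-frame (480 ms) cycle as the SACCH-aligned always-Tx slot is a simplifying assumption of mine; the real alignment rule lives in GSM 05.08 / 06.31, and the input stream is assumed to be the SID-filled kind described above (speech or SID in every 20 ms window, no gaps):

```c
#include <stdbool.h>

/* Sketch of "fake DTXd" frame selection for a C0-only BTS:
 * decide, per 20 ms TCH frame, whether to transmit the frame
 * itself or substitute a dummy FACCH fill frame in the positions
 * where a real DTXd BTS would switch its Tx off. */

enum dl_tx { TX_FRAME, TX_DUMMY_FACCH };

enum dl_tx fake_dtxd(int pos_in_cycle,	/* 0..23 within 480 ms cycle */
		     bool is_sid, bool prev_was_speech)
{
	if (!is_sid)
		return TX_FRAME;	/* speech always goes out      */
	if (prev_was_speech)
		return TX_FRAME;	/* first SID right after speech */
	if (pos_in_cycle == 0)
		return TX_FRAME;	/* assumed SACCH-aligned slot  */
	/* Where real DTXd would turn Tx off, send a dummy FACCH so
	 * the MS sees BFI=1 in that window, just like radio noise. */
	return TX_DUMMY_FACCH;
}
```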
Anyway, I would love to hear other people's thoughts on these ideas,
especially if someone happens to know how traditional GSM infra vendors
handled those pesky requirements of TS 28.062 section C.3.2.1.1 for
UL-to-DL mapping.
Sincerely,
Your GSM-obsessed Mother Mychaela
(Trying TLDR as asked)
Does anyone here happen to live in a country where at least one public
GSM network is known to still operate with traditional T1/E1 TRAUs? I
would like to try making an international call from my G.711 VoIP
system in USA to a GSM MS in whatever country that is served by a
traditional TRAU, and see if I can detect TFO in-band signaling within
the G.711 PCM sample stream coming from the far end. Or if someone
knows for certain some solid technical reason why this idea is doomed,
I would appreciate that explanation too.
Reason for my interest: I have a desire to implement GSM 08.62 or 3GPP
TS 28.062 TFO, and of course publish the source under a public domain
license, but the protocol is complex enough to where I don't see any
feasible way to test for correctness other than through actual
interoperability testing with an existing implementation.
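The detection experiment could start with something like the following sketch. Per my reading of TS 28.062, TFO in-band messages ride in the least significant bit of every 16th PCM sample, so step one is extracting that candidate bitstream from the received G.711 octets (trying all 16 phase offsets, since the alignment is unknown); recognizing the TFO message sync pattern within it would be the next, much larger step:

```c
#include <stdint.h>
#include <stddef.h>

/* Extract the LSB of every 16th G.711 sample, starting at a given
 * phase offset (0..15), into bits_out.  Returns the number of bits
 * extracted.  This is only the raw bitstream; TFO message framing
 * and sync detection per TS 28.062 are not attempted here. */
size_t tfo_extract_lsbs(uint8_t *bits_out, size_t max_bits,
			const uint8_t *pcm, size_t n_samples,
			unsigned phase)
{
	size_t n = 0;
	for (size_t i = phase; i < n_samples && n < max_bits; i += 16)
		bits_out[n++] = pcm[i] & 1;	/* LSB of every 16th */
	return n;
}
```

Running this on a capture of the far end's first few seconds, across all 16 phases, and looking for a non-random pattern in the extracted bits would already tell us whether anything TFO-like is being emitted at all.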
M~
Hello Osmocom,
The Gerrit instructions page in the OsmoCNI wiki says:
> If you would like to push private branches to the Gerrit repository,
> you also need to be added to the "known users" group.
> Please send a short requesting email to openbsc(a)lists.osmocom.org.
I am requesting to be added to the just-described "known users" group
so I can push private branches to osmo-bts repository. My current
reason for desiring such ability is that I am working on a large-ish
feature in OS#5975, and I anticipate needing to do a lot of work
before my patch series will reach mergeable quality. As I understand
it, private branches in Osmocom git repositories of the form
$developer_name/$feature_name exist precisely for feature development
scenarios just like the one I currently find myself in, hence I request
the necessary permissions, same as other Osmocom developers.
With devotion to GSM Forever,
(Hasta la Victoria, Siempre,)
Mother Mychaela