Hi Keith,
the "big" problem is not adding Osmux media handling here or there
(whether fixing it in the new osmo-mgw, or adding it to freeswitch, or
to whatever else). That's I think the comparatively the easy part.
The bigger problem is how to *signal* Osmux between the various elements.
The protocols that are used for signaling the media plane typically
assume that each media flow
* uses RTP/UDP
* operates on its own port number for this flow
whereas in Osmux you now have multiple flows (unidirectional calls)
sharing one UDP 5-tuple (src-ip/src-port/dst-ip/dst-port/l4-protocol).
Instead, you have some internal ID to distinguish which of the media
flows inside the Osmux flow you are referring to.
So the non-trivial problem to solve is how to make the media-related
signaling understand those differences, or what kind of mechanism to
invent to map it to the existing presumptions that said implementations
have on how media flows work.
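To make the difference concrete, here is a minimal sketch (Python, purely
illustrative) of what a receiver on such a shared 5-tuple has to do: every
datagram arrives on the same socket, and only an in-band circuit ID tells
you which call it belongs to. The offset used below (CID in octet 2)
follows my reading of the Osmux header; treat it as an assumption, not a
reference implementation.

```python
# Purely illustrative demultiplexer for a trunk where many calls share
# one UDP 5-tuple.  Assumption: octet 2 of each frame carries the
# circuit ID (CID) identifying the call inside the trunk.

def demux(datagram: bytes, flows: dict) -> int:
    """Append one received trunk frame to the per-call queue for its CID."""
    if len(datagram) < 4:
        raise ValueError("frame too short to carry an Osmux header")
    cid = datagram[2]  # the in-band ID replacing the per-call UDP port
    flows.setdefault(cid, []).append(datagram)
    return cid
```

Contrast this with plain RTP/UDP, where the receiving port number alone
identifies the call and no such in-band ID exists - which is exactly the
presumption baked into the signaling protocols.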
Implementing some kind of proxy (e.g. osmo-bsc_nat for the old
SCCPlite-based A interface) is one option, as long as that proxy handles
translation of both the actual media (RTP/Osmux) and the signaling
associated with those media flows (e.g. MGCP in old SCCPlite, or 3GPP
AoIP in the proper A interface). At that point, said proxy knows all the
related state and can perform whatever mapping/translation is needed.
However, proxies only work well in 1:1 relationships. This means you
will be in trouble if you're looking for redundant systems (proxies
would have to replicate state), or if you want a more "peer to
peer" architecture between your villages/deployments, where any village
would establish calls directly to any other village, and hence we're
no longer talking about a strict client-server model.
> 1) This is a question that relates to VoIP traffic in and out of the
> autonomous communities over satellite.
> Currently this is what we HAVE to have in the local village to support
> autonomous operation:
> * BSC
> * MSC
> * HLR
> * Freeswitch (freeswitch is our call control, call routing, billing etc.)
> We can't have any essential backhaul running over the satellite, like A
> for example, as we would lose all functionality when the sat link is
> down, and we'd burn expensive bandwidth!
I'm not sure about the "burning of expensive bandwidth" in an "A
backhaul" scenario. Sure, you'd have some location update traffic, but
the actual media plane of local calls could stay local by means of LCLS,
as Neels has pointed out.
> Maybe another freeswitch could signal the OsmoMGW via some kind of
> SIP<->MGCP translator. Or we teach the MGW to speak SIP?
A media gateway is (as the name implies) not a signaling gateway. As
MGCP is an interoperable protocol for controlling media gateways, you
could use any switch that speaks MGCP to control it. Yate offers some
level of MGCP support, for example, as do many of the large/proprietary
soft switches. This of course doesn't solve the question of how to
represent Osmux on the MGCP/SDP side in the first place.
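One conceivable way to bridge that gap - shown here purely as a
hypothetical sketch - would be a vendor extension parameter on the MGCP
CRCX carrying the Osmux circuit ID, while the SDP still describes the
one shared trunk port. The "X-Osmux" parameter name and all values below
are my invention for illustration, nothing of the sort is specified:

```
CRCX 4711 rtpbridge/1@mgw MGCP 1.0
C: 1A2B3C
M: recvonly
X-Osmux: 2

v=0
o=- 0 0 IN IP4 198.51.100.10
s=-
c=IN IP4 198.51.100.10
t=0 0
m=audio 1984 RTP/AVP 112
a=rtpmap:112 AMR/8000
```

Note that every call between the same two systems would carry identical
c=/m= lines and differ only in the circuit ID - which is exactly what
existing MGCP/SDP implementations don't expect.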
> Another option that I want to put on the table and see what people say
> is to look at implementing osmux as a codec in freeswitch.
I don't know anything about freeswitch internals. I would expect that
the proper way to go about this is to actually have Freeswitch talk MGCP
and use it to control OsmoMGW. That's how the concept of media gateways
was intended to work, and it fits the architectural model.
> I don't know what that would mean in terms of effort.
> I don't know if the osmux code can be abstracted, if that is the right
> term, into its own external includible library that could then be used
> to build a freeswitch codec.
I don't know either, but I would be surprised if it wasn't possible.
The biggest question I'd look into is the concurrency model. Osmocom
code uses single-thread/single-process event-driven select()
abstraction, which doesn't work well with heavily-multithreaded
environments/applications.
> I looked at some implementations of an AMR codec for freeswitch, and
> really it looks like a boilerplate codec with registration, setup, and
> then it calls the encoding and decoding functions in the OpenCORE AMR
> library.
The question is: how can you represent a "trunk" of multiple calls
between two systems sharing one IP/UDP port tuple, and how do you
express that in SDP? Once that problem has been solved, you can signal
Osmux either via SIP or via MGCP. And once that signaling on the wire
is clear, you have to think about how to fit it into the data model
of whatever program you want to teach it to.
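Just to illustrate the shape of the problem: a hypothetical SDP encoding
might add an attribute carrying the circuit ID while the m= line keeps
pointing at the single shared trunk port. The "a=osmux" attribute below
is invented for illustration; nothing like it has been specified:

```
c=IN IP4 203.0.113.5
m=audio 1984 RTP/AVP 112
a=rtpmap:112 AMR/8000
a=osmux:3
```

Every call toward the same peer would then repeat identical c=/m= lines
and differ only in that attribute - an assumption no existing SDP
consumer makes, which is precisely the data-model problem described above.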
> I think though, that in 1683, we are stalled by
> https://osmocom.org/issues/2391 for the split setup.
I don't agree, sorry. In fact, I fail to see the connection. #2391 is
only for annotating logs with more context, not about any functional
changes.
> 2) Another question relates to this proposal of a media-gateway-less
> mode: https://osmocom.org/issues/3142
> I think this suits us, because really our call signalling is all
> happening in freeswitch, and we would prefer, I think, not to have the
> media gateway at all most of the time. And in fact for local
> mobile-to-mobile calls on the same BTS, we would prefer the RTP to be
> local on the sysmobts, or indeed BTS<->BTS!
> Freeswitch has a "bypass media" parameter; in fact you can even
> activate this at call processing time before bridging the call,
> depending on whether it makes sense in terms of a direct connection
> being possible and the lack of transcoding.
> There's also a "bypass media after bridge" parameter that
> automatically uses SIP (re-)INVITEs to switch the RTP stream from
> passing through freeswitch to going directly from end point to end
> point.
> Using "bypass media" in our profile works nicely, of course, as all it
> is doing is using the IP address(es) of the osmo-bts in the SDP, so
> the RTP stream loops on lo in the sysmobts. It would help with
> something that I mentioned at Osmodevcon, Harald, which was that in
> some cases we might like to avoid having our RTP go over the
> (sometimes variable quality) WiFi links between the BTS and the BSC.
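For reference, the two FreeSWITCH knobs mentioned above are channel
variables set from the dialplan, along these lines (the extension name
and number pattern are placeholders):

```xml
<extension name="local-bridge">
  <condition field="destination_number" expression="^(\d{4})$">
    <!-- keep RTP off FreeSWITCH entirely: endpoints talk directly -->
    <action application="set" data="bypass_media=true"/>
    <!-- alternative: bridge with media first, then re-INVITE it away -->
    <!-- <action application="set" data="bypass_media_after_bridge=true"/> -->
    <action application="bridge" data="user/$1"/>
  </condition>
</extension>
```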
As Neels already hinted, the "proper" method of doing this is actually
to extend the current LCLS implementation to go that way. So far, it
only avoids media back-haul from BSC to MSC over the A interface. But
with some extra logic, we could make RTP go directly between BTSs.
Doing this with LCLS means that it's not a binary "bypass", but that you
can actually change it at any point during the call. So you could start
with media going all the way back to the core (for ringtone play-out,
etc.) and then go to local switching during the call, only to later go
back through the core to play some "your balance is about to run out"
announcement from a media playout server in the core, etc.
So it's not an "either-or", but a very flexible system. Also, with LCLS
enabled, the system will automatically figure out if local switching is
possible, i.e. if both legs are going through the same BSC or not.
> So this is something I think I would like to see:
> * Full support for SIP re-INVITE in osmo-sip-connector, which would
> then send an MNCC_RTP_MODIFY, which ends up sending (I think it's
> called an IPAC_CRCX?) to osmo-bts, which will then switch the stream
> endpoints. This I believe is also necessary for handover to function
> with an MNCC socket setup, and we don't currently have it, not even
> in the osmo-nitb.
I don't know off the top of my head how complex that would be. My
general preference is to modify MNCC in a way to either
a) include SDP for media plane parameters in MNCC. This overcomes
various restrictions in MNCC today, such as being only able to
configure one codec, rather than a list of codecs permitted, or
b) simply only handle the RAN/A side MGCP connection from OsmoMSC, and
leave the CN side MGCP connection to the external MNCC entity. This
way MNCC doesn't have to be "encumbered" with SDP etc.
The problem with the external (SIP) world fiddling with too many
low-level details of the RTP media plane is that it doesn't have all of
the information that is owned by the GSM side of things. What kind of
codecs are supported and/or permitted by BTS/BSC/MSC/etc.? What exactly
was negotiated between the various elements? What if, after a handover
between different BTSs, the channel type and/or codec/rate change?
A proper MGW (ours still lacks the re-introduction of transcoding)
exists to resolve all those bits by separating the GSM-network internals
from the external side.
> * Implement a no-media-gateway mode in osmo-msc and have the MSC
> control the media stream using SIP via MNCC/osmo-sip-connector instead
> of controlling the MGW using MGCP.
Possible, but I don't think this is advisable. It's violating layering
/ network architecture all over the place. It assumes that your entire
network from CN to the BTS exists in a transparently routed network
without any firewalls or NATs. And it would introduce special-purpose
code which is not relevant to "normal" GSM network operation. That code
needs to be maintained, and we'd need an entirely different set of tests
for all of this "non-standard" code.
I would much rather want to see a system which is aligned as closely as
possible with the "normal" GSM network use case, which would mean going
down the LCLS route for keeping media flows local. Currently that works
within BSC-MGW, but with some easy patches you can get direct BTS-to-BTS.
The hard problem to solve, as indicated, is how to express osmux in SDP,
and then how to use SIP+OSMUX on the connections between OsmoMSCs of
different villages and/or your central "public network gateway".
Regards,
Harald
--
- Harald Welte <laforge(a)gnumonks.org>
http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)