Hi Sebastian and team,
I've been thinking a bit about how to implement many concurrent OsmocomBB instances in a setup where we use Sebastian's OsmocomBB virt_phy and osmo-bts-virtual.
The point of having virtual mobile phones is that you can easily simulate many of them, far more than you could manage physically, so I'm definitely thinking of hundreds and preferably thousands of MSs. This way we can easily simulate load on BTSs, use up all channels, get into overload situations where we have to reject immediate assignments, etc.
Right now, the data structures in virt_phy are a bit convoluted, and there is no clear separation between the state of the actual GSMTAP socket and the MS-specific state attached to one given L1CTL connection.
So if you want to run multiple MSs, it currently seems one would have to run multiple pairs of { virt_phy, mobile }, one pair per MS. This leads to a rather large set of processes, all of which have to process their own copy of the same UDP messages received on the multicast socket.
Connection-oriented unix domain sockets can very well handle multiple connections (similar to a single TCP server being able to accept many incoming connections). So if the MS-specific state (like the scheduler / L1CTL related state) were properly separated from the GSMTAP side, any number of "mobile" programs (or other L1CTL users) could connect to the same virt_phy program and share it.
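To illustrate what I mean, here is a minimal sketch of such a multi-connection server, using plain POSIX calls only; the socket path is just the usual example, and real code would of course hook the fds into the libosmocore select loop instead of blocking in accept():

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
        struct sockaddr_un sun = { .sun_family = AF_UNIX };
        int lfd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(sun.sun_path, "/tmp/osmocom_l2", sizeof(sun.sun_path) - 1);
        unlink(sun.sun_path);
        if (bind(lfd, (struct sockaddr *) &sun, sizeof(sun)) < 0 ||
            listen(lfd, 10) < 0) {
                perror("bind/listen");
                return 1;
        }

        for (;;) {
                /* every accept() yields an independent fd; this is where
                 * per-MS state would be allocated and tied to that fd */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd < 0)
                        continue;
                printf("new L1CTL client on fd %d\n", cfd);
        }
}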
Do you think this is worth it? I really like the idea of having to start just one program, rather than configuring each "mobile" instance with a different l1sap socket name, remembering to shut down the matching processes together, etc.
Theoretically one could stay in the current 1:1 model, but I somehow find 1:N more appealing.
Regards, Harald
Hi Harald,
that is great news. First of all, thank you for taking the time to clean up and merge the virt-phy. I really appreciate that.
You have a point. The virtual layer's usability would greatly increase if it supported more than a few concurrent MSs.
> So if you want to run multiple MSs, it currently seems one would have to run multiple pairs of { virt_phy, mobile }, one pair per MS. This leads to a rather large set of processes, all of which have to process their own copy of the same UDP messages received on the multicast socket.
Your assumption is correct: currently you indeed need a pair of { virt_phy, mobile } for each connected MS instance. I really like the idea of the 1-to-n relation and I think it is worth implementing, especially regarding the load-testing use case. Currently, setting this up is probably not much fun... I also thought about it back then, but had no concrete idea how to implement it.
Currently, we use a multicast client socket to receive messages on the downlink of the virt_phy. So each virt_phy instance will receive all messages on the downlink, like on the "real" air interface, and has to decide by itself whether it wants to process them or not. As this decision is often made by upper layers that are implemented in layer2/3, the messages have to be forwarded to the mobile app. For example, RR has to check whether a paging message is for me or not.
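For reference, this is roughly how each virt_phy subscribes to the shared downlink today, as a simplified sketch with plain POSIX calls; group and port are placeholders and error handling is omitted:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int open_downlink(const char *group, uint16_t port)
{
        struct sockaddr_in sin = { .sin_family = AF_INET,
                                   .sin_port = htons(port),
                                   .sin_addr.s_addr = htonl(INADDR_ANY) };
        struct ip_mreq mreq;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int one = 1;

        /* allow several virt_phy instances to bind the same port */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        bind(fd, (struct sockaddr *) &sin, sizeof(sin));

        /* join the simulated Um downlink multicast group */
        mreq.imr_multiaddr.s_addr = inet_addr(group);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
        return fd;
}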
> Connection-oriented unix domain sockets can very well handle multiple connections
Now if we have only one instance of virt_phy connected to multiple instances of "mobile", messages received on the virt_phy still have to be broadcast to all "mobile" instances somehow. So if you have multiple connections on the L1CTL socket, you would still have to broadcast the messages to all of them. I thought this is why we used the multicast socket, but it seems it is used in the wrong place then: if we only have one virt_phy instance, the GSMTAP socket would not need to be a multicast socket, am I right?
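Something like the following is what I imagine for the fan-out. Only a sketch: struct l1ctl_client and the client list are made up, only the llist macros are the real libosmocore API:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include <osmocom/core/linuxlist.h>

struct l1ctl_client {
        struct llist_head entry;
        int fd;                         /* accepted L1CTL connection */
};

static LLIST_HEAD(clients);

/* deliver one received downlink message to every connected "mobile" */
static void downlink_bcast(const uint8_t *msg, size_t len)
{
        struct l1ctl_client *client;

        llist_for_each_entry(client, &clients, entry)
                write(client->fd, msg, len);
}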
> So if the MS-specific state (like the scheduler / L1CTL related state) were properly separated from the GSMTAP side
I think we then need a separate physical state for each connected L1CTL socket. Currently, the MS physical state is not properly separated from the GSMTAP socket state. That's true.
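In other words, the split would look something like this; struct and member names are only illustrative, not the current virt_phy code:

#include <osmocom/core/linuxlist.h>

struct phy_state {                      /* one per virt_phy process */
        int gsmtap_fd;                  /* shared multicast socket */
        struct llist_head ms_list;      /* all connected MSs */
};

struct ms_state {                       /* one per L1CTL connection */
        struct llist_head entry;
        struct phy_state *phy;          /* back-pointer to shared state */
        int l1ctl_fd;
        /* scheduler, sync and dedicated-channel state go here */
};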
As far as I know, mobile can also be configured to run multiple MS instances. I already tried to use this (https://github.com/osmocom/osmocom-bb/blob/master/src/host/virt_phy/example_configs/osmocom-bb-mobilex2.cfg), but had some problems with receiving and sending messages over the virt_phy. One of the MSs usually stalled after some time, I think. Maybe this could also be used to have only one instance of mobile and virt_phy simulating multiple MSs in the end.
Kind Regards, Basti
Hi Sebastian,
On Sat, Jul 15, 2017 at 05:23:33PM +0200, Sebastian Stumpf wrote:
> Your assumption is correct: currently you indeed need a pair of { virt_phy, mobile } for each connected MS instance. I really like the idea of the 1-to-n relation and I think it is worth implementing, especially regarding the load-testing use case. Currently, setting this up is probably not much fun...
Well, one could always add some kind of helper program/script that takes care of starting the individual tuples of programs, allocating unix domain socket names to them, ... - but rather than investing time in that direction, I think it's more elegant the other way around.
> I also thought about it back then, but had no concrete idea how to implement it. Currently, we use a multicast client socket to receive messages on the downlink of the virt_phy. So each virt_phy instance will receive all messages on the downlink, like on the "real" air interface, and has to decide by itself whether it wants to process them or not.
Yes, this is still the right choice, and I've always argued in favor of multicast sockets. The rationale is that you can start any number of different programs and they can all participate in the simulated RF layer.
> > Connection-oriented unix domain sockets can very well handle multiple connections
> Now if we have only one instance of virt_phy connected to multiple instances of "mobile", messages received on the virt_phy still have to be broadcast to all "mobile" instances somehow. So if you have multiple connections on the L1CTL socket, you would still have to broadcast the messages to all of them.
Correct, at least while multiple MSs are listening to the same downlink BCCH/CCCH frames. Once they are in dedicated mode, their per-MS state inside virt_phy should filter out all messages that are not for the specific timeslot+subslot (translating to a chan_nr) that their virtual PHY is on.
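The filter itself then becomes trivial. A sketch, assuming the hypothetical per-MS state from your mail grows dedicated/timeslot/sub_slot members; only struct gsmtap_hdr is the real one from libosmocore:

#include <stdbool.h>
#include <stdint.h>
#include <osmocom/core/gsmtap.h>

/* hypothetical fields on the per-MS state */
struct ms_state {
        bool dedicated;
        uint8_t timeslot, sub_slot;
};

/* decide if a downlink GSMTAP message is relevant for this MS */
static bool ms_wants_msg(const struct ms_state *ms,
                         const struct gsmtap_hdr *gh)
{
        if (!ms->dedicated)     /* idle mode: pass BCCH/CCCH up to L23 */
                return true;
        /* dedicated mode: only our own timeslot + subslot */
        return gh->timeslot == ms->timeslot &&
               gh->sub_slot == ms->sub_slot;
}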
Having multicast "solve" the broadcast problem is one way to do it. And yes, the current approach has some beauty to it.
On the other hand, from the usability point of view, having to configure different unix domain sockets for each pair of { virt_phy, mobile } seems quite clumsy. As stated above, one approach could be a launcher that starts both and takes care of allocating a (random) unix domain socket path. Or we could even have an option where the virt_phy is fork+exec'ed from "mobile". Maybe those options are actually simpler...
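The fork+exec variant could be as simple as this sketch; the --l1ctl-sock option name is just an assumption for illustration, and a real implementation would wait for the socket to appear before connecting:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* spawn a private virt_phy; returns its pid, fills in the socket path */
static pid_t spawn_virt_phy(char *sock_path, size_t len)
{
        pid_t pid;

        /* derive a unique socket path from our own pid */
        snprintf(sock_path, len, "/tmp/osmocom_l2.%d", getpid());

        pid = fork();
        if (pid == 0) {
                /* child: option name assumed, not the actual CLI */
                execlp("virt_phy", "virt_phy", "--l1ctl-sock", sock_path,
                       (char *) NULL);
                _exit(1);       /* exec failed */
        }
        return pid;             /* parent connects to sock_path */
}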
> I thought this is why we used the multicast socket, but it seems it is used in the wrong place then: if we only have one virt_phy instance, the GSMTAP socket would not need to be a multicast socket, am I right?
You still want multicast on the GSMTAP layer as you want to have multiple BTSs, and for sure those are going to be separate OsmoBTS processes.
> > So if the MS-specific state (like the scheduler / L1CTL related state) were properly separated from the GSMTAP side
> I think we then need a separate physical state for each connected L1CTL socket. Currently, the MS physical state is not properly separated from the GSMTAP socket state. That's true.
There are multiple data structures that look like that separation was intended, but then there are unfortunately some global variables and "layering violations" that entangle them with each other :/
> As far as I know, mobile can also be configured to run multiple MS instances. I already tried to use this (https://github.com/osmocom/osmocom-bb/blob/master/src/host/virt_phy/example_configs/osmocom-bb-mobilex2.cfg), but had some problems with receiving and sending messages over the virt_phy. One of the MSs usually stalled after some time, I think. Maybe this could also be used to have only one instance of mobile and virt_phy simulating multiple MSs in the end.
I'm not sure this is the most "attractive" way to go ahead. It should actually already work right now: each MS in "mobile" then uses a different /tmp/osmocom_l2 socket, each leading to a separate virt_phy. This is how it is done with two physical phones in the traditional setup.
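For reference, that traditional setup then looks roughly like this in the mobile config (assuming the layer2-socket-path command; paths are arbitrary, one virt_phy bound to each):

ms 1
 layer2-socket-path /tmp/osmocom_l2.1
ms 2
 layer2-socket-path /tmp/osmocom_l2.2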
I'm not planning any immediate work here, as I'm now busy with implementing test cases in TTCN-3. My code there basically just subscribes to the downlink multicast group and processes the GSMTAP frames to validate the scheduling of various SYSTEM INFORMATION messages, together with dynamically changing the configuration over the VTY.
Right now I'm looking at interfacing L1CTL from TTCN-3, so one could also establish a dedicated channel from the test cases, which is useful for topics like simulating RACH load and validating LAPDm (by sending hand-crafted LAPDm frames rather than using the libosmocore lapdm code).
Hi all,
On Sun, Jul 16, 2017 at 02:47:49PM +0200, Harald Welte wrote:
> I'm not planning any immediate work here [...]
I was annoyed/stuck with some other work and gave the "multiple MS" support a chance last night. It took a bit longer than expected, but I was able to complete it this morning. We can now have any number of MSs connect to the L1CTL socket in parallel. Logging has been extended to make sure it always includes some context information showing which MS a given log message is for.
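The context is added via a small wrapper macro, roughly like this sketch; LOGPMS, the subsystem and the ms->nr field are illustrative, only LOGP itself is the libosmocore API:

#include <osmocom/core/logging.h>

/* prefix every log line with the number of the MS it belongs to */
#define LOGPMS(ms, level, fmt, args...) \
        LOGP(DLGLOBAL, level, "MS %u: " fmt, (ms)->nr, ## args)

which is then used like LOGPMS(ms, LOGL_INFO, "paging received\n").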
It hasn't been tested extensively so far, but it was working fine here with multiple concurrent users.
I'm right now tracking down a problem on the osmo-bts-virtual side, where it appears not to properly close dedicated channels after the MS disappears. Basically, the link timeout doesn't seem to work in the virtual case, i.e. if the MS doesn't send any uplink frames, the lchan stays open.
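What I would expect there is the usual GSM 05.08 radio link counter S driven by uplink SACCH reception: start at RADIO-LINK-TIMEOUT, decrement by 1 per missing/bad SACCH, increment by 2 (capped) per good one, and release the lchan at 0. A sketch of that logic with made-up names, not the actual osmo-bts code:

#include <stdbool.h>

struct lchan_link {
        int s;          /* current counter value */
        int s_max;      /* RADIO-LINK-TIMEOUT as configured */
};

/* call once per expected uplink SACCH period; returns true if the
 * lchan must be released because the MS is considered gone */
static bool sacch_tick(struct lchan_link *ll, bool sacch_ok)
{
        if (sacch_ok) {
                ll->s += 2;             /* good SACCH: +2, capped */
                if (ll->s > ll->s_max)
                        ll->s = ll->s_max;
        } else if (--ll->s <= 0) {
                return true;            /* radio link failure */
        }
        return false;
}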