Hi all,
Has anyone tried the Osmo GSM stack (including the trx) with the LimeNet
Micro with the Pi compute module? My question is whether it is stable enough
for any real deployment, or whether it is just another toy for the lab.
With the USB3 LimeSDR, after a few hours I _always_ get a broken trx,
with lots of failure messages...
Thanks,
Rafael Diniz
Rhizomatica
Hello
I am trying to use OpenBSC with a USRP X310. I have configured and executed osmo-trx (from https://github.com/osmocom/osmo-trx) and osmo-bts (from https://github.com/osmocom/osmo-bts), and I can see the network on my phone but cannot register to it, even though I have set the auth policy to accept-all. I have attached my config files for reference.
BR
Snehasish
Hi Harald, Everyone,
On 09/04/2019 09.32, Harald Welte wrote:
> could you outline somewhere (mailinglist? maybe a wiki page?) how this
> would work together?
I've started adding code coverage support in changes 13496 and 13551 for
OS#1987.
Initially my goal was to generate reports only manually using the gcov and
lcov tools, but after submitting my first change to gerrit I noticed
that Jenkins is used for CI/CD, so I thought it might be useful to
generate these reports as part of the CI process. Jenkins has a
Cobertura plugin [3] which collects coverage data in XML format [4],
visualizes it and tracks coverage metrics over time/builds. I used the
lcov_cobertura tool to convert lcov's coverage format to XML.
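For illustration, the pipeline I have in mind boils down to roughly the
following (just a sketch; the exact configure flags and file names may end
up slightly different in the actual changes):

$ ./configure CFLAGS="--coverage" LDFLAGS="--coverage"  # build with gcc coverage instrumentation
$ make check                                            # running the tests produces the .gcda data
$ lcov --capture --directory . --output-file coverage.info
$ lcov_cobertura coverage.info --output coverage.xml    # Cobertura XML for the Jenkins plugin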
> This Dockerfile which you're patching is used for build verification
> of gerrit patches. Do you want to run coverage reports on every build
> of every patch before commit?
Yes, that was my idea. Other projects that I've used or contributed to
in the past use a workflow where code coverage data is generated
in every CI build and then sent to a coverage service, which analyses it
and posts a message back to the PR about the coverage results - whether
the coverage has increased or decreased, a diff between reports, etc.
For examples see [1], [2], [5], [6].
> I would argue it makes more sense to have coverage reports done once
> per day?
>
In my opinion the coverage reports are more useful when they are closely
integrated into the code review process and you receive information on how
the coverage has changed together with the build status in the code review
system. When reviewing a change the report can give you more insight,
but you should still be able to merge it no matter what the coverage
report says.
If you find these reports not very useful during the code review process,
or the changes too disruptive to your workflow, then we could build them
once per day in the Osmocom_OBS_nightly (or another) job, or not build
them in CI at all.
BTW I just noticed that libosmocore is not built in the docker containers.
Regards,
Vasil
[1] https://coveralls.io/github/neovim/neovim
[2] https://codecov.io/gh/scrapy/scrapy
[3] https://wiki.jenkins.io/display/JENKINS/Cobertura+Plugin
[4] https://docs.openstack.org/infra/jenkins-job-builder/publishers.html#publis…
[5] https://github.com/vlm/asn1c/pull/295#issuecomment-420464856
[6] https://github.com/mouse07410/asn1c/pull/22#issuecomment-298964785
Dear all,
the following question popped up on IRC:
22:08 < roh> is there some documentation how the minLatency() call in osmo-trx works?
22:10 < roh> it looks to me like the values we use for usrp1 and lms are not correct/cargocult
I agree, the values are likely wrong for the non-UHD devices.
Interestingly, they are auto-tuned at runtime, see the following piece in Transceiver.cpp:
	// if underrun hasn't occurred in the last sec (216 frames) drop
	// transmit latency by a timeslot
	if (mTransmitLatency > mRadioInterface->minLatency()) {
		if (radioClock->get() > mLatencyUpdateTime + GSM::Time(216,0)) {
			mTransmitLatency.decTN();
			LOG(INFO) << "reduced latency: " << mTransmitLatency;
			mLatencyUpdateTime = radioClock->get();
		}
	}
However, that block only applies to devices with TX_WINDOW_USRP1 set, that is
USRP1, B100 and B2xx devices.
In fact, I cannot find any user of the minLatency() method outside the context
of TX_WINDOW_USRP1, and hence I think it doesn't matter what kind of magic
value the LMS driver supplies?
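A quick way to double-check that claim against a current osmo-trx tree
(example invocation only):

$ git grep -n 'minLatency\|TX_WINDOW_USRP1'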
So at least I conclude:
* it's only ever used on USRP1, B100 and B2xx, and it is dynamically adjusted
at runtime on those platforms
Regards,
Harald
--
- Harald Welte <laforge(a)gnumonks.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Hi all,
we just ran out of storage on one of our build slaves. /var/lib/docker
inside the debian9 lxc had grown to 240 GB while actually it should be
much smaller. "docker system prune" helped to free 35 GB of build cache,
but the remaining 217 GB was still way too much compared to what it should
use according to "docker system df -v" - and also much more than what other
build slaves were using (~13 GB).
It turned out that the files were in /var/lib/docker/vfs, and as I hadn't
seen that on any other system yet, I investigated:
* if the physical host hosting the lxc container doesn't have 'overlay.ko'
loaded, overlay/overlay2 storage drivers are not available to docker
* docker "silently" falls back to using "vfs" which is basically not copy-on-write
but copy-everything-all-the-time, and which consumes massive amounts of storage
I now made sure build2.osmocom.org loads "overlay.ko" by default, and I
changed the storage driver on the debian9 lxc. This however means you
lose all containers/images, and everything has to be rebuilt from
scratch.
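For reference, the fix boils down to something like the following (rough
sketch; the exact file locations in an lxc-based setup may differ):

$ modprobe overlay                      # on the physical host
$ cat /etc/docker/daemon.json           # inside the container running dockerd
{
	"storage-driver": "overlay2"
}
$ systemctl restart docker              # images/containers created under vfs are not migrated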
How to test:
$ docker info | grep Storage
should render
Storage Driver: overlay2
If it shows vfs, something is wrong. I checked our other build slaves
and they're set up correctly.
Regards,
Harald
--
- Harald Welte <hwelte(a)sysmocom.de> http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Alt-Moabit 93
* 10559 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschaeftsfuehrer / Managing Director: Harald Welte
Looking at sending GSUP messages between MSCs via an HLR acting as forwarding
agent, I see that the current decision for GSUP message consumption is
suboptimal:
Depending on the message type sent and received, libvlr of osmo-msc forwards
GSUP messages to the MSC code, and there, again, depending on the message type,
specific callbacks get invoked.
See vlr_gsupc_read_cb() and msc_vlr_route_gsup_msg().
In current osmo-msc it might seem to make sense to first resolve the IMSI to a
vlr_subscr in vlr.c. But if osmo-msc acts as a Handover target for an inter-MSC
Handover, it should be able to handle unknown IMSIs. Also, should we ever go
for a separate SMSC process, the VLR as first stage makes no sense. Finding a
vlr_subscr is a one-liner with vlr_subscr_find_by_imsi().
I would much rather have an explicit destination entity advertised in the GSUP
messages, and an explicit common GSUP MUX stage. In other words, the VLR of
osmo-msc shouldn't act as a GSUP forwarder, it should merely be one of the GSUP
consumers, and shouldn't even be involved when the messages are intended for
inter-MSC, for USSD or for SMS use.
And finally, for GSUP error responses, for example a report that a specific
target could not be reached, it may not be possible to trivially derive the
right GSUP message consumer from the GSUP message (like "Routing Error").
Going towards that idea, I have put in place the following in my temporary dev
source tree:
enum osmo_gsup_entity {
	OSMO_GSUP_ENTITY_NONE = 0,
	OSMO_GSUP_ENTITY_HLR,
	OSMO_GSUP_ENTITY_VLR,
	OSMO_GSUP_ENTITY_ESME,
	OSMO_GSUP_ENTITY_SMSC,
	OSMO_GSUP_ENTITY_USSD, // FIXME: what's an "ESME"/"SMSC" for USSD?
	OSMO_GSUP_ENTITY_MSC_A,
	OSMO_GSUP_ENTITY_MSC_B,
	OSMO_GSUP_ENTITY_COUNT,
};
struct osmo_gsup_message {
	[...]
	enum osmo_gsup_entity source_entity;
	enum osmo_gsup_entity destination_entity;
	[...]
};
For calling the right rx_cb, we would need only an explicit target kind, but
for returning errors it is better to also include the source entity kind
explicitly.
A gsup_client_mux API:
struct gsup_client_mux_rx_cb {
	int (*func)(struct gsup_client_mux *gcm, void *data, const struct osmo_gsup_message *msg);
	void *data;
};
struct gsup_client_mux {
	struct osmo_gsup_client *gsup_client;
	/* Target clients by enum osmo_gsup_entity */
	struct gsup_client_mux_rx_cb rx_cb[OSMO_GSUP_ENTITY_COUNT];
};
int gsup_client_mux_init(struct gsup_client_mux *gcm, struct osmo_gsup_client *gsup_client);
int gsup_client_mux_tx(struct gsup_client_mux *gcm, const struct osmo_gsup_message *gsup_msg);
void gsup_client_mux_tx_error_reply(struct gsup_client_mux *gcm, const struct osmo_gsup_message *gsup_orig,
				    enum gsm48_gmm_cause cause);
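To illustrate the intended use (just a sketch; the message type shown is
made up for the example), sending from MSC-A towards MSC-B via the HLR
would then look roughly like:

struct osmo_gsup_message gsup = {
	/* hypothetical inter-MSC message type, for illustration only */
	.message_type = OSMO_GSUP_MSGT_E_PREPARE_HANDOVER_REQUEST,
	.source_entity = OSMO_GSUP_ENTITY_MSC_A,
	.destination_entity = OSMO_GSUP_ENTITY_MSC_B,
};
/* IMSI and other IEs filled in as usual, then: */
gsup_client_mux_tx(gcm, &gsup);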
For backwards compat, we would still need to do target classification by
message type, but only if no explicit destination_entity is set:
static enum osmo_gsup_entity gsup_client_mux_classify(struct gsup_client_mux *gcm,
						      const struct osmo_gsup_message *gsup)
{
	if (gsup->destination_entity)
		return gsup->destination_entity;
	/* Legacy message that lacks an explicit target entity. Guess by message type for backwards compat: */
	switch (gsup->message_type) {
	case OSMO_GSUP_MSGT_PROC_SS_REQUEST:
	case OSMO_GSUP_MSGT_PROC_SS_RESULT:
	case OSMO_GSUP_MSGT_PROC_SS_ERROR:
		return OSMO_GSUP_ENTITY_USSD;
	case OSMO_GSUP_MSGT_MO_FORWARD_SM_ERROR:
	case OSMO_GSUP_MSGT_MO_FORWARD_SM_RESULT:
	case OSMO_GSUP_MSGT_READY_FOR_SM_ERROR:
	case OSMO_GSUP_MSGT_READY_FOR_SM_RESULT:
	case OSMO_GSUP_MSGT_MT_FORWARD_SM_REQUEST:
		return OSMO_GSUP_ENTITY_SMSC;
	default:
		/* An osmo-hlr capable of forwarding inter-MSC messages always includes the target entity, so any
		 * other legacy message is for the VLR. */
		return OSMO_GSUP_ENTITY_VLR;
	}
}
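The receive path of the mux would then be something like this (again a
sketch with assumed names; the error cause used for unroutable messages is
just an example):

static int gsup_client_mux_rx(struct gsup_client_mux *gcm, const struct osmo_gsup_message *gsup)
{
	enum osmo_gsup_entity e = gsup_client_mux_classify(gcm, gsup);
	const struct gsup_client_mux_rx_cb *cb = &gcm->rx_cb[e];

	if (!cb->func) {
		/* No consumer registered for this entity: send an error back towards the sender */
		gsup_client_mux_tx_error_reply(gcm, gsup, GMM_CAUSE_MSGT_NOTEXIST_NOTIMPL);
		return -1;
	}
	return cb->func(gcm, cb->data, gsup);
}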
We'd have:
HLR <-> VLR
ESME <-> SMSC
USSD <-> USSD (names??)
MSC_A <-> MSC_B
Thanks for your thoughts.
~N