Hi,
My name is Brackley Cassinga from DRC. We run a community network called
Pamoja Net, where we offer GSM services using the Osmocom open source
software and an OC base station.
Recently I tried to install another base station like the one already
installed, but I could not find any resource guiding me through all the
steps needed to run a NITB on a base station.
I am currently running Ubuntu, and I would appreciate it if you could guide
me through the installation of the BSC, HLR and MSC, in order to run a basic
GSM network.
Thank you. Regards
--
Ir Brackley Heshima Casinga Pacifique
CEO and Founder of Kwanza Technologies
KwanzaTechnologies, GlobalElectronics
+243977265291 | Pcassinga(a)gmail.com | brackley(a)ensemblepourladifference.org
www.kwantechnologies.jimdosite.com
Skype: Brackley cassinga
Av Semliki N 43
Hello All,
I am working on getting OpenBSC connected to a third-party MSC. I read in
the Osmocom documentation that I can use the BSC-only mode, i.e. run
osmo-bsc instead of osmo-nitb.
Currently I have osmo-trx-uhd, osmo-bts-trx, osmo-bsc and osmo-stp
configured and running, but I see a lot of the error logs below in the
osmo-stp console:
```
DLSS7 <000c> osmo_ss7.c:1468 asp-asp-dyn-0: xua_srv_conn_cb():
sctp_recvmsg() returned 56 (flags=0x80)
DLM3UA <000f> m3ua.c:722 asp-asp-dyn-0: Received M3UA Message (XFER:DATA)
DLM3UA <000f> m3ua.c:541 asp-asp-dyn-0: m3ua_rx_xfer
DLM3UA <000f> m3ua.c:580 asp-asp-dyn-0: m3ua_rx_xfer(): M3UA data header:
opc=337=0.42.1 dpc=185=0.23.1
DLSS7 <000c> osmo_ss7_hmrt.c:278 m3ua_hmdc_rx_from_l2(): dpc=185=0.23.1 not
local, message is for routing
DLSS7 <000c> osmo_ss7_hmrt.c:258 MTP-TRANSFER.req for DPC 185: no route!
```
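For context, the last line means osmo-stp found no routing-table entry for
destination point code 0.23.1 (decimal 185). If a route is indeed what is
missing, I guess the osmo-stp.cfg would need roughly something like the
following (the AS/ASP names are placeholders, and I am not certain this
matches the exact VTY syntax of the installed version):
```
cs7 instance 0
 as as-msc m3ua
  asp asp-msc
  routing-key 0 0.23.1
 route table system
  update route 0.23.1 7.255.7 linkset as-msc
```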
On the BSC console, I am seeing a lot of:
```
<0007> a_reset.c:106 A-RESET(msc-0)[0xb76ce0]{DISC}: (re)sending BSSMAP
RESET message...
<0007> osmo_bsc_sigtran.c:93 Sending RESET to MSC:
RI=SSN_PC,PC=0.23.1,SSN=BSSAP
```
Can anyone identify what the issue is here?
Much appreciated!
Regards,
Weiqi
I think here is a bug:
char *osmo_quote_str_c(const void *ctx, const char *str, int in_len)
{
        char *buf = talloc_size(ctx, OSMO_MAX(in_len+2, 32));
        if (!buf)
                return NULL;
        return osmo_quote_str_buf2(buf, 32, str, in_len);
}
We may allocate more than 32 bytes (see the OSMO_MAX()), but still only
allow writing 32 bytes?
Looks like the allocated length should be stored in a local variable and
passed to osmo_quote_str_buf2().
And if I'm right, what is the 32 for? "At least 32"?
~N
Dear Osmocom community,
A question about this part of the code - the function sgsn_ggsn_ctx_drop_pdp:
http://git.osmocom.org/osmo-sgsn/tree/src/gprs/gprs_sgsn.c#n720
The second branch of the condition (hard dropping) is taken even when the phone
is registered, and hence no Deactivate PDP Context Request is sent to the phone.
Because of that, the phone doesn't know that the PDP context was deleted on the
network side and keeps acting as if it were still active -> PS isn't working
when this happens.
Any suggestions/thoughts on how this can be fixed?
Thanks
Kind regards,
Mykola
Hi all,
Has anyone tried the Osmo GSM stack (including the trx) with the LimeNet
Micro with the Pi compute module? My question is whether it is stable enough
for any real deployment, or just another toy for the lab.
With the USB3 LimeSDR, after a few hours I _always_ get a broken trx,
with lots of failure messages...
Thanks,
Rafael Diniz
Rhizomatica
Hi,
As I'm looking into adding a generic cell modem framework to the linux
kernel (to create session netdevs etc.), I started looking for a
metadata encapsulation, a la Radiotap (I'm a wifi guy :-) ).
So obviously, I found gsmtap, but for my use case it doesn't really
address most of the interesting data, and it got me wondering. So a few
questions, if I may:
1) Why the design with encapsulating it in UDP? Radiotap is just a raw
header without IP etc. in front, and you use it with tcpdump,
wireshark or similar tools on the local system. What's the value in
having something "network transparent"?
2) The format of gsmtap doesn't seem very extensible, but I guess a new
version could be made that has a TLV-based format or so. I'd have
argued that a new version isn't even needed, but the length field is
only 8 bits right now which seems too short.
(speaking of versions - the docs say "version, set to 0x01 currently"
but "#define GSMTAP_VERSION 0x02")
3) Does the packet data follow the gsmtap header? It's not really clear
to me based on reading the wireshark code.
In particular, the data I'm thinking of is higher-level things, like the
session ID for a frame when it's going through the kernel, or perhaps a
flow label on RX, etc.
Also, vendor-specific data would be useful, e.g. to encapsulate the
device-specific headers like QMI, where such metadata is encapsulated in
a vendor- or device-specific way, which you'd want to see for debugging
certain things, but for other things the generic "session ID" type
information - encoded in a vendor-agnostic way - would be better to show
in wireshark.
Since it doesn't seem possible to use gsmtap in the current version,
would it make sense to define a new gsmtap that (say) has version 3 or
something, followed by an overall length and TLVs? I do note that this
wouldn't be compatible with the current wireshark code as it doesn't
check the version, just shows it...
Or would it make more sense to define a new ARPHDR_WWANTAP like
ARPHDR_IEEE80211_RADIOTAP and just use that instead of encapsulating in
IP/UDP, and then have a completely new (extensible) protocol inside of
that? I'm not really sure I see the point of UDP encapsulation anyway.
Thanks,
johannes
This is an idea about logging context, the logging filter and FSM instances.
The idea is to make logging context a first-class citizen of osmo_fsm_inst and
of LOGP(), in order to get cleaner FSM ids, better configurable logging and a
more universal and reliable approach to logging filters.
At events like 35c3 I want to have useful logging filters, and hence any
production operator will want them as well. The attempt with this RFC is to get
them in an elegant way that also improves the current logging context and stops
overloading FSM IDs.
== Status quo ==
- Log filters:
For a long time we have had a logging filter mechanism that requires
setting a global "context pointer" to some object, in every code path. It is
not trivial to use, and we have mostly neglected using and testing it, so it
doesn't work well in practice. See enum log_filter_index, enum log_ctx_index,
log_set_context() in libosmocore and the filter_fn() in msc_main.c as an
example.
- Logging context:
For a long time, we added logging context to each and every LOGP() line, like:
LOGP(DMSC, LOGL_DEBUG, "(%s %s %d %d) Actual message",
     subscr->ctx1, osmo_foo_name(subscr->ctx2), trans->ctx3, another->ctx);
Since the osmo_fsm_inst showed up, I have more and more used the FSM instance's
id as logging context:
#define LOG_MY_OBJ(my_obj, loglevel, fmt, args...) \
        LOGPFSML(my_obj->fi, loglevel, fmt, ##args)

my_obj_update_id(my_obj)
{
        osmo_fsm_inst_update_id_f(my_obj->fi, "%s-%s-%d-%d",
                my_obj->ctx1, osmo_foo_name(my_obj->ctx2), my_obj->ctx3, my_obj->ctx);
}

stuff()
{
        LOG_MY_OBJ(my_obj, LOGL_DEBUG, "Actual message");
}
The advantages are obvious:
- less time spent composing log message context when developing code.
- the context string is composed only once and re-used; fewer CPU cycles are
  spent composing it.
- it is now hard to forget adding useful log context.
But this also has disadvantages:
- Long lines
Since I want to be able to trace logging output by various items, my logging
context has often grown to a size that makes the log uncomfortable to read
unless the terminal is really wide.
For example, in osmo-msc, the context now often looks like this:
DMSC DEBUG msc_a(IMSI-901700000014705:MSISDN-1001:TMSI-0x8D91D71B:GERAN-A-0:LU)[0x61200000b620]{MSC_A_ST_AUTH_CIPH}: Received Event MSC_A_EV_AUTHENTICATED (vlr_lu_fsm.c:741)
It includes the IMSI, MSISDN, TMSI, the RAN conn info and the complete-l3 type.
An extreme example is
DCC DEBUG rtp_stream(IMSI-901700000014705:MSISDN-1001:TMSI-0x8D91D71B:GERAN-A-1:CM_SERVICE_REQ:trans-8:call-2147483649:RTP_TO_RAN:CI-BBBD5102:remote-port-not-sent:codec-not-sent:local-192-168-2-4-42000:remote-192-168-2-3-23004)[0x612000015e20]{ESTABLISHING}: Committing: Tx MDCX to update the MGW: updating remote-RTP-IP-port codec (rtp_stream.c:330)
WTF. But of course, for an RTP stream, it is interesting to see what subscriber
it is for, but also what callref and CC transaction, also the MGW's endpoint
CI, whether it is to RAN or CN, and which RTP address and ports are involved.
Now, a log line like this is pretty much guaranteed to be truncated in the
display of gsmtap_log in wireshark. Very irritating.
We would have to remove select items of logging context from the FSM instance
ID to strike a good balance of length and usefulness. Each time we tweak it,
all the logging output in test programs changes ... ugh.
- Special characters
Since an FSM instance can only have a limited set of characters for id, we
can't add IP addresses as 1.2.3.4, but need to use 1-2-3-4. And so on and so
on.
- ID obfuscation
If I want to address a specific osmo_fsm_inst on the CTRL interface, I need to
pass the entire long id. I don't know of anyone using this in practice, but
abusing the id as logging context makes this really hard.
I think this is the worst side effect of my adding elaborate logging context to
FSM instance ids.
- log reading
Various lengths of logging context make the log line's actual message show on
various "indentation" levels. Instead I would like to be able to configure the
log context part of a log message to show in the *end* of a log line instead of
at the start, so that the short messages like "New conn" or "Rx Location
Updating Request" show lined up one below each other; or alternatively
introduce a common chunk of indentation spacing to line things up. That's
currently not possible if both the log context and log message are just one
arbitrary string fed to LOGP().
== IDEA ==
Reflecting on this, I formed the idea of moving logging context to a separate
array of strings for each object. The most common usage would be to tie this
array of context information to an osmo_fsm_inst.
I'm not sure how far we could take this before we start producing too much
load. It would of course be the aim to find a good balance of effort and
usefulness, or think of nifty simplifications or optimizations.
It's just a vague idea, but I guess it would look something like this in pseudo
code:
my_obj_update_id(my_obj)
{
        set_log_context(my_obj->logctx, "IMSI", my_obj->imsi);
        set_log_context(my_obj->logctx, "MSISDN", my_obj->vlr->msisdn);
        set_log_context(my_obj->logctx, "RTP-local",
                        osmo_sockaddr_str_name(my_obj->rtp_stream->rtp_local));
        ...
}

set_log_context(ctx, name, val)
{
        slot = log_context_find_or_create_slot(ctx, name);
        osmo_talloc_replace_str(ctx, &ctx[slot], val);
        /* or osmo_strlcpy() to a fixed-size char[]? */
}

#define LOG_MY_OBJ(my_obj, level, fmt, args...) \
        LOGP_CTX(DMSC, my_obj->ctx, level, fmt, ##args)

use()
{
        LOG_MY_OBJ(my_obj, LOGL_DEBUG, "Actual message");
        /* i.e. most current log macros would be invoked in the same way */
}
And then allow logging configuration like
% hide all logging context in log output, except MSISDN and RTP-local:
> logging print context none
> logging print context MSISDN
> logging print context RTP-local
% show only logging lines where this exact IMSI is part of the context
> logging filter-rule IMSI == 901700000014705
In gsmtap_log packets, we could add a listing of key-value items for log
context, hopefully allowing to show / hide each individual context by name in a
wireshark UI.
Instead of repeating context like the IMSI in various FSM instance IDs, the
logging context could be made chainable, i.e. internally, we could reference
one logging context from another. For example, the rtp_stream FSM instance
would only add the RTP address to its logging context, and then "upwards"
reference the call_leg FSM's context. That would add the callref, and
"upwards" reference the subscriber's context, which adds IMSI, MSISDN and RAN
conn info to the rtp_stream logging context. (We might be able to do this
implicitly from osmo_fsm_inst parent/child relations that we already have.)
It might even be thinkable to allow traversing all across related context info,
so that filtering for a given RTP port could cause even high-level objects
(like the vlr_subscr) to search through all children's log ctx information to
see whether any of them defines this RTP-local context, and thus show all
logging for a conn that currently uses this RTP port.
Premature optimization... To run each log line through a set of filters would
potentially take up a lot of CPU cycles. For that I also have an idea to
resolve the decision to log or not to log only once per target object and cache
that result. For example, when I change the logging filter to pick a specific
MSISDN, an internal token is incremented. If I hit an object's logging, I
decide whether it passes the logging filter, and store the outcome as
true/false with the current filter's token value in the object. Next time I hit
that object and the token is still the same, the filter must also have the same
result, and I can just use the stored true/false. If the user changes the
filter, the token increments and on next logging, each object notices that the
filter needs to be re-run in full. If the object's context info changes, reset
the object's stored token to also rerun the filter on next logging. This is
quite vague, especially not clear whether each object with logging context
would have to store tokens for each and every log target separately...
Premature optimization: require the log context names to be compile-time string
literals, so that we can just compare name_a == name_b without invoking
strcmp().
A general advantage is that we would always see the relevant context for
logging, and we would immediately notice bugs in the context, thus we would
always ensure reliable logging filters "for free".
An easy way out of implementing log filters in osmocom would be to simply not
have them, just provide well-defined log context in gsmtap_log, in a way that
wireshark can parse. Users can then use wireshark rules to filter the log. The
disadvantage here is that we would have to first compose all logging, instead
of nicely skipping entire log lines before they are even composed.
So, this is so far just an idea I would like to run by you.
Even if the solutions might still be unworthy, I think it is worthwhile to be
aware of the implications I highlighted above.
~N
Hello
I am trying to use OpenBSC with a USRP X310. I have configured and executed osmo-trx (from https://github.com/osmocom/osmo-trx) and osmo-bts (https://github.com/osmocom/osmo-bts), and I can see the network on my phone but can't register to it, even though I have set the auth policy to accept-all. I have attached my config files for reference.
BR
Snehasish
Hi Harald, Everyone,
On 09/04/2019 09.32, Harald Welte wrote:
> could you outline somewhere (mailinglist? maybe a wiki page?) how this
> would work together?
I've started adding code coverage support in changes 13496 and 13551 for
OS#1987.
Initially my goal was to generate reports only manually, using the gcov and
lcov tools, but after submitting my first change to gerrit I noticed
that Jenkins is used for CI/CD, so I thought it might be useful to
generate these reports as part of the CI process. Jenkins has a
Cobertura plugin [3] which collects coverage data in XML format [4],
visualizes it and tracks coverage metrics over time/builds. I used the
lcov_cobertura tool to convert lcov's coverage format to XML.
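As a rough sketch, the per-build steps would be something like the following
(the configure flags and paths are illustrative; the real options depend on
the project):
```shell
# build with gcov instrumentation and run the test suite
./configure CFLAGS="--coverage" LDFLAGS="--coverage"
make && make check

# collect coverage data with lcov, dropping system headers
lcov --capture --directory . --output-file coverage.info
lcov --remove coverage.info '/usr/*' --output-file coverage.info

# convert to Cobertura XML for the Jenkins plugin
lcov_cobertura coverage.info --output coverage.xml
```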
> This Dockerfile which you're patching is used for build verification
> of gerrit patches. Do you want to run coverage reports on every build
> of every patch before commit?
Yes, that was my idea. Other projects that I've used or contributed to
in the past use a workflow where code coverage data is generated
in every CI build, then sent to a coverage service, which analyzes it
and posts a message back in the PR about the coverage results - whether
coverage has increased or decreased, a diff between reports, etc. For
examples see [1], [2], [5], [6].
> I would argue it makes more sense to have coverage reports done once
> per day?
>
In my opinion the coverage reports are most useful when they are closely
integrated into the code review process, so that you receive information on
how coverage has changed together with the build status in the code review
system. When reviewing a change, the report can give you more insight,
but you should still be able to merge it no matter what the coverage
report says.
If you find these reports not very useful during the code review process
or the changes too disruptive to your workflow then we could build them
once per day in the Osmocom_OBS_nightly (or another) job or not build
them at all in the CI.
BTW I just noticed that libosmocore is not built in the docker containers.
Regards,
Vasil
[1] https://coveralls.io/github/neovim/neovim
[2] https://codecov.io/gh/scrapy/scrapy
[3] https://wiki.jenkins.io/display/JENKINS/Cobertura+Plugin
[4]
https://docs.openstack.org/infra/jenkins-job-builder/publishers.html#publis…
[5] https://github.com/vlm/asn1c/pull/295#issuecomment-420464856
[6] https://github.com/mouse07410/asn1c/pull/22#issuecomment-298964785
Dear all,
the following question popped up on IRC:
22:08 < roh> is there some documentation how the minLatency() call in osmo-trx works?
22:10 < roh> it looks to me like the values we use for usrp1 and lms are not correct/cargocult
I agree, the values are likely wrong for the non-UHD devices.
Interestingly, they are auto-tuned at runtime, see the following piece in Transceiver.cpp:
  // if underrun hasn't occurred in the last sec (216 frames) drop
  // transmit latency by a timeslot
  if (mTransmitLatency > mRadioInterface->minLatency()) {
      if (radioClock->get() > mLatencyUpdateTime + GSM::Time(216,0)) {
          mTransmitLatency.decTN();
          LOG(INFO) << "reduced latency: " << mTransmitLatency;
          mLatencyUpdateTime = radioClock->get();
      }
  }
However, that block only applies to devices with TX_WINDOW_USRP1 set, that is
USRP1, B100 and B2xx devices.
In fact, I cannot find any user of the minLatency() method outside the context
of TX_WINDOW_USRP1, and hence I think it doesn't matter what kind of magic
value the LMS driver supplies?
So at least I conclude:
* it's only ever used on USRP1, B100 and B2xx, and it is dynamically adjusted
at runtime on those platforms
Regards,
Harald
--
- Harald Welte <laforge(a)gnumonks.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Hi all,
we just ran out of storage on one of our build slaves. /var/lib/docker
inside the debian9 lxc had grown to 240 GB, while it should actually be
much smaller. "docker system prune" helped to free 35 GB of build cache,
but the remaining 217 GB was still way too much compared to what it should
use as per "docker system df -v" - and also much more than other build
slaves were using (~13 GB).
It turned out that the files were in /var/lib/docker/vfs, and as I didn't
see that yet on other systems, I investigated:
* if the physical host hosting the lxc container doesn't have 'overlay.ko'
loaded, overlay/overlay2 storage drivers are not available to docker
* docker "silently" falls back to using "vfs" which is basically not copy-on-write
but copy-everything-all-the-time, and which consumes massive amounts of storage
I now made sure build2.osmocom.org loads "overlay.ko" by default, and I
changed the storage driver on the debian9 lxc. This however means you
lose all containers/images, and everything has to be rebuilt from
scratch.
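To pin the driver explicitly, so that dockerd fails loudly at startup
instead of silently falling back to vfs, the standard daemon option can be
set in /etc/docker/daemon.json (followed by a daemon restart):
```
{
    "storage-driver": "overlay2"
}
```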
How to test:
$ docker info | grep Storage
should render
Storage Driver: overlay2
If it shows vfs, something is wrong. I checked our other build slaves
and they're set up correctly.
Regards,
Harald
--
- Harald Welte <hwelte(a)sysmocom.de> http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Alt-Moabit 93
* 10559 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschaeftsfuehrer / Managing Director: Harald Welte
Looking at sending GSUP messages between MSCs via an HLR acting as forwarding
agent, I see that the current decision for GSUP message consumption is
suboptimal:
Depending on the message type sent and received, libvlr of osmo-msc forwards
GSUP messages to the MSC code, and there, again, depending on the message type,
specific callbacks get invoked.
See vlr_gsupc_read_cb() and msc_vlr_route_gsup_msg().
In current osmo-msc it might seem to make sense to first resolve the IMSI to a
vlr_subscr in vlr.c. But if osmo-msc acts as a Handover target for an inter-MSC
Handover, it should be able to handle unknown IMSIs. Also, should we ever go
for a separate SMSC process, the VLR as first stage makes no sense. Finding a
vlr_subscr is a one-liner with vlr_subscr_find_by_imsi().
I would much rather have an explicit destination entity advertised in the GSUP
messages, and an explicit common GSUP MUX stage. In other words, the VLR of
osmo-msc shouldn't act as a GSUP forwarder, it should merely be one of the GSUP
consumers, and shouldn't even be involved when the messages are intended for
inter-MSC, for USSD or for SMS use.
And finally, for GSUP error responses, for example a report that a specific
target could not be reached, it may not be possible to trivially derive the
right GSUP message consumer from the GSUP message (like "Routing Error").
Going towards that idea, I have put in place the following in my temporary dev
source tree:
enum osmo_gsup_entity {
        OSMO_GSUP_ENTITY_NONE = 0,
        OSMO_GSUP_ENTITY_HLR,
        OSMO_GSUP_ENTITY_VLR,
        OSMO_GSUP_ENTITY_ESME,
        OSMO_GSUP_ENTITY_SMSC,
        OSMO_GSUP_ENTITY_USSD, /* FIXME: what's an "ESME"/"SMSC" for USSD? */
        OSMO_GSUP_ENTITY_MSC_A,
        OSMO_GSUP_ENTITY_MSC_B,
        OSMO_GSUP_ENTITY_COUNT,
};
struct osmo_gsup_message {
        [...]
        enum osmo_gsup_entity source_entity;
        enum osmo_gsup_entity destination_entity;
        [...]
};
For calling the right rx_cb, we would need only an explicit target kind, but
for returning errors it is better to also include the source entity kind
explicitly.
A gsup_client_mux API:
struct gsup_client_mux_rx_cb {
        int (*func)(struct gsup_client_mux *gcm, void *data, const struct osmo_gsup_message *msg);
        void *data;
};

struct gsup_client_mux {
        struct osmo_gsup_client *gsup_client;
        /* Target clients by enum osmo_gsup_entity */
        struct gsup_client_mux_rx_cb rx_cb[OSMO_GSUP_ENTITY_COUNT];
};
int gsup_client_mux_init(struct gsup_client_mux *gcm, struct osmo_gsup_client *gsup_client);
int gsup_client_mux_tx(struct gsup_client_mux *gcm, const struct osmo_gsup_message *gsup_msg);
void gsup_client_mux_tx_error_reply(struct gsup_client_mux *gcm, const struct osmo_gsup_message *gsup_orig,
enum gsm48_gmm_cause cause);
For backwards compat, we would still need to do target classification by
message type, but only if no explicit destination_entity is set:
static enum osmo_gsup_entity gsup_client_mux_classify(struct gsup_client_mux *gcm,
                                                      const struct osmo_gsup_message *gsup)
{
        if (gsup->destination_entity)
                return gsup->destination_entity;
        /* Legacy message that lacks an explicit target entity. Guess by message type for backwards compat: */
        switch (gsup->message_type) {
        case OSMO_GSUP_MSGT_PROC_SS_REQUEST:
        case OSMO_GSUP_MSGT_PROC_SS_RESULT:
        case OSMO_GSUP_MSGT_PROC_SS_ERROR:
                return OSMO_GSUP_ENTITY_USSD;
        case OSMO_GSUP_MSGT_MO_FORWARD_SM_ERROR:
        case OSMO_GSUP_MSGT_MO_FORWARD_SM_RESULT:
        case OSMO_GSUP_MSGT_READY_FOR_SM_ERROR:
        case OSMO_GSUP_MSGT_READY_FOR_SM_RESULT:
        case OSMO_GSUP_MSGT_MT_FORWARD_SM_REQUEST:
                return OSMO_GSUP_ENTITY_SMSC;
        default:
                /* An osmo-hlr capable of forwarding inter-MSC messages always includes the target
                 * entity, so any other legacy message is for the VLR. */
                return OSMO_GSUP_ENTITY_VLR;
        }
}
We'd have:
HLR <-> VLR
ESME <-> SMSC
USSD <-> USSD (names??)
MSC_A <-> MSC_B
Thanks for your thoughts.
~N