Dear all,
for probably about a year (or longer) we have been putting up with VTY
tests that break builds under unclear circumstances. I personally believe
the probability of a VTY test failing has recently increased again, and
this is barely tolerable anymore. Often, rebasing/cherry-picking the given
patch one or two times doesn't help either, even though the patch under
test doesn't touch anything VTY-related at all. For example,
https://gerrit.osmocom.org/3899 failed in
https://jenkins.osmocom.org/jenkins/job/OpenBSC-gerrit/2451/ and
https://jenkins.osmocom.org/jenkins/job/OpenBSC-gerrit/2454/
I know Neels and others have already spent significant time in the past
trying to resolve this - unsuccessfully.
So I think the situation has reached a point where we should disable the
VTY tests, or at least the specific part of the VTY tests that is known to
break most frequently.
I definitely want us to have *more* testing, not less. However, when the test
itself is not stable yet - particularly after that much time - we cannot
have that buggy test delay our development.
I would vote for running those tests regularly (daily, every few hours, you name
it), but not as part of the mandatory build verification for gerrit V+1.
What do others think?
--
- Harald Welte <laforge(a)gnumonks.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
On Mon, Sep 11, 2017 at 11:01:23AM +0200, Pau Espin Pedrol wrote:
> today I found the same issue again in prod (/sierra_2 -> /sierra_4).
> Interestingly, both /sierra_2 and /sierra_3 seem to be using the same
> firmware. dmesg shows again a disconnect from the usb port.
We have this idea of giving the same modem a persistent name in ofono, which
would address the symptom, but it is still curious why this happens in the
first place.
I had a watchdog script in place that power cycles the quad modem board and
restarts ofono as soon as the modem names mismatch what we expect. But lynxis
disabled it, the reasoning being that we would then not catch ofono errors.
That may be true; my idea was to be able to recover ofono automatically,
without manual intervention. I guess it all depends on how closely you
(lynxis) watch the GSM testers for failures? My focus is not particularly on
ofono, and when I hit a broken situation and need to test things, I will
restart ofono rather than investigate the failure in depth - dismissively,
"come on ofono, do what I want now." What do you guys think about this?
> I guess only /sierra_2 crashes because it's the first modem in the
> resources.conf list and thus it is usually used in all tests (for instance,
> tests which require only 1 modem), and probably some of the steps done in
> one of those tests is crashing the modem.
Something I thought about before: we could implement a kind of random or
round-robin selection so that we don't always pick the first matching
resources in the list. The advantage is that we would cycle through the
hardware and be forced to formulate e.g. modem requirements precisely. The
disadvantage is that not every test run is exactly the same, adding
complexity that may obscure analysis. I.e. to reproduce a run on a
particular modem, we would have to somehow clamp that randomness, e.g. log
a random seed at the start and allow passing a random seed in on the
cmdline, roughly as in the sketch below.
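To illustrate the idea only - this is a minimal sketch, not actual
osmo-gsm-tester code; the function and resource names are made up:

import random
import time

def pick_resources(candidates, count, seed=None):
    # 'candidates' is assumed to be the list of resources (e.g. modems)
    # that already match the test's requirements.
    if seed is None:
        seed = int(time.time())
    # Log the seed so a particular pick can be reproduced later.
    print('resource selection seed: %d' % seed)
    rng = random.Random(seed)
    return rng.sample(candidates, count)

# Reproduce an earlier run by passing back the seed that was logged:
modems = ['sierra_2', 'sierra_3', 'sierra_4']
print(pick_resources(modems, 1, seed=1505120483))

Passing the same seed yields the same selection, so a failure seen on one
particular modem could still be reproduced deterministically.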
~N
Dear List
I am trying to run osmocomBB on a Motorola C118 together with OpenBSC. I
got the OpenBSC network up on my phone: it works well and I am able to
register on the OpenBSC network.
But when I try to run osmocomBB with OpenBSC, I am not able to get the
network.
However, when I run the RSSI firmware on the C118 phone, I do get the
network and it works fine.
I am using the default configuration file for OpenBSC and a nanoBTS with
1800 MHz support.
Is there any configuration change needed in OpenBSC?
Dear guys,
can anyone help me? I am using the osmo-gsm-tester osmo-bts.cfg and
openbsc.cfg as well.
The BTS is up, auth policy accept-all is set, and I have even added the
subscriber IMSI, but my phone still cannot register on the network.
My setup is a LimeSDR using the latest osmo-bts and OpenBSC with OpenUSRP.
Do you think it is a hardware issue?
Please help! Thanks
--
best regards,
DUO
Hi.
The OsmoBTS build (unfortunately) depends on OpenBSC because it uses the
gsm_data_shared.h header. ATM this file is available in osmo-bsc and
osmo-msc (and maybe in other split repos as well). Which is the "canonical"
one from OsmoBTS' point of view?
Also, when would be the right time to move OsmoBTS' jenkins job to use one
of the split repos instead of the old OpenBSC?
--
Max Suraev <msuraev(a)sysmocom.de> http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Alt-Moabit 93
* 10559 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschaeftsfuehrer / Managing Director: Harald Welte
Hi Blobb,
I'd like to probe your opinion on a discussion we had today about our
jenkins. So far our setup was manual, and we would like to (somewhat)
automate the process of providing build dependencies on slaves.
One solution that was discussed longer than others would be to use docker.
Each of our repositories that needs a build would have its own Dockerfile,
containing the complete setup of its dependencies. The idea is that anyone
can easily set up an identical build on any new jenkins build slave or even
at home; no complex config of the jenkins build slave is needed.
The point being, if we adopt docker in such a way, it would be logical to
make use of the docker cache to save unnecessary rebuilds. It would be a
generic solution instead of our artifact store.
I feel a bit bad for accepting your contributions, doing review and
keeping you busy, just to then talk about docker to solve the problem
instead; I appreciate your presence and would like to keep you involved.
Interestingly enough, we are experimenting with the artifact store on that
one build job that has already been using docker for quite some time...
(It was for the separate network space, not really for artifacts.)
In any case, I would like to include you in the discussion, and maybe you
would also like to be involved in maturing the idea? Until now it is still
wild and no-one has taken actual steps.
An example to follow would be laforge's recently added
https://git.osmocom.org/docker-playground/tree/
One interesting bit is that it has a method to check whether a given git
branch has changed, and rebuilds the docker image only if it has:
https://git.osmocom.org/docker-playground/tree/osmo-ggsn-master/Dockerfile#…
ADD http://git.osmocom.org/openggsn/patch/?h=laforge/osmo-ggsn /tmp/commit
This line fetches the given URL (in this case the latest patch on that
branch) and considers the docker image as unchanged if that URL returns the
same content as last time. As soon as a new patch shows up, things are
rebuilt.
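Conceptually, the freshness check that docker's ADD cache gives us boils
down to something like this sketch (docker does this for us; the URL and
cache path below are just examples):

import hashlib
import os
import urllib.request

def branch_changed(url, cache_file):
    # Fetch the latest-patch URL, compare against what we saw last time,
    # remember the new state, and report whether a rebuild is needed.
    new = hashlib.sha1(urllib.request.urlopen(url).read()).hexdigest()
    old = None
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            old = f.read().strip()
    with open(cache_file, 'w') as f:
        f.write(new)
    return new != old

if branch_changed('http://git.osmocom.org/openggsn/patch/?h=laforge/osmo-ggsn',
                  '/tmp/osmo-ggsn.last_patch'):
    print('branch changed, docker image needs a rebuild')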
In this sense we could have docker images cascading on top of each other,
adding individual dependencies and reusing identical states auto-detected
by docker. All build steps would be in the Dockerfile.
For builds that aren't used by other builds (like the "final" programs,
osmo-msc, osmo-sgsn, osmo-bsc, ...) we don't need to store the result, so we
also don't need to include the program's build in the Dockerfile: on a docker
image with all dependencies, run the final build step by invoking 'docker
run', like we currently do for the OpenBSC-gerrit job, and then just
discard the changes.
Remotely related: we have the osmo-gsm-tester that is running binaries
produced by jenkins to do automated tests on real GSM hardware. Currently
we compile and tar the binaries, copy them over, extract, set
LD_LIBRARY_PATH and run: a bit tedious and problematic e.g. for
mismatching debian versions. Docker could simplify this by guaranteeing a
fixed operating system around the binary, actually using hub.docker.com (or
maybe one day a private docker hub) instead of copying over binary tars
manually, sharing across any number of build slaves, and with the added
bonus of having the resulting binaries run in a separate network space.
As I said, on the one hand I appreciate our work on the artifact store, on
the other hand the docker way undeniably makes for a good overall solution
to simplify things in general, with artifact re-use coming "for free"...
One advantage of the artifact store though is that the artifacts we manage
are not entire debian installations but just a few libs and executables in
a tiny tar.
What is your opinion?
~N
Hi all!
I just installed and configured osmo-bts-virtual and osmocom-bb with
virtphy.
Let me say - this is brilliant! It's so easy to do such things as
trigger IMSI Attach and Detach - so much easier than entering and
exiting airplane mode or power off/on.
Also, MO SMS is a breeze. I have a script that injects SMS regularly, so I
don't have to turn to the mobile and press buttons after each code change in
my SMS handler; I just make the change and wait for the next SMS to hit it!
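In case anyone wants to do the same: one way is to push a single command
into a telnet VTY. A minimal sketch - host, port and the actual SMS-sending
command below are placeholders that depend on your setup:

import socket

def vty_command(cmd, host='127.0.0.1', port=4242):
    # Send one command to a telnet VTY and print whatever comes back.
    s = socket.create_connection((host, port))
    s.settimeout(2.0)
    s.sendall((cmd + '\r\n').encode())
    try:
        print(s.recv(4096).decode(errors='replace'))
    except socket.timeout:
        pass
    s.close()

# e.g. vty_command('<the sms-sending command of your setup>')

Run that from cron or a shell loop and the SMS handler gets exercised
without touching the phone.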
So thanks so much to those responsible for making this happen!
Now, one thing that is failing badly for me right "out-of-the-box" is MT
SMS.
As I am totally new to the osmocom-bb code, I guess the best would be a
ticket with some pcaps, but I thought I would ask first if it is a known
issue?
Basically, BSC is doing this:
DLSMS <0024> gsm_04_11.c:127 GSM4.11 TX [HEX of msg redacted]
DLSMS <0024> gsm0411_smc.c:247 SMC(954) TC1* timeout, retrying...
while on the mobile side:
<0005> gsm48_mm.c:3909 (ms 1) Received 'RR_EST_IND' from RR in state MM idle (sapi 0)
<0005> gsm48_mm.c:912 new state MM IDLE, normal service -> wait for network command
Then this repeats:
<0001> gsm48_rr.c:4775 Indicated ta 0 (actual ta 0)
<0001> gsm48_rr.c:4777 Indicated tx_power 19
<0001> gsm48_rr.c:4799 ACCH message type 0x00 unknown.
<0001> gsm48_rr.c:664 MON: f=56 lev=>=-47 snr= 0 ber= 63 LAI=262 42 0001 ID=0000 TA=0 pwr=19 TS=1/1
<0001> gsm48_rr.c:2866 MEAS REP: pwr=19 TA=0 meas-invalid=0 rxlev-full=-47 rxlev-sub=-47 rxqual-full=0 rxqual-sub=0 dtx 0 ba 0 no-ncell-n 7
<0001> gsm48_rr.c:2161 PAGING ignored, we are not camping.
I'd like to share a VTY config error analysis that has a tricky solution:
I started 'osmo-bsc -c osmo-bsc.cfg' with a config following
openbsc/doc/examples/osmo-bsc:
net
 [...]
 bts 0
  [...]
  periodic location update 30
  trx 0
   [...]
and get this error:
There is no such command.
Error occurred during reading below line:
trx 0
what? 'net / bts / trx' is no command??
Solution: I'm on the vlr_3G branch (incorporating 2G via A-interface) and on
this branch, I've actually moved the 'periodic location update N' command a
level up, from net / bts / periodic to net / periodic (background: assuming
that OsmoMSC does not have individual BTS info, we moved some settings up to
network level; whether this makes sense for osmo-bsc is a different question,
it's just what happens to be on the vlr_3G branch now).
So there is no net / bts / periodic command.
Why do I get an error for net / bts / trx instead?
Two reasons:
1. 'trx 0' was the line following the 'periodic' command,
2. since 'periodic' exists one level above, the vty code goes to the parent
node automatically.
About 2: we tend to indent our VTY config files, but in fact the indentation
has no effect whatsoever, it is just eye candy (very useful eye candy).
The code in question: if a command does not exist, try 'vty_go_parent()' and
see if the command exists there. That's what allows us to omit the 'exit'
commands that would otherwise be needed to explicitly return to the parent
node in our config files:
libosmocore/src/vty/command.c:
int config_from_file(struct vty *vty, FILE * fp)
{
        int ret;
        vector vline;

        while (fgets(vty->buf, VTY_BUFSIZ, fp)) {
                vline = cmd_make_strvec(vty->buf);

                /* In case of comment line */
                if (vline == NULL)
                        continue;

                /* Execute configuration command : this is strict match */
                ret = cmd_execute_command_strict(vline, vty, NULL);

                /* Try again with setting node to CONFIG_NODE */
                while (ret != CMD_SUCCESS && ret != CMD_WARNING
                       && ret != CMD_ERR_NOTHING_TODO
                       && is_config_child(vty)) {
HERE ----->             vty_go_parent(vty);
                        ret = cmd_execute_command_strict(vline, vty, NULL);
                }

                cmd_free_strvec(vline);

                if (ret != CMD_SUCCESS && ret != CMD_WARNING
                    && ret != CMD_ERR_NOTHING_TODO)
                        return ret;
        }

        return CMD_SUCCESS;
}
In this case the 'periodic...' command does in fact now exist one level above,
so the vty_go_parent() is successful and running that command works. But the
VTY config parser is then left at the 'network' level, no longer in 'bts 0',
and hence refuses to accept the following bts-level command, in this case
'trx 0'. Confusing!
So it's not a bug, it's a feature. But it's a feature we might see quite often
if we move the 'periodic' command up to network level and users attempt to use
their old config files.
Same goes for the 'timezone' command, BTW, so we might want to rename commands,
or re-consider moving commands one level up in the first place. Maybe we should
leave backward compat catchers in place that print a warning.
~N
--
- Neels Hofmeyr <nhofmeyr(a)sysmocom.de> http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Alt-Moabit 93
* 10559 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschäftsführer / Managing Directors: Harald Welte
Hi!
We have some builds that happen inside a docker container, and some that
happen natively on the (Debian 8) build slave. Can somebody involved with
this explain why that is the case, and what the rationale is here?
Just looking at the setup, I'm unable to figure out whether there's any
particular reason for this, or whether it's simply "we started without and
then did some builds in docker but nobody migrated the other builds over"
From my point of view, I would have assumed that building in different containers
would make sense to e.g. build on different distributions / versions, or
building against older libosmo* vs. building against libosmo* from nightly
package feeds vs. re-building all dependencies from master.
But it appears that only a single container is built, and that container is
used for some jobs (like osmo-msc-gerrit) but not for other jobs.
If I missed some wiki page or mailing list posts with related information,
a pointer would be helpful. Thanks in advance.
Regards,
Harald
--
- Harald Welte <laforge(a)gnumonks.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Hi Neels,
The patchset for OpenBSC's jenkins.sh build script has been uploaded [1].
But a small change [2] to osmo-build.sh is still pending, because
https://git.osmocom.org is down atm. Furthermore, the osmo-build.sh script
should probably not depend on cgit's availability. :)
After [2] has been submitted, the following steps are necessary to verify
[1] via gerrit:
- trigger "update-osmo-ci-on-slaves"
- add ARTIFACT_STORE environment variable to all slaves/nodes e.g. [3]
- add following arguments to docker invocations of openBSC jobs [4]:
-e JOB_NAME="$JOB_NAME"
-e ARTIFACT_STORE="/ARTIFACT_STORE"
-v "$ARTIFACT_STORE:/ARTIFACT_STORE"
Afterwards, re-triggering [1] should result in a '+1 Jenkins Builder'.
I have sufficient permissions to apply the above steps, except for +2'ing
[2]. Also, you may want to suggest the absolute path for the ARTIFACT_STORE
variable?
Regards,
André
[1] https://gerrit.osmocom.org/#/c/3823/
[2] https://gerrit.osmocom.org/#/c/3822/
[3] https://jenkins.osmocom.org/jenkins/computer/OsmocomBuild1/configure
[4] https://jenkins.osmocom.org/jenkins/view/Jenkins-Gerrit/job/OpenBSC-gerrit/