Hi Neels,
in the following commit:
commit 89991fdb7c01fa42e323577b4026985e580763cf
Author: Neels Hofmeyr <neels(a)hofmeyr.de>
Date: Mon Jan 28 19:06:53 2019 +0100
you introduce language about restricting the timeout to a signed 32-bit value,
as time_t is not well-defined on 32-bit systems.
What I'm somehow missing: where are we actually using time_t in this context?
Neither the osmo_fsm code nor the underlying osmo_timer_list seems to be
using time_t.
So why would we bother about time_t here?
Thanks for sharing your thoughts.
Regards,
Harald
--
- Harald Welte <laforge(a)gnumonks.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Hi all!
As some of you know, I'm currently using libosmocore, and specifically
osmo_fsm inside some cortex-m microcontroller projects. One of the
features I need there: FSM timeouts below 1s.
osmo_fsm uses osmo_timer_list as its underlying timer, and that timer can
express any timeval (seconds + microseconds) as a timeout. However, the
osmo_fsm API doesn't expose that part.
What I could do now: simply add an osmo_fsm_inst_state_chg2 which takes one
more argument for microseconds. However, I find that two-argument
"seconds, microseconds" style quite clumsy.
So what I'm suggesting instead is to add new API that takes a single timeout
value (like the current API), but specifies the timeout in milliseconds. The
old API then becomes a wrapper around the new API, simply multiplying timeouts
by a factor of 1000.
Does anyone think this is too restrictive? I currently cannot think of use
cases where timeouts below 1ms, or with granularity finer than 1ms, matter *and*
where one would want to use osmo_fsm. But given how the speeds of systems (both
processors and communication systems) keep increasing, we might eventually
need that? I'd currently assume that osmo_fsm, with all of its internal
logging etc., is too heavy-weight for such super-time-constrained use cases.
And in terms of value range: assuming a 32-bit architecture, a 2^32
millisecond value is sufficient to express timeouts of up to 1193
hours, which in turn is about 49 days.
Any comments, ideas, thoughts?
Regards,
Harald
Hi all,
Could anyone advise me which solution I should look at for feeding the
LimeSDR mini with an external, GPSDO-locked clock?
Btw, which do you think is the better option for a home-brew BTS: the
LimeSDR mini with external clock plus an Intel mini-PC, or the LimeNet mini?
Cheers,
Rafael Diniz
Looking at https://jenkins.osmocom.org/jenkins/view/TTCN3/job/ttcn3-bsc-test-sccplite/…
looks quite horrible since build 357.
However, with my own manual tests, all of those pass.
It looks like most of those failures are sporadic, and I cannot reproduce them.
I am not sure how to find out what is going on there; I can just say that
osmo-bsc looks quite stable AFAICT, and that it seems to be non-determinism /
timing in the ttcn3 and/or system load causing the failures.
~N
I think here is a bug:
char *osmo_quote_str_c(const void *ctx, const char *str, int in_len)
{
	char *buf = talloc_size(ctx, OSMO_MAX(in_len+2, 32));
	if (!buf)
		return NULL;
	return osmo_quote_str_buf2(buf, 32, str, in_len);
}
We may allocate more than 32 bytes (see OSMO_MAX()), but still allow writing
only 32 bytes? Looks like the allocated length should be stored in a local
variable and passed on to osmo_quote_str_buf2().
And if I'm right, what is the 32 for? "At least 32"?
~N
Dear all,
as you can see at
https://jenkins.osmocom.org/jenkins/job/ttcn3-bsc-test-sccplite/test_result…
there has been a large increase in test failures of the SCCPlite related tests in
osmo-bsc over the last two builds/tests.
Does anyone know what kind of changes were made that could have impacted the
related behavior?
We don't see any such failures on AoIP, leading me to suspect that some changes
were tested only for AoIP but not for SCCPlite?
Regards,
Harald
Dear Osmocom community,
A question about this part of the code - the function sgsn_ggsn_ctx_drop_pdp:
http://git.osmocom.org/osmo-sgsn/tree/src/gprs/gprs_sgsn.c#n720
The second branch of the condition (hard-dropping) is taken even when the phone
is registered, and hence no Deactivate PDP Context Request is sent to the phone.
Due to that, the phone doesn't know that the PDP context was deleted on the
network side and keeps acting as if it were still active -> PS isn't working
when this happens.
Any suggestions/thoughts on how this can be fixed?
Thanks
Kind regards,
Mykola
Hi all,
Has anyone tried the Osmo GSM stack (including the trx) with the LimeNet
Micro with the Pi compute module? My question is whether it is stable enough
for any real deployment, or just another toy for the lab.
With the USB3 LimeSDR, after a few hours I _always_ get a broken trx,
with lots of failure messages...
Thanks,
Rafael Diniz
Rhizomatica
Hi,
As I'm looking into adding a generic cell modem framework to the linux
kernel (to create session netdevs etc.), I started looking for a
metadata encapsulation, a la Radiotap (I'm a wifi guy :-) ).
So obviously, I found gsmtap, but for my use case it doesn't really
address most of the interesting data, and it got me wondering. So a few
questions, if I may:
1) Why the design with encapsulating it in UDP? Radiotap is just a raw
header without IP etc. in front, and you use it with tcpdump,
wireshark or similar tools on the local system. What's the value in
having something "network transparent"?
2) The format of gsmtap doesn't seem very extensible, but I guess a new
version could be made that has a TLV-based format or so. I'd have
argued that a new version isn't even needed, but the length field is
only 8 bits right now which seems too short.
(speaking of versions - the docs say "version, set to 0x01 currently"
but "#define GSMTAP_VERSION 0x02")
3) Does the packet data follow the gsmtap header? It's not really clear
to me based on reading the wireshark code.
In particular, the data I'm thinking of is higher-level things, like the
session ID for a frame when it's going through the kernel, or perhaps a
flow label on RX, etc.
Also, vendor-specific data would be useful, e.g. to encapsulate the
device-specific headers like QMI, where such metadata is encapsulated in
a vendor- or device-specific way, which you'd want to see for debugging
certain things, but for other things the generic "session ID" type
information - encoded in a vendor-agnostic way - would be better to show
in wireshark.
Since it doesn't seem possible to use gsmtap as it is today, would it make
sense to define a new gsmtap that (say) has version 3 or so, followed by an
overall length and TLVs? I do note that this wouldn't be compatible with the
current wireshark code, as it doesn't check the version but just shows it...
Or would it make more sense to define a new ARPHDR_WWANTAP like
ARPHDR_IEEE80211_RADIOTAP and just use that instead of encapsulating in
IP/UDP, and then have a completely new (extensible) protocol inside of
that? I'm not really sure I see the point of UDP encapsulation anyway.
Thanks,
johannes