Hi.
Attached is a small patch which replaces the direct call to comp128 from libosmocore with an auth API call. This will help to remove comp128 from the libosmocore public API and to use other auth functions in OpenBSC in the future.
--
best regards,
Max, http://fairwaves.ru
Hi LAPDm guys,
while I was digging into the LAPDm code to find out why a broken SABM
message is sent on a SAPI3 establish SACCH request during an active
call (leading to a delay of 3 s between the RSL EST request and the
response), I stumbled over the following:
The initial SABM message (not the one retransmitted after T200) has a
non-zero length and ends with 3 bytes taken from the end of the RSL
EST REQ message. The MS does not answer it.
Interestingly, the second SABM message, which gets sent after T200 (2 s),
has a length field of 0 and no trailing garbage.
The difference lies in the way the msgb is handled. In
rslms_rx_rll_est_req() the msg buffer passed in from RSL is reused.
In the case of IPA, the l2h is preceded by the 3-byte IPA header.
Since all code in lapdm.c that handles RSL messages seems to assume
that msg->data == msg->l2h, length computation is done based on that
assumption in some places. In addition, msgb_pull() is used in a way
that also leads to undefined results in this case: e.g. in
msgb_pull_l2h(), just the difference between l2h and l3h is pulled from
the beginning (msg->data).
The main difficulty in finding this was that much of the msgb handling
is done by manual access to the msgb fields. I'd really favor using the
predefined macros/inlines instead of meddling with the fields directly.
I've added a function msgb_pull_to_l3() to msgb.h which just skips over
everything in front of l3 (and therefore invalidates l2h and l1h) and
replaced all calls to msgb_pull_l2h() by calls to msgb_pull_to_l3().
In addition, I replaced manual l3 length adjustment by calls to
msgb_trim(). That alone fixed the SABM issue described above. See the
jerlbeck/fixes/lapd-sms branch for details.
But AFAICS there is still something to do:
- The remaining msgb_pull() calls need to be checked (at least the one
in l2_ph_data_ind() looks suspicious to me).
- L3 length computation should be done with the macros; the way it is
done in lapdm_send_ph_data_req() is broken.
- Why does lapd_msg_ctx have a length field that is not used?
- lapd_test.c should be extended to check other execution paths, too.
I've tried it for dummy packets and didn't get it working without
failing assertions (see below).
- How l2/data/.. in a msg are expected to be used/set should be
documented somewhere.
- It should be clarified whether all abis drivers should reset the msg
to start with l2.
Cheers
Jacob
======
The following patch still triggers the failing assertions; at least,
changing the l3len computation in lapdm_send_ph_data_req() influences
the result but doesn't fix it.
--- a/tests/lapd/lapd_test.c
+++ b/tests/lapd/lapd_test.c
@@ -123,8 +123,10 @@ static struct msgb *create_empty_msg(void)
static struct msgb *create_dummy_data_req(void)
{
struct msgb *msg;
+ const int dummy_l1len = 3;
msg = msgb_from_array(dummy1, sizeof(dummy1));
+ msgb_push(msg, dummy_l1len);
rsl_rll_push_l3(msg, RSL_MT_DATA_REQ, 0, 0, 1);
return msg;
}
Hi all,
We're moving this discussion to the mailing list, as it seems to be
more generic and complex than we initially thought.
The issue arose when I started load testing the OsmoTRX transceiver
with all gating in it disabled. As a result, all incoming noise was
processed as valid Normal Bursts and Access Bursts and sent up to
OsmoBTS. This leads to a situation similar to a RACH flood, where more
RACH requests come in than a BTS can reasonably process. And this
leads to unbounded growth of the AGCH queue in the BTS: it consumes a
few MB per minute.
I think that this is the root cause of the issue we saw at a festival
installation in the Netherlands, when 20K phones suddenly started
connecting to our station after the official networks went down. When
the number of RACH requests exceeded the available CCCH capacity
(which took less than 5 seconds), the mobile phones stopped answering
our IMM.ASS messages. The hypothesis is that the AGCH queue became so
long that requests were sent too late for a phone to receive them, and
thus no phones answered our IMM.ASS messages. Unfortunately, I wasn't
able to collect enough data to check this hypothesis at the time, and
we don't have another big festival on our hands at the moment.
Attached is a quick fix for the unbounded queue growth. It uses a
hardcoded value for the maximum queue length, which is fine for our
load testing but not flexible enough for real-life use. We should make
the AGCH queue long enough to keep performance high. At the same time,
it must not exceed the MS timeout, or _all_ IMM.ASS messages will miss
their target MSs.
We could make this parameter user-configurable on the BTS side, but it
seems more reasonable to calculate it automatically, depending on the
channel combination and the timeout values. But this should be done on
the BSC side. So the questions are:
1) what is a good way to calculate it?
2) should we configure this queue length over OML, or move the queue
from BTS to BSC?
--
Regards,
Alexander Chemeris.
CEO, Fairwaves LLC / ООО УмРадио
http://fairwaves.ru
Hi, I am trying to use OsmoSGSN in a topology with OpenGGSN and sim-bss,
which is a BSS simulator made by Alcatel-Lucent. The problem is that my
OsmoSGSN cannot communicate with sim-bss and cannot connect correctly.
This is caused by BSSGP not working. I would like to ask how I can
configure BSSGP on OsmoSGSN. From the VTY on OsmoSGSN I can configure
the NS state, but not the BSSGP state, such as the BVCI, etc. Could you
please help me? I hope to hear from you soon.
Thank you very much
Best regards,
Michal Grznár
On Sat, Jan 25, 2014 at 2:57 AM, Andreas Eversberg <andreas(a)eversberg.eu> wrote:
> so when i stop sending the bursts, the filler table will continue to
> transmit bursts. do i understand it correctly: if i send a single idle burst
> (frame??) after sending bursts, the filler table is disabled until new
> bursts are sent?
Sorry, I misspoke about disabling transmission. The filler table
resends the last burst in the multiframe, so the idle burst is usually
just a dummy burst, which does not actually stop physical
transmission. Fully turning off the downlink signal on a particular
slot requires disabling the slot via the channel combination.
That said, I agree with Alexander that the retransmission portion of
the filler table should be turned off because the behavior is
incorrect. With a few exceptions (e.g. FCCH), bursts should *not* be
retransmitted at L1 and doing so generates an invalid signal.
Note that the filler table cannot be completely disabled because it is
part of the real-time loop that drives the device I/O. If the upper
layer does not send a burst for a particular slot, or that burst
arrives late (stale burst), something still must be transmitted.
Currently, in that case, the burst comes from the filler table if the
slot is active or zeros if the slot is turned off. Again, I do not
think the current implementation is entirely correct, but that depends
on expectations of the upper layers.
-TT
Hello,
I have found a couple of bugs in the read callback of smpp_smsc.c with
regard to guarding against malformed packets.
Please see the attached patches for the fixes. They are also published in
the openbsc branch daniel/smpp-fixes.
Regarding the last fix, I don't think it is necessary to accept messages
up to SSIZE_MAX, but since I don't have or know a value for the maximum
sensible size, I left it like that for now.
Regards,
Daniel
--
- Daniel Willmann <dwillmann(a)sysmocom.de> http://www.sysmocom.de/
=======================================================================
* sysmocom - systems for mobile communications GmbH
* Schivelbeiner Str. 5
* 10439 Berlin, Germany
* Sitz / Registered office: Berlin, HRB 134158 B
* Geschaeftsfuehrer / Managing Directors: Holger Freyther, Harald Welte