Hi Holger,
I'm getting back to the following libosmocore commit introduced in 2015:
commit f558ed4bb9c0f00997b8f97c2b251a574c1a64c4
Author: Holger Hans Peter Freyther <holger@moiji-mobile.com>
Date:   Tue Jun 2 15:52:06 2015 +0200
I can see what you are doing, but I have absolutely no idea as to why.
AFAICT, the IPA CCM ID TLVs have the following structure:
* 16-bit length field
* one byte tag
* optional payload
The length field *includes* the tag, so the actual payload length is the value encoded in the length field minus one.
This means that the existing/classic ipa_ccm_idtag_parse() always returns a payload length that is one byte too large. I'm trying to address this in https://gerrit.osmocom.org/#/c/libosmocore/+/10216/
Your commit introduces ipa_ccm_idtag_parse_off(), which adds a noffset parameter. However, that offset is used not only to compute the actual "payload" size, but also to locate the subsequent CCM information elements. Hence, I cannot use any non-zero offset to parse a CCM blob.
I also don't see any of our code using the ipa_ccm_idtag_parse_off() function, except the test case - and the test case seems to use a different encoding from what is seen on the wire, i.e. it uses only a single-byte length field.
So if the function was just added for that test case, why not structure the data used in the test to reflect the on-the-wire protocol reality?
There must be some genius rationale behind it, but I'm unable to figure it out.
Maybe you still remember? Thanks!