Change in libosmocore[master]: add osmo_use_count API

Harald Welte gerrit-no-reply at lists.osmocom.org
Thu Mar 7 17:28:51 UTC 2019


Harald Welte has posted comments on this change. ( https://gerrit.osmocom.org/13121 )

Change subject: add osmo_use_count API
......................................................................


Patch Set 2:

> I've gone through a few iterations, some using limited size arrays,
 > some using enum values, and this is the final result that I am
 > actually quite satisfied with. I know it *looks* like it is
 > spamming on dynamic allocation and list iteration but consider:
 > 
 > - in osmo-msc I actually avoid all dynamic allocation using static
 > use count entries (see in-code comment below, use_count.h:174).
 > 
 > - use count lists so far have at most six entries in practice,
 > usually more like 3. Traversing six llist->next pointers is less
 > effort than doing your common strlen(); compare with osmo_strlcpy()
 > which always does a strlen() no matter what, and we don't think
 > twice.

"Premature Optimization is the root of all evil". But SCNR:  The expensive
cost about linked list iteration is not CPU cost, it's cache line misses. Your
CPU is likely going to spend thousands of cycles doing nothing just waiting
for data from RAM to arrive, and in the worst case for each of the list elements
you iterate over, as they are not contiguously allocated and spread across RAM.

I'm not saying this level of optimization is important here; I'm just sharing
some perspective.  The kernel has explicit prefetch macros in its linked-list
code to improve the situation, though I'm not sure how much they help.

 > - Compared to get_value_string() which we do *all* *the* *time* (at
 > least when logging is enabled), this API imposes negligible
 > traversal overhead.

Yes.  Wireshark now has an optimized get_value_string derivative which
generates a hash table at runtime.  We could switch to that if it ever
became an issue.  Debug logging is not enabled in typical operation, though.


 > - Let alone comparing to ran connection or VLR subscriber or
 > gsm_trans lookup: so far we keep all gsm_trans items for all
 > subscribers in *one single global* llist, and when looking for a
 > trans for a given subscriber, we more often than not traverse the
 > entire list. 

Yes, but that's rather easy to replace with a hash table or the like
(the Linux kernel has plenty of hash tables where the individual buckets
are llist heads), and only a few places would need to be changed.


 > I can understand that this *seems* quite wasteful at first, but if
 > you take a closer look I expect that you guys will agree with me:
 > it is a fairly small overhead for a very nice API.

Contrary to some of the other things we do so far, I think that with this new
infrastructure it may not be as easy to optimize the implementation later
without breaking its users.

I'll read the patch one more time and provide more review.  I'm not fundamentally
opposed and if you think it's what you need, so be it.


-- 
To view, visit https://gerrit.osmocom.org/13121
To unsubscribe, or for help writing mail filters, visit https://gerrit.osmocom.org/settings

Gerrit-Project: libosmocore
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: Ife31e6798b4e728a23913179e346552a7dd338c0
Gerrit-Change-Number: 13121
Gerrit-PatchSet: 2
Gerrit-Owner: Neels Hofmeyr <nhofmeyr at sysmocom.de>
Gerrit-Reviewer: Jenkins Builder (1000002)
Gerrit-Reviewer: Neels Hofmeyr <nhofmeyr at sysmocom.de>
Gerrit-CC: Harald Welte <laforge at gnumonks.org>
Gerrit-Comment-Date: Thu, 07 Mar 2019 17:28:51 +0000
Gerrit-HasComments: No
Gerrit-HasLabels: No