libosmocore wishlist

This is merely a historical archive of years 2008-2021, before the migration to mailman3.

A maintained and still updated list archive can be found at https://lists.osmocom.org/hyperkitty/list/OpenBSC@lists.osmocom.org/.

Holger Freyther holger at freyther.de
Tue Mar 26 05:23:31 UTC 2019



> On 20. Mar 2019, at 10:13, Harald Welte <laforge at gnumonks.org> wrote:
> 
> While working on the talloc context patches, I was wondering if we should
> spend a bit of time to further improve libosmocore and collect something
> like a wishlist.



> I would currently identify the following areas:
> 
> 1) initialization of the various sub-systems is too complex, there are too
>   many functions an application has to call.  I would like to move more
>   to a global "application initialization", where an application registers
>   some large struct [of structs, ...] at start-up and tells the library
>   the log configuration, the copyright statement, the VTY IP/port, the config
>   file name, ... (some of those can of course be NULL and hence not used)

ack. One big struct for options? But how would that work across libosmocore
and libosmo-netif/libosmo-abis?

In Go there is a common "pattern" of passing an options struct into the method.
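For illustration, a minimal C sketch of what such a descriptor could look like. The
names osmo_app_desc and osmo_app_init are made up for this sketch, not part of any
existing libosmocore API:

```c
/* Hypothetical "one big struct" application descriptor; the names
 * osmo_app_desc and osmo_app_init are assumptions for this sketch. */
#include <stddef.h>

struct log_info; /* opaque; would come from the logging sub-system */

struct osmo_app_desc {
	const char *name;           /* short program name */
	const char *copyright;      /* printed at start-up; may be NULL */
	const char *config_file;    /* NULL = no config file parsing */
	const char *vty_bind_ip;    /* NULL = VTY disabled */
	int vty_port;
	const struct log_info *log; /* logging configuration */
};

/* One call initializes all sub-systems; other libraries
 * (libosmo-abis, libosmo-netif, ...) could consume the same struct,
 * each picking out the fields it cares about. */
static int osmo_app_init(const struct osmo_app_desc *d)
{
	if (!d || !d->name)
		return -1;
	/* ... dispatch to logging / VTY / config-file init here ... */
	return 0;
}
```

Fields left NULL would simply disable the corresponding sub-system, as suggested above.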


> 2) have some kind of extensible command line options/arguments parser
>   It would be useful to have common/library parts register some common
>   command line arguments (like config file, logging, daemonization, ..)
>   while the actual application extends that with only its application-specific
>   options.  I don't think this is possible with how getopt() works, so
>   it would require some new/different infrastructure how applications would
>   register their arguments

I started to like the absl flags infrastructure (but we need to make sure
to not have an excessive amount of them):

https://abseil.io/docs/python/guides/flags

flags.DEFINE_integer('age', None, 'Your age in years.', lower_bound=0)

The same concept exists for C++, Java, Go, Python and Bash.
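A getopt()-independent registry along those lines could be sketched in C roughly as
follows; osmo_opt_register and osmo_opt_find are hypothetical names invented for this
sketch:

```c
/* Sketch of libraries registering their own long options before the
 * application parses argv; this is not an existing libosmocore API. */
#include <stddef.h>
#include <string.h>

struct osmo_opt {
	const char *name;                /* e.g. "config-file" */
	const char *help;                /* one-line description */
	int (*handle)(const char *value);/* called when the option is seen */
	struct osmo_opt *next;
};

/* Global registry; libosmocore, libosmo-abis etc. would each append
 * their common options (logging, daemonization, ...) here. */
static struct osmo_opt *g_opts;

static void osmo_opt_register(struct osmo_opt *o)
{
	o->next = g_opts;
	g_opts = o;
}

static struct osmo_opt *osmo_opt_find(const char *name)
{
	struct osmo_opt *o;
	for (o = g_opts; o; o = o->next)
		if (!strcmp(o->name, name))
			return o;
	return NULL;
}
```

The argv loop would then look up each `--name` in the registry instead of a
hard-coded getopt string, so the application only registers what is specific to it.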


> 3) move global select() state into some kind of structure.  This would mean
>   that there could be multiple lists of file descriptors rather than the
>   one implicit global one.  Alternatively, turn the state into thread-local
>   storage, so each thread would have its own set of registered file descriptors,
>   which probably makes most sense. Not sure if one would have different 'sets'
>   of registered file descriptors in a single thread.  The same would apply
>   for timers: Have a list of timers for each thread; timeouts would then
>   also always execute on the same thread. This would put talloc context, select
>   and timers all in the same concept: Have one set of each on each thread,
>   used automatically.

Do we plan to have threads? On the low-end we could have an EventServer that runs
one epoll_wait per thread. But then we are in the game of scheduling across the
threads, work stealing, etc. Maybe something already exists we can use?

On the high-end I wondered if we could have something like "fibers" and FSMs and
CSP as first class citizens?

* When creating a new fsm it gets scheduled on the least busy worker thread
* When creating a child it stays on the same thread.
* Components communicate strictly using a CSP like primitive.
* We can scale up/scale down worker threads based on load.
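The "one set of select/timer state per thread" idea could be sketched with C11
thread-local storage; struct ev_ctx and ev_ctx_get are assumptions for this sketch,
not the current osmo_fd/osmo_timer API:

```c
/* Per-thread event-loop state; real code would keep linked lists of
 * struct osmo_fd and struct osmo_timer_list here instead of counters. */
#include <stddef.h>

struct ev_ctx {
	int nr_fds;    /* registered file descriptors */
	int nr_timers; /* armed timers */
};

/* Each thread transparently gets its own context, so fd registration
 * and timer callbacks always stay on the registering thread. */
static struct ev_ctx *ev_ctx_get(void)
{
	static _Thread_local struct ev_ctx ctx;
	return &ctx;
}
```

This mirrors the talloc-context-per-thread idea: the existing APIs could keep their
signatures and just resolve the implicit global through ev_ctx_get() internally.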


4) Adopt/build an RPC mechanism (maybe evolve GSUP into it). I underestimated the
"network" effect of every binary offering the same RPC interface. Suddenly sending an
SMS, placing a call, etc. becomes:

	the_rpc_cli endpoint service.method < arguments

And to inspect a service:

	the_rpc_cli endpoint ls [service.method]

5) Plan for seamless/cooperative upgrades, e.g. by passing fds to the new process:
leave existing TCP connections in the old process and accept new ones in the new
version. The difficulty is how to deal with the VTY and other services. We probably
need a meta server.. and meta server upgrades.
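The fd hand-over itself is standard POSIX: an open descriptor can be passed to another
process over a Unix-domain socket with an SCM_RIGHTS control message, roughly like
this:

```c
/* Pass an open fd between processes via SCM_RIGHTS (standard POSIX). */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send 'fd' over the Unix-domain socket 'sock'; returns 0 on success. */
static int send_fd(int sock, int fd)
{
	struct msghdr msg = {0};
	char buf[1] = { 'F' };
	struct iovec iov = { .iov_base = buf, .iov_len = 1 };
	char cmsgbuf[CMSG_SPACE(sizeof(int))];
	struct cmsghdr *cmsg;

	memset(cmsgbuf, 0, sizeof(cmsgbuf));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsgbuf;
	msg.msg_controllen = sizeof(cmsgbuf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive an fd from 'sock'; returns the new fd, or -1 on failure. */
static int recv_fd(int sock)
{
	struct msghdr msg = {0};
	char buf[1];
	struct iovec iov = { .iov_base = buf, .iov_len = 1 };
	char cmsgbuf[CMSG_SPACE(sizeof(int))];
	struct cmsghdr *cmsg;
	int fd = -1;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsgbuf;
	msg.msg_controllen = sizeof(cmsgbuf);
	if (recvmsg(sock, &msg, 0) != 1)
		return -1;
	cmsg = CMSG_FIRSTHDR(&msg);
	if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
	    cmsg->cmsg_type == SCM_RIGHTS)
		memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
	return fd;
}
```

The meta-server question remains: something has to own the Unix socket across
upgrades and decide which connections stay behind and which move.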

Or this might be the time to break from the VTY. Give up on runtime reconfiguration (we
never had a solid model for it) and see how plain RPC can save our day?







