OpenGGSN on live networks

Holger Freyther holger at freyther.de
Tue Oct 4 07:11:49 UTC 2016


> On 04 Oct 2016, at 08:38, Bjørn Remseth <rmz at telenordigital.com> wrote:
> 
> 
> In the sense that it does not talk to any policy control functions and has no external billing interfaces? It’s only functioning as a packet forwarding mechanism, right?


Right. It will happily create a PDP context for anyone that asks.



>> I think people that have deployed it in real networks have some (small) extra patches to make it work with specific networks and so far no one contributed them back (which makes a good case for the AGPLv3).
> 
> Do you have any indication about what the changes they had to do in order to be able to pass traffic?

I don't.




>> In terms of moving forward it might be good to see what is missing from it and then see how/if to add it. E.g. I have written a nice architecture of a scalable GGSN using ZeroMQ between the different parts of the system.
> 
> Interesting.  A few questions:
> 
>     * What kind of traffic do you envisage running over the ØMQ?
>     * Why do you  think  ØMQ is a good technology choice?
>     * What do you think of instead of  a more traditional command/control structure with a set of control plane nodes talking to a bunch of packet forwarding nodes over an RPC mechanism (such as grpc.io)?
>     * Or perhaps there is no contradiction here?

The question is how to scale a GGSN. One can announce multiple independent ones through DNS, or one can have a central (redundant, e.g. via keepalived) entry point into the network that then dispatches requests to a/the worker (and re-routes requests away from failed workers).

In my architecture I decided to pick a central entry point (e.g. to allow easier logging, and to hide the load-balancer inside the network instead of having to wait for DNS changes to propagate through the GRX). This means the front-end needs to manage the active workers and distribute requests across them.
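
As a minimal sketch of that front-end (Python with pyzmq; the ports and socket layout are illustrative, not something from OpenGGSN): ØMQ's ROUTER/DEALER pattern plus the built-in proxy gives exactly this behaviour, requests from the entry point are fair-queued across whichever workers are currently connected, and replies are routed back to the original requester.

    import zmq

    # Hypothetical front-end: requests arriving from the network come in
    # on the ROUTER socket and are fair-queued across all worker
    # processes connected to the DEALER socket. Workers that disconnect
    # simply stop receiving work; new workers join by connecting.
    ctx = zmq.Context()

    frontend = ctx.socket(zmq.ROUTER)   # central entry point
    frontend.bind("tcp://*:5555")       # illustrative port

    backend = ctx.socket(zmq.DEALER)    # fan-out to workers
    backend.bind("tcp://*:5556")        # illustrative port

    # Built-in blocking proxy: load-balances requests across workers
    # and routes each reply back to the requester that sent it.
    zmq.proxy(frontend, backend)

Workers would then connect a REP socket to tcp://localhost:5556 and answer requests as they arrive.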

The worker process(es) in turn need to communicate with a process that manages the GTP-U resources (allocating a tunnel id, making the reservation in the kernel). This needs to be a command/response operation (e.g. the PDP context create is only acknowledged once the GTP-U resource has been set up).
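
A sketch of that command/response step, again in Python/pyzmq (the allocate_tunnel message format, the endpoint name and the field names are made up for illustration): the worker blocks on the manager's reply and only acknowledges the PDP context create if the kernel-side GTP-U setup succeeded.

    import zmq

    ctx = zmq.Context()

    # Worker side: ask the GTP-U resource manager to allocate a tunnel
    # id and reserve it in the kernel before answering the SGSN.
    manager = ctx.socket(zmq.REQ)
    manager.connect("tcp://gtpu-manager:5557")  # hypothetical endpoint

    def create_pdp_context(imsi, apn):
        # Hypothetical wire format; REQ/REP guarantees exactly one
        # reply for this one request.
        manager.send_json({"op": "allocate_tunnel", "imsi": imsi, "apn": apn})
        reply = manager.recv_json()

        if reply.get("status") != "ok":
            # GTP-U resource could not be reserved: reject the create.
            return None

        # Only now acknowledge the PDP context create towards the SGSN.
        return reply["teid"]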

ØMQ has built-in support for both cases: it supports request/response operations, it can load-balance across a number of workers, and it keeps track of which workers are active (via live TCP connections). At the same time it makes it easy to implement/prototype parts of the system in different languages.

I'm not an ØMQ expert, but I played with it in the PCAP central storage/client application and it seemed robust/good enough for the intended use cases.


>> But then there is only one way forward. Deploy (for a subset of subscribers) and then see which SGSNs fail.
> 
> Ok, so one thing that could be useful would be to populate a fleet of phones with our sims, put the phones in various networks we can roam into that  use different SGSNs, then make the phones (automatically, on regular intervals) connect back to a dedicated APN that routes traffic back to an OpenGGSN instance and then  bring out the popcorn to see what goes wrong, fix the GGSN so that it’s no longer wrong then  rinse&repeat.  Are you thinking somewhat along these lines?


Yes. I wouldn't expect a lot of failures, mostly dealing with quirks of specific networks. Maybe sending GTP-C messages not to the IANA-assigned port but back to wherever the SGSN sent from.
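
As a sketch of that quirk handling (plain Python sockets; the GTP-C parsing is stubbed out): instead of always addressing replies to UDP port 2123, the IANA-assigned GTP-C port, answer to whatever source address and port the request actually came from.

    import socket

    GTPC_PORT = 2123  # IANA-assigned GTP-C port

    def handle_gtpc(msg: bytes) -> bytes:
        # Placeholder: real code would parse the GTP-C header and build
        # the proper response (e.g. a Create PDP Context Response).
        return msg

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", GTPC_PORT))

    while True:
        data, addr = sock.recvfrom(4096)
        # addr is the (host, port) the message actually came from; some
        # SGSNs do not send from 2123, so reply to the observed source
        # instead of assuming the assigned port.
        sock.sendto(handle_gtpc(data), addr)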


holger

