I appreciate that the Osmocom Docker configurations exist principally for automated testing, so this may not make sense, but I wondered whether builds are published to a registry?
The Makefile in the make directory of the docker-playground repo sets docker.io as the registry and has a target for docker push, and there are OBS Release.key files in the project sub-directories, but I couldn't find container builds being published anywhere.
Was hoping I could just pull containers for the latest project versions, without cloning and building the containers in docker-playground.
I also saw the comments from 2017 on Docker's shortcomings; am I right in assuming SCTP works OK now in Docker networking?
Regards,
Andrew
Hi Andrew,
On Sat, Apr 25, 2020 at 11:28:52AM +0100, Andrew Back wrote:
I appreciate that the Osmocom Docker configurations exist principally for automated testing, so this may not make sense, but I wondered whether builds are published to a registry?
no, they are intentionally not. As long as the container community does not provide proper tools helping to generate license compliant binary builds, I don't think we want to (or anyone should!) publicly release binary container images/layers.
Docker and other related entities have been ignoring this for years, despite lots of voices raising this topic. See for example https://lwn.net/Articles/752982/
How, for example, would you comply with copyleft-style licenses such as LGPL/GPL/AGPL, i.e. automate the process of providing the "complete corresponding source code" for all the binaries that go into a Docker image? Particularly if you are using upstream images built by other people who don't provide you with this information?
You would basically have to recursively collect all the Debian source packages of both your own layer and all the layers you inherited from. I'm not aware of any tooling one could use to automate that process.
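Even for a single Debian-based layer, the closest one could get manually is probably something along these lines (a rough sketch only; the image name is a placeholder, and it assumes deb-src entries matching the exact package versions are still available, which is exactly what tends to disappear over time):

    # list source package + version for everything installed in the image
    docker run --rm some/image \
        dpkg-query -W -f='${source:Package}=${source:Version}\n' | sort -u > sources.lst
    # on a host with the matching deb-src entries enabled, fetch each source package
    while read -r src; do
        apt-get source --download-only "$src"
    done < sources.lst

And even that only covers the Debian packages of one layer, not anything built from source in the Dockerfile, nor whatever went into the layers you inherited.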
The Makefile in the make directory of the docker-playground repo sets docker.io as the registry and has a target for docker push, and there are OBS Release.key files in the project sub-directories, but I couldn't find container builds being published anywhere.
In my opinion, the only way to safely distribute container images is to not distribute them, but only distribute the Dockerfile and have users build the images from there.
So we do have our own internal Docker registry that is used within the Osmocom CI and testing, exactly to prevent any potentially license-noncompliant public distribution.
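For completeness, building locally really is just a checkout plus a build of the project you need, roughly like this (the sub-directory and image name are examples, not a statement about the current repo layout):

    git clone <docker-playground repo URL>   # e.g. from git.osmocom.org
    cd docker-playground/osmo-stp-master     # pick the project sub-directory you need
    docker build -t osmo-stp-master .        # or use the make targets provided there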
I also saw the comments from 2017 on Docker's shortcomings; am I right in assuming SCTP works OK now in Docker networking?
I don't know what the status is. In general, I consider the standard way Docker deals with networking completely broken. The incredible ignorance of assuming the internet only has TCP and UDP is a 1980s assumption. Doing NAT everywhere and programmatically inserting iptables rules from the docker daemon is also not quite elegant.
For any kind of Docker use for cellular network infrastructure the only option is to use docker networks (i.e. virtual ethernet segments) and have static per-container IP addresses. So you don't ever use any of the EXPOSE / NAT / ... stuff.
Beware that Docker also doesn't fully support this scenario for any reasonable production deployment, as you cannot have a container be part of multiple docker networks while having reliable network device names. Allegedly eth0..N are now assigned in "alphabetically increasing order of the docker-network name", which is also way too fragile from my point of view.
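For reference, the pattern looks roughly like this; names, subnets and addresses are of course just examples:

    # user-defined bridge networks with fixed subnets
    docker network create --subnet 172.20.0.0/24 net-a
    docker network create --subnet 172.20.1.0/24 net-b
    # fixed per-container address, and no -p/EXPOSE port publishing at all
    docker run -d --name example-ne --network net-a --ip 172.20.0.10 some/image
    # attach a second network; which one ends up as eth0 vs eth1 inside the
    # container depends on the alphabetical ordering mentioned above
    docker network connect --ip 172.20.1.10 net-b example-ne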
So in the end, my observation still remains: One cannot use Docker in cellular network infrastructure, at least not in 2G, 3G or 4G, where everything relies on static IP addresses and SCTP.
It's enough for test automation, but certainly not for production use.
Regards, Harald
A small follow-up, before this gets too off-topic:
On Sat, Apr 25, 2020 at 04:30:44PM +0200, Harald Welte wrote:
Docker and other related entities have been ignoring this for years, despite lots of voices raising this topic. See for example https://lwn.net/Articles/752982/
Just yesterday, the Linux Foundation also published the following piece on this topic: https://www.linuxfoundation.org/blog/2020/04/docker-containers-what-are-the-...
Quote:
How do we collect and publish the required source code? [...] This is currently an unsolved problem.
Seeing this seven years after the initial release of Docker clearly sends one message: They don't give a s**t about enabling their users to comply with open source licenses.
Regards, Harald
Hi Harald,
On 25/04/2020 15:30, Harald Welte wrote:
Hi Andrew,
On Sat, Apr 25, 2020 at 11:28:52AM +0100, Andrew Back wrote:
I appreciate that the Osmocom Docker configurations exist principally for automated testing, so this may not make sense, but I wondered whether builds are published to a registry?
no, they are intentionally not. As long as the container community does not provide proper tools helping to generate license compliant binary builds, I don't think we want to (or anyone should!) publicly release binary container images/layers.
Not even if manual efforts are made to provide such information in, for example, a README file that is then published via the registry? I get that pointing at parent images/layers and the additional packages and sources pulled in via the Dockerfile is not ideal.
Docker and other related entities have been ignoring this for years, despite lots of voices raising this topic. See for example https://lwn.net/Articles/752982/
Right, so I think I recall some of these discussions the first time around and had assumed the issue had been addressed in some way.
How, for example, would you comply with copyleft-style licenses such as LGPL/GPL/AGPL, i.e. automate the process of providing the "complete corresponding source code" for all the binaries that go into a Docker image? Particularly if you are using upstream images built by other people who don't provide you with this information?
With many layers and inheritance I can see how this could be problematic without tooling that provides comprehensive coverage, but would saying e.g. Debian-based image + src1 + src2 etc. not be sufficient?
Debian publish official images to Docker Hub, so perhaps I would benefit from taking a look at their approach to this also.
You would basically have to recursively collect all the Debian source packages of both your own layer and all the layers you inherited from. I'm not aware of any tooling one could use to automate that process.
The Makefile in the make directory of the docker-playground repo sets docker.io as the registry and has a target for docker push, and there are OBS Release.key files in the project sub-directories, but I couldn't find container builds being published anywhere.
In my opinion, the only way to safely distribute container images is to not distribute them, but only distribute the Dockerfile and have users build the images from there.
So we do have our own internal Docker registry that is used within the Osmocom CI and testing, exactly to prevent any potentially license-noncompliant public distribution.
I also saw the comments from 2017 on Docker's shortcomings; am I right in assuming SCTP works OK now in Docker networking?
I don't know what the status is. In general, I consider the standard way Docker deals with networking completely broken. The incredible ignorance of assuming the internet only has TCP and UDP is a 1980s assumption. Doing NAT everywhere and programmatically inserting iptables rules from the docker daemon is also not quite elegant.
The documentation suggests overlay networks support SCTP now:
https://docs.docker.com/network/overlay/#operations-for-standalone-container...
Not sure about bridge networks.
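If it is of any use, a quick way to check a given setup might be something like the following between two containers that have ncat installed (addresses and the port are arbitrary examples, and the host kernel also needs SCTP support):

    # container A: listen on an SCTP port (36412 just because it is the familiar S1AP port)
    ncat --sctp -l 36412
    # container B: connect to A's address over SCTP
    echo test | ncat --sctp 172.20.0.10 36412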
I can see how extensive use of NAT may be problematic.
For any kind of Docker use for cellular network infrastructure the only option is to use docker networks (i.e. virtual ethernet segments) and have static per-container IP addresses. So you don't ever use any of the EXPOSE / NAT / ... stuff.
Beware that Docker also doesn't fully support this scenario for any reasonable production deployment, as you cannot have a container be part of multiple docker networks while having reliable network device names. Allegedly eth0..N are now assigned in "alphabetically increasing order of the docker-network name", which is also way too fragile from my point of view.
So in the end, my observation still remains: One cannot use Docker in cellular network infrastructure, at least not in 2G, 3G or 4G, where everything relies on static IP addresses and SCTP.
Various vendors seem to be using Docker in NFV/SDN architectures, and it would be nice to think there will be a way forward that does not leave us with a dichotomy whereby other infrastructure is part of that world and Osmocom is excluded, be it through compliance tooling and best practices plus Docker improvements, or whatever it takes.
It's enough for test automation, but certainly not for production use.
Thanks for the detailed reply, it's appreciated.
Regards,
Andrew
Hi Andrew,
On Sat, Apr 25, 2020 at 04:48:13PM +0100, Andrew Back wrote:
The documentation suggests overlay networks support SCTP now; not sure about bridge networks.
Well, as soon as you have an actual Layer3 or Layer2 network, for sure you can use SCTP.
Various vendors seem to be using Docker in NFV/SDN architectures
I am convinced that they must either be building their own "network drivers/plugins" for Docker, or they cannot implement 2G/3G 3GPP interfaces as we know them. This is not an Osmocom-specific topic.
would be nice to think there will be a way forward that does not leave us with a dichotomy whereby other infrastructure is part of that world and Osmocom is excluded, be it through compliance tooling and best practices plus Docker improvements, or whatever it takes.
Honestly, I seriously don't think that Docker is the right tool. It is actively hostile towards anyone doing anything serious in terms of networking. It is already an enormous nightmare to model the most trivial of networking topologies. It is centered around DNS and dynamic IP addresses, while classic 2G/3G is based around static IP addresses everywhere. You cannot have SS7/SIGTRAN on a container that gets a dynamic IP address via DHCP.
I could imagine that other container runtimes with a less monolithic and more modular approach might be more amenable to the requirements of 3GPP networks, but I lack any hands-on experience with that. Docker is just trying to solve too many things at the same time, without giving users the kind of tools and access to the underlying infrastructure they would need. Just take the example of there being no way to get persistent network device naming: https://github.com/moby/moby/issues/25181 https://github.com/docker/compose/issues/4645
Similarly, there is no functionality in Docker to move a physical network device into a specific container. People have built kludges/workarounds for that (https://github.com/jpetazzo/pipework), but that's not Docker itself.
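What pipework and similar hacks essentially do under the hood is drive iproute2 against the container's network namespace; roughly like this for a physical NIC (container name, device and address are placeholders):

    # find the container's init PID so its network namespace can be addressed
    pid=$(docker inspect -f '{{.State.Pid}}' example-container)
    # move the physical NIC into that namespace and configure it from the host
    sudo ip link set eth1 netns "$pid"
    sudo nsenter -t "$pid" -n ip addr add 10.0.1.2/24 dev eth1
    sudo nsenter -t "$pid" -n ip link set eth1 up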
There are so many things people have been able to do for decades with the Linux network stack on bare metal, and which they can still do with Linux e.g. in lxc containers, that I assume no Docker developer has even imagined possible. It's like you used to have a full mechanical workshop available, and Docker then arbitrarily limits this to a set of three Phillips screwdrivers because that's what most people need ;)
Just because some people in the industry have heard some buzzwords and think that Docker is the right tool to virtualize 3GPP network elements doesn't mean that we have to agree :)
Sorry, I've already wasted way too many days of my life trying to work around random arbitrary constraints of docker networking :/
On Sat, Apr 25, 2020 at 04:48:13PM +0100, Andrew Back wrote:
With many layers and inheritance I can see how this could be problematic without tooling that provides comprehensive coverage, but would saying e.g. Debian-based image + src1 + src2 etc. not be sufficient?
Do you have a tool to provide the "complete corresponding source code" for a given "Debian-based image"? Specifically, a tool capable of producing this source code for every one of the container image builds you may have created during the past 3 years (you may not even have that binary image around anymore)?
Because that's what license compliance takes: three years after the last distribution is the period during which you must provide the complete and corresponding sources.
So until there is easy tooling to automate this, I don't think there is a way to safely provide container images.
Debian publish official images to Docker Hub, so perhaps I would benefit from taking a look at their approach to this also.
It would be interesting, indeed.
However, even if Debian provided the CCS (complete + corresponding source) for every one of their image/layer builds, passing along a third party's (such as Debian's) written offer to provide the source is not something we can do, at least not from servers operated by and paid for by sysmocom, a for-profit entity. The GPLv2 Section 3c exception for noncommercial distribution hence doesn't apply, i.e. we would be directly responsible for providing the complete and corresponding source for each build of each image/layer we ever publish.
Osmocom is a project about open source mobile communications software, and we are creating + distributing our own software. We provide source and binary packages for a variety of distributions/releases, instructions on how to build from source, as well as Dockerfiles and Ansible playbooks to automatically build/install the software. I think that shipping container or VM images is out of scope. We have plenty of issues within the core scope of our projects to work on (1004 open issues on osmocom.org at the time of this writing), and I'd suggest we focus on that.
Regards, Harald