Hi Harald,
On 25/04/2020 15:30, Harald Welte wrote:
> Hi Andrew,
> On Sat, Apr 25, 2020 at 11:28:52AM +0100, Andrew Back wrote:
>> Appreciate that the Osmocom Docker configurations exist principally for
>> automated testing and here it may not make sense, but I wondered if
>> builds are published to a registry?
> no, they are intentionally not. As long as the container community does
> not provide proper tools helping to generate license-compliant binary
> builds, I don't think we want to (or anyone should!) publicly release
> binary container images/layers.
Not even if manual efforts are made to provide such information in, for
example, a README file that is then published via the registry? I get
that pointing at parent images/layers, plus additional packages and
sources pulled in via the Dockerfile, is not ideal.
> Docker and other related entities have been ignoring this for years,
> despite lots of voices raising this topic. See for example
> https://lwn.net/Articles/752982/
Right, so I think I recall some of these discussions the first time
around and had assumed the issue had been addressed in some way.
> How, for example, would you comply with copyleft-style licenses such as
> LGPL/GPL/AGPL and automate the process of providing the "complete
> corresponding source code" for all the binaries that go into a Docker
> image? Particularly if you are using upstream images that other people
> have built who don't provide you with this information?
With many layers and inheritance I can see how this could be problematic
without tooling that provides comprehensive coverage, but would stating
e.g. "Debian-based image + src1 + src2 etc." not be sufficient?
Debian publishes official images to Docker Hub, so perhaps I would
benefit from taking a look at their approach to this as well.
> You would basically have to recursively collect all the Debian source
> packages of both your layer, as well as all the layers you inherited
> from. I'm not aware of any tooling that one could use to automate
> that process.
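For a purely Debian-based image, the collection Harald describes could be
sketched roughly as below. This is only an illustration under assumptions
not from the thread: a hypothetical image name, and a Debian host with
deb-src entries enabled so that `apt-get source` can resolve the versions.

```shell
# Hypothetical sketch: enumerate every installed package (by source
# package and version) inside a Debian-based image, then download the
# matching source packages on the host.
IMAGE=debian:bullseye-slim   # placeholder image name

# List source-package=version pairs for everything installed in the image.
docker run --rm "$IMAGE" \
    dpkg-query -W -f='${source:Package}=${source:Version}\n' \
    | sort -u > image-sources.txt

# Fetch the corresponding source package for each entry
# (requires deb-src lines in the host's APT configuration).
while read -r pkg; do
    apt-get source --download-only "$pkg"
done < image-sources.txt
```

Even this only covers the final image contents; build-time dependencies
pulled in by intermediate layers would still need separate handling.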
>> The Makefile in the make project located in the docker-playground repo
>> sets docker.io as registry and has a target for docker push. Also see
>> OBS Release.key files in project sub-directories, but couldn't find
>> container builds being published anywhere.
> In my opinion, the only way to safely distribute container images is to
> not distribute them, but only distribute the Dockerfile and have users
> build the images from there.
> So we do have our own internal docker registry that is used within the
> Osmocom CI and testing, exactly to prevent any potentially license-
> incompliant public distribution.
>> Also saw the comments from 2017 on Docker shortcomings and assuming
>> SCTP works OK now in Docker networking?
> I don't know what the status is. In general, I consider the standard
> way docker deals with networking completely broken. The incredible
> ignorance of assuming the internet only has TCP and UDP is a 1980s
> assumption. Doing NAT everywhere and programmatically inserting
> iptables rules from the docker daemon is also not quite elegant.
The documentation suggests overlay networks support SCTP now:
https://docs.docker.com/network/overlay/#operations-for-standalone-containe…
Not sure about bridge networks.
I can see how extensive use of NAT may be problematic.
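For what it's worth, Docker has supported SCTP in port publishing since
18.03, so at least the basic mapping works. A minimal sketch (image name
and port are made up for illustration):

```shell
# Publish an SCTP port from a container to the host.
# Supported since Docker 18.03; "my-mme-image" and 36412 (S1AP's
# well-known port) are illustrative placeholders.
docker run --rm -p 36412:36412/sctp my-mme-image
```

Of course this still goes through the NAT/port-mapping machinery you
object to, rather than giving the container a routable address.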
> For any kind of Docker use for cellular network infrastructure the only
> option is to use docker networks (i.e. virtual ethernet segments) and
> have static per-container IP addresses. So you don't ever use any of
> the EXPOSE / NAT / ... stuff.
> Beware that Docker also doesn't fully support this scenario for any
> reasonable production deployment, as you cannot have a container being
> part of multiple docker networks while having reliable network device
> names. Allegedly the eth0..N are now assigned in "alphabetically
> increasing order of the docker-network name", which is also way too
> fragile from my point of view.
> So in the end, my observation still remains: One cannot use Docker in
> cellular network infrastructure, at least not in 2G, 3G or 4G, where
> everything relies on static IP addresses and SCTP.
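The static-address approach you describe can at least be expressed with
stock Docker, as in this sketch (network name, subnet, container name and
image are all illustrative, not from this thread):

```shell
# Create a user-defined network with a fixed subnet, then pin each
# container to a known address on it.
docker network create --subnet 172.30.0.0/24 cellnet
docker run -d --name hnbgw --network cellnet --ip 172.30.0.10 osmocom/example
# No EXPOSE/-p needed: peers on cellnet reach it directly at 172.30.0.10.
```

Though as you say, this doesn't solve the unreliable eth0..N ordering
once a container joins multiple networks.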
Various vendors seem to be using Docker in NFV/SDN architectures, and it
would be nice to think there is a way forward that does not end in a
dichotomy where other infrastructure is part of that world and Osmocom is
excluded, be it through compliance tooling and best practices, plus
Docker improvements, or whatever it takes.
> It's enough for test automation, but certainly not for production use.
Thanks for the detailed reply, it's appreciated.
Regards,
Andrew