FYI, Blobb visited us in the sysmocom office yesterday, and we had a personal conversation on the jenkins build setup. It was good meeting you in person, André :) For the sake of this ML, let me answer briefly here as well.
On Mon, Mar 06, 2017 at 04:15:20PM +0100, André Boddenberg wrote:
When we speak about the "easier docker integration", we mean that everyone would be happy if not every matrix-configuration axis builds all deps like libosmocore, libosmo-netif and so on, right?
We're rebuilding everything simply because no-one has found time to make this more efficient yet. There is no need to rebuild all of libosmocore through to libsmpp34 for every matrix cell.
It would also be useful to build all libosmo* only once for each update of the master branch, and in the dependent builds (openbsc, openbsc-gerrit, ...) always re-use the last successfully built binaries of the dependencies.
To address this issue I'd like to create a local temporary docker image which extends osmocom:amd64 and holds all deps. This image can then be used for all openBSC builds, and the temporary local docker image can be removed as a "post-build" step (which should get triggered regardless of the build result).
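A minimal sketch of that idea in shell; the image tag, the exact dep list and the build steps inside the Dockerfile are my assumptions, not a tested recipe:

  # 1) bake all deps into a throwaway image on top of osmocom:amd64
  docker build -t osmocom:amd64-deps - <<'EOF'
  FROM osmocom:amd64
  # assumes git + autotools + gcc are already present in the base image
  RUN git clone git://git.osmocom.org/libosmocore \
      && cd libosmocore \
      && autoreconf -fi && ./configure && make install && ldconfig
  # ...same for libosmo-abis, libosmo-netif, libosmo-sccp, libsmpp34...
  EOF

  # 2) every matrix cell re-uses the image and only builds openbsc itself
  #    (assuming jenkins.sh is taught to skip already-installed deps)
  docker run --rm -v "$PWD:/build" -w /build osmocom:amd64-deps ./contrib/jenkins.sh

  # 3) "post-build" step, to be run regardless of the build result
  docker rmi osmocom:amd64-deps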
How about each built library extends a docker image produced by a previous dependency's last successful build? e.g. if libosmocore rebuilds, that updates the libosmocore docker image, then libosmo-abis is triggered and adds itself, producing a last-successfully-built libosmo-abis docker image, and so on, down to openbsc re-using a docker image that already holds all its dependencies?
Even though our libraries don't necessarily have a linear dependency chain, it could make sense to artificially define such a linear line of dependencies (e.g. even though libsmpp doesn't need libosmo-abis, we make libsmpp build on top of the libosmo-abis docker image to collect it into the dependencies docker image). But, then again, if libosmocore succeeded and libosmo-abis failed, we would omit a libsmpp build for no reason. So maybe it would also make sense to somehow build the non-dependent libraries independently and later combine them into a joint docker image? ...I would leave that up to you, just brainstorming...
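For example (all image tags hypothetical, untested), the libosmo-abis job could do something like:

  # extend the image that the last successful libosmocore build produced;
  # the tag is only updated if this build, and thus the library, succeeds
  docker build -t osmocom:amd64-libosmo-abis - <<'EOF'
  FROM osmocom:amd64-libosmocore
  RUN git clone git://git.osmocom.org/libosmo-abis \
      && cd libosmo-abis \
      && autoreconf -fi && ./configure && make install && ldconfig
  EOF
  # on success, trigger the next job in the chain (libosmo-netif, ...,
  # down to openbsc), which builds FROM osmocom:amd64-libosmo-abis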
A slightly different topic: did you think about pushing your docker images to hub.docker.com?
In my opinion this will be a step forward to a transparent and locally reproducible build environment. AFAIR, Harald was quite clear about this requirement?
No idea / not aware of it / no opinion so far :) I was once invited to design a logo for the reproducible builds project, but haven't found time for that (yet?). That's about all I know about the topic...
and "Wiki" is an ancient African word for "hopelessly outdated"...
Thanks for pointing this out; usually I hesitate to correct something, to avoid being called nit-picky.
My middle name is "Nit-pick" :P It's sometimes really hard for me to ignore a small detail that is obviously wrong for the benefit of moving ahead faster...
In the Wiki's case, there is no drawback to correcting mistakes -- no noise in code review or mailing lists is generated, so if you have the time, just do it, as much as you like.
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
This sounds like we want to use it!
Alright, so I will work on the following migrations:
- "as is" -> "JobDSL"
- "as is" -> "Pipeline" (probably just for Max)
hehe
Just to be clear, I'm merely one person here, just because I like the sound of Job-DSL, it doesn't mean everyone else does. I think it would be good to see an example of an openbsc build using it and then let everyone decide.
so far that doesn't look like an improvement of

  #!/bin/sh
  ./contrib/jenkins.sh
Agreed, the biggest advantages of Pipeline are:

- the "eye-candy" (but who cares? :)
- easy artifact sharing across different steps, which run on
  different nodes (but AFAICS you don't even use the "Copy Artifacts
  Plugin", which makes this argument pointless). But if we introduce
  docker images as artifacts, this could become useful?
- the duration of sub-steps can be easily seen (as mentioned by Max),
  but this can also be achieved with some simple python
  scripts/plugins + InfluxDB + Grafana; see the sketch below.
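A rough shell sketch of the latter, assuming an InfluxDB 1.x instance (host name and database name are made up) fed from a post-build shell step:

  # push the build duration to InfluxDB's HTTP write API in line protocol;
  # $JOB_NAME is set by jenkins, $SECONDS by the shell itself
  curl -XPOST "http://influxdb.example.com:8086/write?db=jenkins" \
    --data-binary "build_duration,job=$JOB_NAME value=$SECONDS"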
I personally mostly care about the overall time a build takes, so that I get V+1/V-1 on my gerrit patches faster.
Neels, can you please help me fix the following build error [3]:
01:59:20 checking dbi/dbd.h usability... no
01:59:20 checking dbi/dbd.h presence... no
01:59:20 checking for dbi/dbd.h... no
01:59:20 configure: error: DBI library is not installed
Install libdbi-dev and libdbd-sqlite3.
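For example, inside the image or the running container:

  sudo apt-get install -y libdbi-dev libdbd-sqlite3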
I already added the following package to the image by running:

  docker run ........ osmocom:amd64 /bin/bash -c "sudo apt-get install -y r-cran-dbi; /build/contrib/jenkins.sh"
r-cran-dbi?? what is that? :) (Looks like the R language's DBI bindings, not the C libdbi that configure is asking for.)
Why is this build dependency not part of the Docker image?
That's a good question, it should be, right? According to the Dockerfile, libdbi-dev is actually installed there, and so is libdbd-sqlite3. Hard to say why configure tells you it is missing; it should work.
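If you want to double-check, something like this (using the image tag from the docker run line below) should list the packages:

  # verify that the packages really are in the image
  docker run --rm osmocom:amd64 dpkg -l libdbi-dev libdbd-sqlite3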
I've attached the Dockerfile config used to generate the docker image for openbsc. There's some stuff in there not needed for openbsc in particular, e.g. the smalltalk things near the end. And it should probably use the osmo-ci git repos instead of echoing the osmo-deps.sh to /usr/local/bin manually.

Actually, our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet. These things could be streamlined.
docker run --rm=true -e HOME=/build -e MAKE=make -e PARALLEL_MAKE="$PARALLEL_MAKE" \
    -e IU="$IU" -e SMPP="$SMPP" -e MGCP="$MGCP" -e PATH="$PATH:/build_bin" \
    -e OSMOPY_DEBUG_TCP_SOCKETS="1" -w /build -i -u build -v "$PWD:/build" \
    -v "$HOME/bin:/build_bin" osmocom:amd64 /build/contrib/jenkins.sh
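As for the streamlining, one possible direction (untested; the osmo-ci repo URL and script path are my assumptions) would be to clone osmo-ci in the Dockerfile instead of bind-mounting $HOME/bin:

  # Dockerfile fragment: replace the manual echo of osmo-deps.sh and the
  # /build_bin bind-mount with a clone of osmo-ci baked into the image
  RUN git clone git://git.osmocom.org/osmo-ci /tmp/osmo-ci \
      && install -m 0755 /tmp/osmo-ci/scripts/osmo-deps.sh /usr/local/bin/ \
      && rm -rf /tmp/osmo-ci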
OT: does your nick "Blobb" indicate affinity to Binary Large Ob(b)jects? ... like docker images? ;)
~N