It was good meeting you in person, André :)
It was a pleasure to meet you all and to speak with you in person about jenkins.osmocom! I am looking forward to seeing you all at OsmoCon :)
How about each built library extends a docker image produced by a previous dependency's last successful build? e.g. if libosmocore rebuilds, that updates the libosmocore docker image, then libosmo-abis is triggered and adds itself, producing a last-successfully-built libosmo-abis docker image, and so on, down to openbsc re-using a docker image that already holds all its dependencies?
In general that's my thought too, but I am afraid of the complexity of handling/maintaining all these build images. Afaics right now only two docker images have to be (re)built (osmobuild:amd64/32bit); when using docker images and their inheritance model for dependency handling, one would end up building a lot more.
I need to tinker around more to understand better.
Just to be clear, I'm merely one person here, just because I like the sound of Job-DSL, it doesn't mean everyone else does. I think it would be good to see an example of an openbsc build using it and then let everyone decide.
No worry, right now I am just sharing my experience about the topic "CI as Code" with you. Whether jobDSL will be used by the osmocom projects is a decision taken by you, not me :)
That's what has been created so far:
osmo-seed [2]: holds an inline jobDSL script for [3][4]; normally this job polls a repo holding all job definitions and deploys them after a change has been detected.
openBSC_jobDSL [3]: the openBSC build job as a script deployed by the seed. It basically looks the same, and the configuration can still be changed in the web UI. Manual changes will be marked [4].
Furthermore, this wiki [5] is a good entry point to jobDSL. Afterwards, these sites [6][7] are helpful, imho, for a first and second hands-on.
Why is this build dependency not part of the Docker image?
That's a good question, it should be, right?
In general yes, but huge dependencies, e.g. an Android SDK, are often mounted from the local file system and not baked into a docker image, so there's no clear general answer.
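As an illustration of that pattern, a heavyweight dependency can be bind-mounted read-only at container start instead of being baked into the image. The paths and image name below are made up for the sake of the example:

```
# Hypothetical: mount a large SDK from the host rather than baking it in
docker run --rm \
  -v "$HOME/android-sdk:/opt/android-sdk:ro" \
  some-build-image:latest ./build.sh
```

The trade-off is image size and rebuild time versus reproducibility: what's mounted from the host is not captured in the image.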
May I ask how you rebuild your docker images? I'd assume that an image is rebuilt after a patch submission to osmo-ci which introduces a change to its Dockerfile. At least according to the Dockerfile attached in this mail thread, in which the osmo-ci repo is baked in.
I've attached the Dockerfile config used to generate the docker image for openbsc.
Thanks a lot for your support and the Dockerfile. I'm just wondering why the osmo-ci repo [8] doesn't hold the latest state of the Dockerfile that is used for builds?
And it should probably use the osmo-ci git repos instead of echoing the osmo-deps.sh to /usr/local/bin manually. Actually, our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet. These things could be streamlined.
hehe, I noticed that osmo-ci must live in your slave's home directory. I mounted each script to /build_bin/$script to work around it.
OT: does your nick "Blobb" indicate affinity to Binary Large Ob(b)jects?... like docker images? ;)
Hehe, this assumption is made by a lot of coders/techies, but it originates from the game Blobby Volley [1]. It may sound weird, but I am called blobb(y) because I look similar to the characters in the game. Maybe it's time to cut another letter at the end? ;)
blobb
[1] https://sourceforge.net/projects/blobby/
[2] https://jenkins.blobb.me/view/osmocom/job/omso-seed/ (add "configure-readonly/" to read config)
[3] https://jenkins.blobb.me/view/osmocom/job/openBSC_jobDSL/
[4] https://jenkins.blobb.me/view/osmocom/job/openBSC_jobDSL_manually_changed/
[5] https://github.com/jenkinsci/job-dsl-plugin/wiki
[6] https://github.com/sheehan/job-dsl-gradle-example
[7] http://job-dsl.herokuapp.com/
[8] http://git.osmocom.org/osmo-ci/tree/docker/Dockerfile.deb8_amd64
2017-03-08 17:48 GMT+01:00 Neels Hofmeyr nhofmeyr@sysmocom.de:
FYI, Blobb visited us in the sysmocom office yesterday, and we had a personal conversation on the jenkins build setup. It was good meeting you in person, André :) For the sake of this ML, let me answer briefly here as well.
On Mon, Mar 06, 2017 at 04:15:20PM +0100, André Boddenberg wrote:
When we speak about the "easier docker integration", we mean that everyone would be happy if not every matrix-configuration axis builds all deps like libosmocore, libosmo-netif etc pp, right?
We're rebuilding everything simply because no-one has found time to make this more efficient yet. There is no need to rebuild all of libosmocore through to libsmpp34 for every matrix cell.
It would also be useful to build all libosmo* only once for each update of the master branch, and in the dependent builds (openbsc, openbsc-gerrit, ...) always re-use the last successfully built binaries of the dependencies.
To address this issue I'd like to create a local temporary docker image which extends osmocom:amd64 and holds all deps. This image can then be used for all openBSC builds, and the temporary local docker image can be removed as a "post-build" step (which should get triggered regardless of the build result).
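A rough sketch of that build/cleanup cycle as Jenkins build steps — the image tag, Dockerfile name and variables below are made up, not an existing setup:

```
# Build step: bake the current deps into a throwaway image
# on top of osmocom:amd64 (hypothetical tag and Dockerfile)
docker build -t osmocom-deps:tmp-$BUILD_NUMBER -f Dockerfile.deps .

# ... run the openBSC build(s) inside osmocom-deps:tmp-$BUILD_NUMBER ...

# Post-build step, run regardless of the build result:
docker rmi -f osmocom-deps:tmp-$BUILD_NUMBER || true
```

Tagging with $BUILD_NUMBER keeps concurrent builds from clobbering each other's temporary image.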
How about each built library extends a docker image produced by a previous dependency's last successful build? e.g. if libosmocore rebuilds, that updates the libosmocore docker image, then libosmo-abis is triggered and adds itself, producing a last-successfully-built libosmo-abis docker image, and so on, down to openbsc re-using a docker image that already holds all its dependencies?
Even though our libraries don't necessarily have a linear dependency chain, it could make sense to artificially define such a linear line of dependencies (e.g. even though libsmpp doesn't need libosmo-abis, we make libsmpp build on top of the libosmo-abis docker to collect it into the dependencies docker image). But, then again, if libosmocore succeeded and libosmo-abis failed, we would omit a libsmpp build for no reason. So maybe it would also make sense to somehow build the non-dependent libraries independently and later combine them into a joint docker image?? ...I would leave that up to your choices, just brainstorming...
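For what it's worth, such an artificial linear order can be derived mechanically from the dependency pairs with tsort (coreutils). The edge list below is hypothetical, just for illustration — the real edges would come from the actual build deps:

```shell
# "A B" means A must be built before B; tsort prints one valid
# linear build order, one name per line.
order=$(printf '%s\n' \
  "libosmocore libosmo-abis" \
  "libosmocore libsmpp34" \
  "libosmo-abis libosmo-netif" \
  "libosmo-netif openbsc" \
  "libsmpp34 openbsc" | tsort)
echo "$order"
```

With these (made-up) edges, libosmocore necessarily comes out first and openbsc last; the independent libraries in the middle can land in any valid order.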
A slightly different topic, did you think about pushing your docker images to hub.docker.com [2]?
In my opinion this would be a step forward towards a transparent and locally reproducible build environment. Afair Harald was quite clear about this requirement?
No idea / not aware of it / no opinion so far :) I was once invited to design a logo for the reproducible builds project, but haven't found time for that (yet?). That's about all I know about the topic...
and "Wiki" is an ancient African word for "hopelessly outdated"...
Thanks for pointing this out; usually I hesitate to correct something, to avoid being called nit-picky.
My middle name is "Nit-pick" :P It's sometimes really hard for me to ignore a small detail that is obviously wrong for the benefit of moving ahead faster...
In the Wiki's case, there is no drawback of correcting mistakes -- no noise in code review or mailing lists is generated, so if you have the time, just do it, as much as you like.
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
This sounds like we want to use it!
Alright, so I will work on the following migrations:
- "as is" -> "JobDSL"
- "as is" -> "Pipeline" (probably just for Max)
hehe
Just to be clear, I'm merely one person here, just because I like the sound of Job-DSL, it doesn't mean everyone else does. I think it would be good to see an example of an openbsc build using it and then let everyone decide.
so far that doesn't look like an improvement over

  #!/bin/sh
  ./contrib/jenkins.sh
Agreed, the biggest advantages of Pipeline are:
- the "eye-candy" (but who cares? :)
- easy artifact sharing across different steps which run on different nodes (but afaics you don't even use the "Copy Artifacts Plugin", which makes this argument pointless). But if we introduce docker images as artifacts, this could become useful?
- the duration of sub-steps can be easily seen (as mentioned by Max), but this can be achieved with some simple python scripts/plugins + influxDB + grafana as well.
I personally mostly care about the overall time a build takes, so that I get V+1/V-1 on my gerrit patches faster.
Neels, can you please help me fixing the following build error [3]:
01:59:20 checking dbi/dbd.h usability... no
01:59:20 checking dbi/dbd.h presence... no
01:59:20 checking for dbi/dbd.h... no
01:59:20 configure: error: DBI library is not installed
Install libdbi-dev and libdbd-sqlite3.
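i.e., in the image's Dockerfile (rather than at `docker run` time), something along the lines of:

```
RUN apt-get update && apt-get install -y libdbi-dev libdbd-sqlite3
```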
I already added the following package to the image, by:
docker run ........ osmocom:amd64 /bin/bash -c sudo apt-get install -y r-cran-dbi; /build/contrib/jenkins.sh
r-cran-dbi?? what is that? :)
Why is this build dependency not part of the Docker image?
That's a good question, it should be, right? According to the Dockerfile, libdbi-dev is actually installed there. So is libdbd-sqlite3.
Hard to say why configure tells you that that is missing. It should work.
I've attached the Dockerfile config used to generate the docker image for openbsc. There's some stuff in there not needed for openbsc in particular, e.g. the smalltalk things near the end. And it should probably use the osmo-ci git repos instead of echoing the osmo-deps.sh to /usr/local/bin manually. Actually, our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet. These things could be streamlined.
docker run --rm=true -e HOME=/build -e MAKE=make -e PARALLEL_MAKE="$PARALLEL_MAKE" \
  -e IU="$IU" -e SMPP="$SMPP" -e MGCP="$MGCP" -e PATH="$PATH:/build_bin" \
  -e OSMOPY_DEBUG_TCP_SOCKETS="1" -w /build -i -u build -v "$PWD:/build" \
  -v "$HOME/bin:/build_bin" osmocom:amd64 /build/contrib/jenkins.sh
OT: does your nick "Blobb" indicate affinity to Binary Large Ob(b)jects? ... like docker images? ;)
~N
--
- Neels Hofmeyr nhofmeyr@sysmocom.de http://www.sysmocom.de/
=======================================================================
- sysmocom - systems for mobile communications GmbH
- Alt-Moabit 93
- 10559 Berlin, Germany
- Sitz / Registered office: Berlin, HRB 134158 B
- Geschäftsführer / Managing Directors: Harald Welte