Hi,
So, what would be the actual benefits?
- complete job config in git? (to be confirmed)
Just to clarify, the Pipeline plugin on its own doesn't provide complete config handling in git. This means only the code shown in image [3] lives in git; configurations within the "General" or "Build Triggers" tab will still be handled in the web-ui.*** For the sake of completeness, the JobDSL plugin [2] has to be mentioned. This plugin allows putting the entire job configuration into git and deploys jobs without prior manual setup in the web-ui.
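For illustration, a minimal Job DSL seed script could look like this (job name, repo URL and build step are made-up examples, not the actual osmocom config):

```groovy
// Hypothetical Job DSL seed script: defines a complete freestyle job in code,
// so the whole job configuration can live in git instead of the web-ui.
job('openBSC_example') {
    scm {
        git('git://git.osmocom.org/openbsc')   // example repo URL
    }
    triggers {
        scm('H/5 * * * *')                     // poll SCM roughly every 5 minutes
    }
    steps {
        shell('./contrib/jenkins.sh')          // the usual build entry point
    }
}
```

A seed job running this script creates/updates the "openBSC_example" job automatically, with no manual web-ui setup needed.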
- faster because less things have to be rebuilt? (to be confirmed)
Somehow I doubt that this improvement depends on the Pipeline plugin, but I know too few details to have an opinion based on facts. :)
- faster because easily parallelizable/easier integration with docker? (to be confirmed)
first point: e.g. Pipeline can trigger "downstream projects/stages" after one specific part of a parallel block / matrix-configuration project has finished.
second point: Pipeline brings a docker wrapper, so one doesn't have to pass a single build script holding the entire build sequence when invoking the container. You can invoke as many scripts as you want within the docker wrapper. (no black-box god-script :)
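To sketch both points (hedged, not tested against the actual osmocom setup; the script and job names below are examples only), a scripted Pipeline can run several separate scripts inside the docker wrapper and fan out parallel branches:

```groovy
// Hypothetical scripted Pipeline sketch using the Docker Pipeline plugin.
node {
    checkout scm
    docker.image('osmocom:amd64').inside {
        // several small scripts instead of one black-box god-script
        sh './contrib/prepare.sh'      // example script name
        sh './contrib/jenkins.sh'
    }
    // fan out independent variants; each branch can trigger its own
    // downstream steps as soon as it finishes
    parallel(
        smpp: { build job: 'openbsc-smpp' },   // example job names
        iu:   { build job: 'openbsc-iu' }
    )
}
```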
If dr.blobb and/or Max could clarify these points, that would be great (though I guess they will clarify in depth only after actual trials). First tests could run on a privately set-up Jenkins ... dr.blobb?
For sure, let's use this Jenkins instance [1] for trials. Everyone is welcome to sign up and send me a mail to get the necessary permissions.
I'll start migrating the OpenBSC build pipeline to Pipeline and will keep you updated. Is this mailing list the appropriate communication channel?
Is there already an osmocom CI wiki page which I overlooked? I'd be happy to contribute!
~André
*** Some "web-ui configurations" can be injected from the jenkinsfile, though.
[1] https://jenkins.blobb.me/view/osmocom/
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
[2.5] https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.DslF...
[3] http://tinyurl.com/zbavrtp
On Sun, Mar 05, 2017 at 10:03:20PM +0100, Klaus Müller wrote:
Hi,
So, what would be the actual benefits?
- complete job config in git? (to be confirmed)
Just to clarify, the Pipeline plugin on its own doesn't provide complete config handling in git. This means only the code shown in image [3] lives in git; configurations within the "General" or "Build Triggers" tab will still be handled in the web-ui.*** For the sake of completeness, the JobDSL plugin [2] has to be mentioned. This plugin allows putting the entire job configuration into git and deploys jobs without prior manual setup in the web-ui.
- faster because less things have to be rebuilt? (to be confirmed)
Somehow I doubt that this improvement depends on the Pipeline plugin, but I know too few details to have an opinion based on facts. :)
- faster because easily parallelizable/easier integration with docker? (to be confirmed)
first point: e.g. Pipeline can trigger "downstream projects/stages" after one specific part of a parallel block / matrix-configuration project has finished.
second point: Pipeline brings a docker wrapper, so one doesn't have to pass a single build script holding the entire build sequence when invoking the container. You can invoke as many scripts as you want within the docker wrapper. (no black-box god-script :)
This sounds to me like Pipelines don't actually provide any of the features I was dreaming about, except maybe easier docker integration?
Is this mailing list the appropriate communication channel?
yes.
Is there already an osmocom CI wiki page which I overlooked? I'd be happy to contribute!
well, searching osmocom.org for jenkins gave me
https://osmocom.org/projects/cellular-infrastructure/wiki/Jenkins
https://osmocom.org/projects/osmocom-servers/wiki/Jenkins_Node_Setup
https://osmocom.org/projects/cellular-infrastructure/wiki/Nightly_Builds
[...]
and "Wiki" is an ancient African word for "hopelessly outdated"... We appreciate every checked and/or corrected fact on those pages, and the Jenkins_Node_Setup page could probably be merged into the Jenkins one.
nice, a sleepy pipeline ;)
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
This sounds like we want to use it!
[2.5] https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.DslF... [3] http://tinyurl.com/zbavrtp
direct link without javascript cruft: https://media.comsysto.com/images/2016-11-23-jenkins-pipelines/new-jenkins-p...
so far that doesn't look like an improvement of
#!/bin/sh
./contrib/jenkins.sh
~N
This sounds to me like Pipelines don't actually provide any of the features I was dreaming about, except maybe easier docker integration?
first yes! second as you said -> maybe, still doubting it :)
When we speak about the "easier docker integration", we mean that everyone would be happy if not every matrix-configuration axis builds all deps like libosmocore, libosmo-netif etc pp, right?
To address this issue I'd like to create a local temporary docker image which extends osmocom:amd64 and holds all deps. This image can then be used for all openBSC builds, and the temporary local docker image can be removed as a "post-build" step (which should get triggered regardless of the build result).
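As a rough sketch of the temporary deps-image idea above (assuming a hypothetical Dockerfile.deps that extends osmocom:amd64 and builds all the libosmo* deps):

```groovy
// Hypothetical Pipeline sketch: build a temporary deps image, use it for the
// openBSC build, and remove it regardless of the build result.
node {
    checkout scm
    docker.build('osmocom-deps:amd64', '-f Dockerfile.deps .')
    try {
        docker.image('osmocom-deps:amd64').inside {
            sh './contrib/jenkins.sh'
        }
    } finally {
        // "post-build" cleanup, triggered regardless of the build result
        sh 'docker rmi osmocom-deps:amd64 || true'
    }
}
```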
A slightly different topic: did you think about pushing your docker images to hub.docker.com [2]?
In my opinion this would be a step forward to a transparent and locally reproducible build environment. Afair Harald was quite clear about this requirement?
and "Wiki" is an ancient African word for "hopelessly outdated"... We appreciate every checked and/or corrected facts on those pages, and the Jenkins_Node_Setup could probably be merged into the Jenkins one.
Thanks for pointing this out; usually I hesitate to correct something to avoid being called nit-picky.
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
This sounds like we want to use it!
Alright, so I will work on the following migrations:
- "as is" -> "JobDSL"
- "as is" -> "Pipeline" (probably just for Max)
so far that doesn't look like an improvement of
#!/bin/sh
./contrib/jenkins.sh
Agreed, the biggest advantages of Pipeline are:
- the "eye-candy" (but who cares? :)
- easy artifacts sharing across different steps, which run on different nodes (but afaics you don't even use the "Copy Artifacts Plugin", which makes this argument pointless).
- the duration of sub-steps can be easily seen (as mentioned by Max), but this can be achieved with some simple python scripts/plugins + InfluxDB + Grafana as well.
Neels, can you please help me fixing the following build error [3]:
01:59:20 checking dbi/dbd.h usability... no
01:59:20 checking dbi/dbd.h presence... no
01:59:20 checking for dbi/dbd.h... no
01:59:20 configure: error: DBI library is not installed
01:59:22 Build step 'Execute shell' marked build as failure
01:59:22 [WARNINGS] Skipping publisher since build result is FAILURE
01:59:22 Finished: FAILURE
I already added the following package to the image, by:
docker run ........ osmocom:amd64 /bin/bash -c "sudo apt-get install -y r-cran-dbi; /build/contrib/jenkins.sh"
but the r-cran-dbi package doesn't solve the issue, which package is needed?
Why is this build dependency not part of the Docker image?
André
[1] https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
[2] https://hub.docker.com/
[3] https://jenkins.blobb.me/job/openBSC_multi-configuration/label=master/36/con...
2017-03-06 14:53 GMT+01:00 Neels Hofmeyr nhofmeyr@sysmocom.de:
--
- Neels Hofmeyr nhofmeyr@sysmocom.de http://www.sysmocom.de/
=======================================================================
- sysmocom - systems for mobile communications GmbH
- Alt-Moabit 93
- 10559 Berlin, Germany
- Sitz / Registered office: Berlin, HRB 134158 B
- Geschäftsführer / Managing Directors: Harald Welte
FYI, Blobb visited us in the sysmocom office yesterday, and we had a personal conversation on the jenkins build setup. It was good meeting you in person, André :) For the sake of this ML, let me answer briefly here as well.
On Mon, Mar 06, 2017 at 04:15:20PM +0100, André Boddenberg wrote:
When we speak about the "easier docker integration", we mean that everyone would be happy if not every matrix-configuration axis builds all deps like libosmocore, libosmo-netif etc pp, right?
We're rebuilding everything simply because no-one has found time to make this more efficient yet. There is no need to rebuild all of libosmocore through to libsmpp34 for every matrix cell.
It would also be useful to build all libosmo* only once for each update of the master branch, and in the dependent builds (openbsc, openbsc-gerrit, ...) always re-use the last successfully built binaries of the dependencies.
To address this issue I'd like to create a local temporary docker image which extends osmocom:amd64 and holds all deps. This image can then be used for all openBSC builds, and the temporary local docker image can be removed as a "post-build" step (which should get triggered regardless of the build result).
How about each built library extends a docker image produced by a previous dependency's last successful build? e.g. if libosmocore rebuilds, that updates the libosmocore docker image, then libosmo-abis is triggered and adds itself, producing a last-successfully-built libosmo-abis docker image, and so on, down to openbsc re-using a docker image that already holds all its dependencies?
Even though our libraries don't necessarily have a linear dependency, it could make sense to artificially define such a linear line of dependencies (e.g. even though libsmpp doesn't need libosmo-abis, we make libsmpp build on top of the libosmo-abis docker to collect it into the dependencies docker image). But then again, if libosmocore succeeded and libosmo-abis failed, we would omit a libsmpp build for no reason. So maybe it would also make sense to somehow build the non-dependent libraries independently and later combine them into a joint docker image?? ...I would leave that up to your choices, just brainstorming...
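One way such a chain could look (a sketch only; image names, paths and build commands are assumptions, not the actual setup): each library job writes a small Dockerfile that extends the image produced by its dependency's last successful build, then tags its own image for the next job in the chain:

```groovy
// Hypothetical per-library job in the chained-images scheme.
node {
    checkout scm
    // extend the last successfully built image of the dependency
    writeFile file: 'Dockerfile.ci', text: '''\
FROM osmocom-libosmocore:latest
COPY . /build/libosmo-abis
RUN cd /build/libosmo-abis && autoreconf -fi && ./configure && make && make install
'''
    // on success, this tag becomes the base image for the next
    // library downstream, all the way down to openbsc
    docker.build('osmocom-libosmo-abis:latest', '-f Dockerfile.ci .')
}
```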
A slightly different topic: did you think about pushing your docker images to hub.docker.com [2]?
In my opinion this would be a step forward to a transparent and locally reproducible build environment. Afair Harald was quite clear about this requirement?
No idea / not aware of it / no opinion so far :) I was once invited to design a logo for the reproducible builds project, but haven't found time for that (yet?). That's about all I know about the topic...
and "Wiki" is an ancient African word for "hopelessly outdated"...
Thanks for pointing this out; usually I hesitate to correct something to avoid being called nit-picky.
My middle name is "Nit-pick" :P It's sometimes really hard for me to ignore a small detail that is obviously wrong for the benefit of moving ahead faster...
In the Wiki's case, there is no drawback of correcting mistakes -- no noise in code review or mailing lists is generated, so if you have the time, just do it, as much as you like.
[2] https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
This sounds like we want to use it!
Alright, so I will work on the following migrations:
- "as is" -> "JobDSL"
- "as is" -> "Pipeline" (probably just for Max)
hehe
Just to be clear, I'm merely one person here, just because I like the sound of Job-DSL, it doesn't mean everyone else does. I think it would be good to see an example of an openbsc build using it and then let everyone decide.
so far that doesn't look like an improvement of
#!/bin/sh
./contrib/jenkins.sh
Agreed, the biggest advantages of Pipeline are:
- the "eye-candy" (but who cares? :)
- easy artifacts sharing across different steps, which run on different nodes (but afaics you don't even use the "Copy Artifacts Plugin", which makes this argument pointless).
But if we introduce docker images as artifacts, this could become useful?
- the duration of sub-steps can be easily seen (as mentioned by Max), but this can be achieved with some simple python scripts/plugins + InfluxDB + Grafana as well.
I personally mostly care about the overall time a build takes, so that I get V+1/V-1 on my gerrit patches faster.
Neels, can you please help me fixing the following build error [3]:
01:59:20 checking dbi/dbd.h usability... no
01:59:20 checking dbi/dbd.h presence... no
01:59:20 checking for dbi/dbd.h... no
01:59:20 configure: error: DBI library is not installed
Install libdbi-dev and libdbd-sqlite3.
I already added the following package to the image, by:
docker run ........ osmocom:amd64 /bin/bash -c "sudo apt-get install -y r-cran-dbi; /build/contrib/jenkins.sh"
r-cran-dbi?? what is that? :)
Why is this build dependency not part of the Docker image?
That's a good question, it should be, right? According to the dockerfile, libdbi-dev is actually installed there. So is libdbd-sqlite3.
Hard to say why configure tells you that that is missing. It should work.
I've attached the Dockerfile config used to generate the docker image for openbsc. There's some stuff in there not needed for openbsc in particular, e.g. the smalltalk things near the end. And it should probably use the osmo-ci git repos instead of echoing the osmo-deps.sh to /usr/local/bin manually. Actually, our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet. These things could be streamlined.
docker run --rm=true -e HOME=/build -e MAKE=make -e PARALLEL_MAKE="$PARALLEL_MAKE" \
  -e IU="$IU" -e SMPP="$SMPP" -e MGCP="$MGCP" -e PATH="$PATH:/build_bin" \
  -e OSMOPY_DEBUG_TCP_SOCKETS="1" -w /build -i -u build -v "$PWD:/build" \
  -v "$HOME/bin:/build_bin" osmocom:amd64 /build/contrib/jenkins.sh
OT: does your nick "Blobb" indicate affinity to Binary Large Ob(b)jects? ... like docker images? ;)
~N
Sorry for the slight detour:
Docker (and other related) container images still pose largely unresolved license compliance conflicts. We as Osmocom project are on the safe side as long as we distribute only source code of our projects, or binaries built from our source code, where the binaries include the respective license texts as well as an indication where the corresponding source for that build can be found.
If we distribute images with binaries in them, we need to be 100% sure that we can ensure license compliance for everything inside such an image, for each and every version of that image, and also make sure that the source code (if not included) can later be produced for a given build even if somebody asks three years later.
So in general, i have big reservations against containers and the way how people in that area seem to ignore FOSS (and other?) license compliance. Maybe that's just my pre-occupation, and they have this all sorted out. But to me, so far, it seems like this technology is just inviting people to commit license infringements.
Before anyone uploads/publishes any containers on our servers or on hub.docker.com or any other site, please think twice and thrice about how you ensure compliance with every license of every software in such an image. Thanks.
Regards, Harald
Hi Harald,
On Mar 8, 2017 8:24 PM, "Harald Welte" laforge@gnumonks.org wrote:
Sorry for the slight detour:
Docker (and other related) container images still pose largely unresolved license compliance conflicts. We as Osmocom project are on the safe side as long as we distribute only source code of our projects, or binaries built from our source code, where the binaries include the respective license texts as well as an indication where the corresponding source for that build can be found.
If we distribute images with binaries in them, we need to be 100% sure that we can ensure license compliance for everything inside such an image, for each and every version of that image, and also make sure that the source code (if not included) can later be produced for a given build even if somebody asks three years later.
So in general, i have big reservations against containers and the way how people in that area seem to ignore FOSS (and other?) license compliance. Maybe that's just my pre-occupation, and they have this all sorted out. But to me, so far, it seems like this technology is just inviting people to commit license infringements.
Before anyone uploads/publishes any containers on our servers or on hub.docker.com or any other site, please think twice and thrice about how you ensure compliance with every license of every software in such an image. Thanks.
Could you elaborate on what kind of license infringement Docker has, and what license issues might arise from publishing e.g. Osmocom images at some repository?
I was confident it's ok, and we did publish some code to create Docker containers in the past. I would like to understand the potential issues with this.
E.g. what's wrong with images built from publicly available sources without any binary blobs? And are there issues with Docker itself?
Please excuse typos. Written with a touchscreen keyboard.
-- Regards, Alexander Chemeris CTO/Founder Fairwaves, Inc. https://fairwaves.co
On Thu, Mar 09, 2017 at 10:58:08AM +0300, Alexander Chemeris wrote:
Could you elaborate on what kind of license infringement Docker has, and what license issues might arise from publishing e.g. Osmocom images at some repository?
The issue is not with docker itself. The issue is that people are likely distributing large sets of pre-compiled binaries, which typically comes with all kinds of obligations under copyleft licenses.
Distributing source is always easy. As soon as you distribute binaries, you need to either include the complete and corresponding source code (under GPL/LGPL/AGPL), or you need to provide a written offer as to how the exact corresponding source code for that particular software can be obtained.
So doing something like a "Ubuntu Live derivative" or something like a VM image, container or other image means you have to provide the *exact* corresponding source code, up to three years later, for that given image. And if you update the image once per month, you have to keep a record of all those source bases.
Passing on a written offer (like saying: go to Ubuntu/Debian/... and download the source there) is permitted in non-commercial distribution, and relies on the fact that nobody else will distribute such an image in a commercial context, ... Also, can you guarantee that this third party (Ubuntu, Debian, whoever) will have those exact source versions around for years into the future? What if not?
I was confident it's ok and we did publish some code to create Docker containers in the past. I would like to understand potential issues with this.
Code to create docker images (or VM images) is perfectly fine. That's not distributing actual binaries of programs.
E.g. what's wrong with images built from publicly available sources without any binary blobs?
see above, the fact that source is available (at the time of build?) somewhere else is insufficient for compliance with LGPL, GPL and AGPL, at least as soon as you ever distribute such an image in a commercial context.
And are there issues with Docker itself?
not that I'm aware of. I just have some preconception about people who work a lot with containers without having properly solved their copyleft license compliance first. And it might be that there are now solutions for this - I just happen to hear horror stories about public websites / repositories full of binary images without any of them providing the complete and corresponding source code to all the programs they have packaged...
It was good meeting you in person, André :)
It was a pleasure to meet you all and to speak with you in person about jenkins.osmocom! I am looking forward to seeing you all at OsmoCon :)
How about each built library extends a docker image produced by a previous dependency's last successful build? e.g. if libosmocore rebuilds, that updates the libosmocore docker image, then libosmo-abis is triggered and adds itself, producing a last-successfully-built libosmo-abis docker image, and so on, down to openbsc re-using a docker image that already holds all its dependencies?
In general that's my thought too, but I am afraid of the complexity of handling/maintaining all these build images. Afaics right now only two docker images have to be (re)built (osmobuild:amd64/32bit); when using docker images and their inheritance model for dependency handling, one would end up building a lot more.
I need to tinker around more to understand better.
Just to be clear, I'm merely one person here, just because I like the sound of Job-DSL, it doesn't mean everyone else does. I think it would be good to see an example of an openbsc build using it and then let everyone decide.
No worries, right now I am just sharing my experience on the topic "CI as Code" with you. Whether jobDSL will be used by the osmocom projects is a decision taken by you, not me :)
That's what has been created so far:
osmo-seed [2]: holds the inline jobDSL script for [3][4]; normally this job polls a repo holding all jobs and deploys them after a change has been detected.
openBSC_jobDSL [3]: the openBSC build job as a script deployed by the seed. It basically looks the same; the configuration can still be changed in the web-ui. Manual changes will be marked [4].
Furthermore, this wiki [5] is a good entry point to jobDSL. Afterwards, these sites [6][7] are imho helpful for a first and second hands-on.
Why is this build dependency not part of the Docker image?
That's a good question, it should be, right?
In general yes, but huge dependencies, e.g. an Android SDK, are often mounted from the local file system and not baked into a docker image, so there's no clear general answer.
May I ask how you rebuild your docker images? I'd assume that an image is rebuilt after a patch submission to osmo-ci which introduces a change to its Dockerfile. At least according to the Dockerfile attached in this mail thread, in which the osmo-ci repo is baked in.
I've attached the Dockerfile config used to generate the docker image for openbsc.
Thanks a lot for your support and the Dockerfile. I'm just wondering why the osmo-ci repo [8] doesn't hold the latest state of the Dockerfile that is used for builds?
And it should probably use the osmo-ci git
repos instead of echoing the osmo-deps.sh to /usr/local/bin manually. Actually, our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet. These things could be streamlined.
hehe, I recognized that osmo-ci must live in your slave's home directory. I mounted each script to /build_bin/$script to work around it.
OT: does your nick "Blobb" indicate affinity to Binary Large Ob(b)jects?... like docker images? ;)
Hehe, this assumption is made by a lot of coders/techies, but it originates from the game Blobby Volley [1]. It may sound weird, but I am called blobb(y) because I look similar to the characters in the game. Maybe it's time to cut another letter at the end? ;)
blobb
[1] https://sourceforge.net/projects/blobby/
[2] https://jenkins.blobb.me/view/osmocom/job/omso-seed/ (add "configure-readonly/" to read config)
[3] https://jenkins.blobb.me/view/osmocom/job/openBSC_jobDSL/
[4] https://jenkins.blobb.me/view/osmocom/job/openBSC_jobDSL_manually_changed/
[5] https://github.com/jenkinsci/job-dsl-plugin/wiki
[6] https://github.com/sheehan/job-dsl-gradle-example
[7] http://job-dsl.herokuapp.com/
[8] http://git.osmocom.org/osmo-ci/tree/docker/Dockerfile.deb8_amd64
2017-03-08 17:48 GMT+01:00 Neels Hofmeyr nhofmeyr@sysmocom.de:
On Sat, Mar 11, 2017 at 10:47:41PM +0100, André Boddenberg wrote:
May I ask how you rebuild your docker images?
Manually. Basically, so far changes to osmo-python-tests were the only events that needed a rebuild of that image. So when I change something on osmo-python-tests, I log in on the build server, trivially bump the dockerfile and launch a rebuild. All other changes, libosmocore through openbsc, are built for each job, so all we need there is a basis for building.
osmo-ci is pulled in from the docker commandline.
Would probably be good to have the/a docker image in osmo-ci and use that. One could autogenerate the tail of the dockerfile to add 'git checkout's of the current HEAD hashes and trigger a rebuild as an easy way to re-use.
I'd assume that an image is rebuilt after a patch submission to osmo-ci
I agree that this would be a good idea.
Thanks a lot for your support and the Dockerfile. I'm just wondering why the osmo-ci repo [8] doesn't hold the latest state of the Dockerfile that is used for builds?
osmo-ci is pulled in from the docker commandline, 'mounted' at /build_bin. I believe I explained that here:
our openbsc-gerrit build job has osmo-ci in ~/bin and links that to /build_bin in the docker build, also adding /build_bin to the PATH ... that's a bit cumbersome and is my fault from when I wasn't too familiar with docker yet.
If there's another osmo-ci in the dockerfile, we're not using that.
[2] https://jenkins.blobb.me/view/osmocom/job/omso-seed/ (add
lol 'omso'
~N
Hi!
On 06.03.2017 16:15, André Boddenberg wrote:
Agreed, the biggest advantages of Pipeline are:
- the "eye-candy" (but who cares? :)
- easy artifacts sharing across different steps, which run on different nodes (but afaics you don't even use the "Copy Artifacts Plugin", which makes this argument pointless).
- the duration of sub-steps can be easily seen (as mentioned by Max), but this can be achieved with some simple python scripts/plugins + InfluxDB + Grafana as well.
I don't think that installing and maintaining 2 additional software packages + writing custom code is as easy as enabling a plugin in Jenkins and configuring it. Being able to immediately see how long each step in the pipeline took is not only eye-candy - it's a great hint as to where our potential optimization targets are.
Having said that, if similar visibility can be achieved without using pipeline, then by all means - go for it.