Hi.
Right now we have a rather sophisticated CI/CD setup which is spread over several services (OBS for nightly packages, jenkins for CI) and repositories.
It works well, but if I've got to somehow extend/alter it then I have to look in at least 3 different places:
- osmo-ci repo for common scripts
- jenkins job in jenkins web ui
- project's jenkins*.sh scripts
I think it's not meant to be that way, it's just what has grown organically over the years.
Right now we use a recent enough jenkins which supports the Pipelines [1] plugin. The basic idea behind it is that each project has a Jenkinsfile [2] in the repository which is a self-contained configuration for CI.
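To make that concrete, a minimal Jenkinsfile could look roughly like this (declarative syntax; stage names and build commands here are just illustrative, not a finished config):

    pipeline {
        agent any
        stages {
            stage('build') {
                steps {
                    // the usual autotools sequence our projects use
                    sh 'autoreconf --install --force && ./configure && make'
                }
            }
            stage('check') {
                steps {
                    sh 'make check'
                }
            }
        }
    }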
In theory enabling this plugin should not affect existing jobs as it's entirely separate job type.
I suggest the following:
- enable the Pipelines plugin in jenkins
- configure a new pipelines-based job pointing to a non-master branch which we can use as a playground
- once we're comfortable with it, migrate existing CI (and possibly CD) to pipelines (as time permits)
- switch gerrit from the old jobs to the new pipelines
What do you think?
Details are available via links below but here are some quick takeaways which motivated me to write this email:
- we'll have the actual CI job description under version control (right now it just sits in the web ui)
- we'll have a single place to look for CI-related things (ok, maybe 2 places if we still need a common library)
- it's more eye-candy (it's clear which build step we're in and how long each step took)
- it's easier to parallelize, e.g. run sanitizer and regular builds in parallel (see the sketch below)
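To sketch the parallelization point (scripted pipeline syntax; the configure flags are illustrative only):

    // each branch allocates its own node and thus gets its own workspace
    parallel(
        regular: {
            node {
                checkout scm
                sh './configure && make check'
            }
        },
        sanitizer: {
            node {
                checkout scm
                sh './configure --enable-sanitizer && make check'
            }
        }
    )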
On a related note: in general, who's the point of contact for things like "please update jenkins plugin X to version Y"?
[1] https://jenkins.io/doc/book/pipeline/
[2] https://jenkins.io/doc/book/pipeline/jenkinsfile/
On Fri, Mar 03, 2017 at 07:27:16PM +0100, Max wrote:
Hi.
Right now we have a rather sophisticated CI/CD setup which is spread over several services (OBS for nightly packages, jenkins for CI) and repositories.
It works well, but if I've got to somehow extend/alter it then I have to look in at least 3 different places:
- osmo-ci repo for common scripts
- jenkins job in jenkins web ui
- project's jenkins*.sh scripts
I think it's not meant to be that way, it's just what has grown organically over the years.
Right now we use a recent enough jenkins which supports the Pipelines [1] plugin.
Well, whaddaya know, one of the accelerate3g5 contestants actually wanted to help introduce Jenkins Pipelines. Sounds like it's going to happen :) dr.blobb, could you join the discussion please?
About the distribution of scripts: the osmo-ci repo has common scripts, and each project's jenkins.sh has project-specific steps in it. Either we keep all project-specific details in one central place instead of with the project (where one might say it belongs), or we copy the osmo-ci scripts to every project, duplicating the code -- I don't really see a way to get out of that part...

But I've been annoyed by having to edit a dozen different repositories to tweak the same detail in each jenkins.sh (like adding that value_string check), so keeping all jenkins.sh in one central place has occurred to me several times before as being a good idea. It has the disadvantage that you can't change jenkins.sh at the same time as a patch changes the behavior (like your --enable-sanitizer change in libosmo-abis, which was all nicely applied in one patch), but so far the disadvantage of having to edit N separate repositories has far outweighed that for me.
I suggest the following:
- enable the Pipelines plugin in jenkins
- configure a new pipelines-based job pointing to a non-master branch which we
can use as a playground
- once we're comfortable with it, migrate existing CI (and possibly CD) to
pipelines (as time permits)
- switch gerrit from the old jobs to the new pipelines
What do you think?
sounds excellent.
- we'll have the actual CI job description under version control (right now it
just sits in the web ui)
except for the parts we relayed into the jenkins.sh scripts for the same reason -- but this includes the job config as well, right? which sounds excellent. I'd trade any web interface for text files any time.
- we'll have a single place to look for CI-related things (ok, maybe 2 places
if we still need a common library)
ah ok
- it's more eye-candy (it's clear which build step we're in and how long
each step took)
- it's easier to parallelize (e.g. run sanitizer and regular builds in
parallel)
how does this come about? Separate workspace per pipeline run? The point being: does jenkins keep separate network interfaces for each? Our 'make check' often starts up things on ports that must not be reused. That's why docker was such an improvement for the openbsc job. Ah, I see that pipelines can actually integrate with docker as "agent"!
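From the docs it seems to boil down to something like this (a sketch; the image name is a placeholder):

    pipeline {
        agent {
            docker { image 'debian:jessie' }    // placeholder image
        }
        stages {
            stage('build') {
                steps {
                    sh 'make check'
                }
            }
        }
    }

If each run executes in its own container, that should also give each run a separate network namespace, i.e. no clashing ports -- to be confirmed in the trials.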
On a related note: in general, who's the point of contact for things like "please update jenkins plugin X to version Y"?
I guess Holger or me, but the decision whether to do that is probably more with Holger and Harald, besides the community at large.
[1] https://jenkins.io/doc/book/pipeline/
[2] https://jenkins.io/doc/book/pipeline/jenkinsfile/
One thing that catches my attention: do we have to trade shell script for groovy scripting? That would be a potential drawback, because we're familiar with shell, not with groovy. Groovy is very Java, we're more C and sh.
It seems that shell commands become
sh 'ls -alh'
so I guess we would put everything except one-liners into an actual .sh file and call that from the jenkinsfile. I wouldn't want to shell-script with prepending 'sh' everywhere and losing all of the env on every line.
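I imagine the Jenkinsfile would then stay a thin driver, something like this (the script name/path is whatever we pick per project):

    node {
        checkout scm
        stage('build') {
            // everything non-trivial lives in a plain shell script
            sh './jenkins.sh'
        }
    }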
Overall I would +1 pipeline trials.
The jenkins.osmocom.org plugin page though says:
" Pipeline A suite of plugins that lets you orchestrate automation, simple or complex. See the Jenkins website for more details and documentation. <red>Warning: This plugin requires dependent plugins be upgraded and at least one of these dependent plugins claims to use a different settings format than the installed version. Jobs using that plugin may need to be reconfigured, and/or you may not be able to cleanly revert to the prior version without manually restoring old settings. Consult the plugin release notes for details.</red> "
whatever that means in practice.
~N
On 4 Mar 2017, at 06:28, Neels Hofmeyr nhofmeyr@sysmocom.de wrote:
About the distribution of scripts: the osmo-ci repo has common scripts, and each project's jenkins.sh has project-specific steps in it. Either we keep all project-specific details in one central place instead of with the project (where one might say it belongs), or we copy the osmo-ci scripts to every project, duplicating the code -- I don't really see a way to get out of that part...

But I've been annoyed by having to edit a dozen different repositories to tweak the same detail in each jenkins.sh (like adding that value_string check), so keeping all jenkins.sh in one central place has occurred to me several times before as being a good idea. It has the disadvantage that you can't change jenkins.sh at the same time as a patch changes the behavior (like your --enable-sanitizer change in libosmo-abis, which was all nicely applied in one patch), but so far the disadvantage of having to edit N separate repositories has far outweighed that for me.
The reason I moved the build instruction out of the Job into a file is to have people easily rebuild/reproduce it. E.g. it helps with people that either don't know make distcheck or can't copy the invocation from the log.
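E.g. with the steps in a file, anyone can reproduce a build with the usual sequence (exact flags vary per project):

    autoreconf --install --force
    ./configure
    make distcheck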
I think if someone wants to reproduce the failure, it will be difficult for that person to check out the right repository with the build script. :)
What I think should be avoided is to use Jenkins specific files. You might need to install Java and tons of jars to locally drive your build. ;)
holger
Hi Holger,
On Sat, Mar 04, 2017 at 08:57:08AM +0100, Holger Freyther wrote:
The reason I moved the build instruction out of the Job into a file is to have people easily rebuild/reproduce it. E.g. it helps with people that either don't know make distcheck or can't copy the invocation from the log.
I think that makes a lot of sense, and I would generally like to see more (if possible) of the jenkins jobs move into the repositories, maybe even those of the sysmocom jenkins.
It should be possible (and documented!) for somebody to locally reproduce a build (particularly a build error) that is shown in a jenkins job.
I think if someone wants to reproduce the failure, it will be difficult for that person to check out the right repository with the build script. :)
agreed.
What I think should be avoided is to use Jenkins specific files. You might need to install Java and tons of jars to locally drive your build. ;)
also fully agreed here.
On Sat, Mar 04, 2017 at 08:57:08AM +0100, Holger Freyther wrote:
About the distribution of scripts: the osmo-ci repo has common scripts, and each project's jenkins.sh has project-specific steps in it. Either we keep all
The reason I moved the build instruction out of the Job into a file is to have people easily rebuild/reproduce it. E.g. it helps with people that either don't know make distcheck or can't copy the invocation from the log.
I think if someone wants to reproduce the failure, it will be difficult for that person to check out the right repository with the build script. :)
At the moment the jenkins.sh actually depends on osmo-ci scripts, so said someone would need the right repository as well -- short of understanding what 'osmo-build-dep.sh libosmocore' means :P
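For context, my understanding is that 'osmo-build-dep.sh <project>' clones the dependency and installs it into a local prefix, roughly like this (a guessed sketch; the real osmo-ci script may well differ):

    #!/bin/sh
    # guessed shape of 'osmo-build-dep.sh <project>', not the real script
    project="$1"
    prefix="$PWD/deps/install"
    mkdir -p deps
    git clone "git://git.osmocom.org/$project" "deps/$project"
    cd "deps/$project"
    autoreconf --install --force
    ./configure --prefix="$prefix"
    make install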
On Sat, Mar 04, 2017 at 10:43:27AM +0100, Harald Welte wrote:
what about git-subtree or git-submodule for osmo-ci?
Probably a good idea. Though, submodules require manual steps to check out, right? So a README is needed anyway.
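AFAIK a plain clone doesn't fetch submodules, so the README would boil down to something like:

    git clone git://git.osmocom.org/libosmo-abis    # just an example project
    cd libosmo-abis
    git submodule update --init    # fetch the osmo-ci submodule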
With osmo-ci submoduled it's also a small step to actually move the various jenkins.sh into osmo-ci and be able to edit them centrally? :) anyway, not high up on the agenda.
On Sat, Mar 04, 2017 at 08:57:08AM +0100, Holger Freyther wrote:
What I think should be avoided is to use Jenkins specific files. You might need to install Java and tons of jars to locally drive your build. ;)
[and] On Sat, Mar 04, 2017 at 10:43:27AM +0100, Harald Welte wrote:
I have a strong opinion against our developers having to learn a new programming language just for continuous integration. That would be a *very* high price to pay.
Have all build steps in shell scripts that can run on their own, and accompany that with a jenkins specific file to drive the pipeline, which basically calls the shell scripts?
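I.e. each project keeps a self-contained script that also runs without Jenkins, something like (sketch only, details per project):

    #!/bin/sh
    # jenkins.sh -- all build steps live here, runnable locally without Jenkins
    set -ex
    autoreconf --install --force
    ./configure
    make distcheck

and the jenkins specific file shrinks to little more than a 'sh ./jenkins.sh' step.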
On Sat, Mar 04, 2017 at 10:43:27AM +0100, Harald Welte wrote:
Also, in general I appreciate steps that improve productivity. But please keep in mind that using new tools/toys just because they exist and may be hyped is not a good reason. I'm not saying this is the case here, but I'm saying we have to be careful.
So, what would be the actual benefits?
* complete job config in git? (to be confirmed)
* faster because fewer things have to be rebuilt? (to be confirmed)
* faster because easily parallelizable / easier integration with docker? (to be confirmed)
If dr.blobb and/or Max could clarify these points that would be great (while I guess they will clarify in depth only after actual trials). First tests could run on a privately setup Jenkins ... dr.blobb?
~N
Hi Neels,
On Sat, Mar 04, 2017 at 06:28:24AM +0100, Neels Hofmeyr wrote:
About the distribution of scripts: the osmo-ci repo has common scripts, and each project's jenkins.sh has project-specific steps in it.
Either we keep all project-specific details in one central place instead of with the project (where one might say it belongs), or we copy the osmo-ci scripts to every project, duplicating the code -- I don't really see a way to get out of that part...
what about git-subtree or git-submodule for osmo-ci?
except for the parts we relayed into the jenkins.sh scripts for the same reason -- but this includes the job config as well, right? which sounds excellent. I'd trade any web interface for text files any time.
I also think that having the job configuration in the repository would be a plus.
[1] https://jenkins.io/doc/book/pipeline/
[2] https://jenkins.io/doc/book/pipeline/jenkinsfile/
One thing that catches my attention: do we have to trade shell script for groovy scripting? That would be a potential drawback, because we're familiar with shell, not with groovy. Groovy is very Java, we're more C and sh.
I have a strong opinion against our developers having to learn a new programming language just for continuous integration. That would be a *very* high price to pay.
Also, in general I appreciate steps that improve productivity. But please keep in mind that using new tools/toys just because they exist and may be hyped is not a good reason. I'm not saying this is the case here, but I'm saying we have to be careful.
Regards,