Hi!
We have some builds that happen inside a docker container, and some that happen natively on the (Debian 8) build slave. Can somebody involved with this explain why that is the case, and what the rationale is?
Just looking at the setup, I'm unable to figure out whether there's any particular reason for this, or whether it's simply a case of "we started without docker, then did some builds in docker, but nobody migrated the other builds over".
From my point of view, I would have assumed that building in different containers would make sense in order to e.g. build on different distributions/versions, or to build against older libosmo* releases vs. libosmo* from the nightly package feeds vs. rebuilding all dependencies from master.
But it appears that only a single container image is built, and that container is used for some jobs (like osmo-msc-gerrit) but not for others.
If I missed some wiki page or mailing list post with related information, a pointer would be helpful. Thanks in advance.
Regards, Harald
On 25. Aug 2017, at 19:15, Harald Welte laforge@gnumonks.org wrote:
> Hi!
Hi!
> We have some builds that happen inside a docker container, and some that happen natively on the (Debian 8) build slave. Can somebody involved with this explain why that is the case, and what the rationale is?
Historically, I wanted to speed up the -gerrit builds, so that is where I started experimenting with containers and ended up with docker.
For the main (non-gerrit) builds, the question was whether build time matters more than multi-platform coverage. I thought that FreeBSD with clang provides more value than a quick non-gerrit build.
> From my point of view, I would have assumed that building in different containers would make sense in order to e.g. build on different distributions/versions, or to build against older libosmo* releases vs. libosmo* from the nightly package feeds vs. rebuilding all dependencies from master.
> But it appears that only a single container image is built, and that container is used for some jobs (like osmo-msc-gerrit) but not for others.
Right, a single container is used, as the goal was to be able to run the VTY tests in parallel without messing with the configs (picking ports at random or such). I think multi-distribution support should be considered carefully, as it would lead to an explosion in the number of builds. It might only make sense if we compile with -Werror as well?
I can't comment on whether osmo-msc-gerrit is using a different container than other builds.
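To illustrate the port isolation point, here is a minimal sketch of what such a containerized build step could look like. The image name osmocom:deb8 and the contrib/jenkins.sh entry point are assumptions for illustration, not necessarily what the jobs actually use:

    # Run the build and VTY tests inside a throwaway container.
    # Each container gets its own network namespace, so two jobs
    # running at the same time can both bind the same VTY port
    # without clashing.
    docker run --rm \
        -v "$PWD:/build" -w /build \
        osmocom:deb8 \
        ./contrib/jenkins.sh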
holger
On Fri, Aug 25, 2017 at 08:37:12PM +0800, Holger Freyther wrote:
> I can't comment on whether osmo-msc-gerrit is using a different container than other builds.
Until recently, only OpenBSC and OpenBSC-gerrit were using docker. osmo-msc and osmo-msc-gerrit were copied from (one of) those, hence they are also using docker. osmo-{bsc,mgw,sgsn} don't exist yet, but could do the same.
They (will) use the same docker image.
The mentioned VTY test port conflicts relate to the build matrix containing eight different builds, which can be parallelized by using the docker container; it also allows building separate patches in parallel.
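To sketch what that parallelism buys us (image and script names are, again, just illustrative assumptions):

    # Two builds (e.g. two gerrit patches) running at once; the
    # containers' isolated network namespaces keep their VTY test
    # ports from colliding.
    docker run --rm -v "$PWD/patch-a:/build" -w /build osmocom:deb8 ./contrib/jenkins.sh &
    docker run --rm -v "$PWD/patch-b:/build" -w /build osmocom:deb8 ./contrib/jenkins.sh &
    wait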
Unfortunately, documenting the jenkins setup while tweaking it is time-consuming, and the documentation tends to become outdated quickly. I think documenting our entire setup is nontrivial. I did so for the osmo-gsm-tester jobs, and it takes up a large part of the osmo-gsm-tester manual. We need to find a feasible level of detail, which I guess should be rather coarse. Numerous documentation and wiki restructuring tasks for the Osmocom components themselves are continuously pending and IMHO higher in priority.
~N
On 27. Aug 2017, at 16:28, Neels Hofmeyr nhofmeyr@sysmocom.de wrote:
Hey!
> Unfortunately, documenting the jenkins setup while tweaking it is time-consuming, and the documentation tends to become outdated quickly. I think documenting our entire setup is nontrivial. I did so for the osmo-gsm-tester jobs, and it takes up a large part of the osmo-gsm-tester manual. We need to find a feasible level of detail, which I guess should be rather coarse. Numerous documentation and wiki restructuring tasks for the Osmocom components themselves are continuously pending and IMHO higher in priority.
Documentation is always prone to becoming outdated (let alone some random notes in a wiki...), and it takes a lot of effort to keep it up to date. What I think works is to automate as much as possible. That allows someone to _read_ the log and try things.
In the sysmocom system-images I played with the XML "template" and the Java client to create/update jobs, but Lynxis found a YAML DSL that describes jobs more easily, and I think we could explore that. We could have one repository with all the jobs and run it to populate jenkins. This would even enforce standards like discarding old builds (we don't run on Turing tape, after all).
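For concreteness, a minimal sketch of what this could look like, assuming the YAML DSL in question is Jenkins Job Builder (the job name and shell step are illustrative, and a jenkins_jobs.ini with server URL and credentials is assumed to exist):

    # Describe a job in YAML, then push it to jenkins.
    cat > osmocom-jobs.yaml <<'EOF'
    - job:
        name: osmo-msc-gerrit
        properties:
          # the "enforced standard": discard old builds
          - build-discarder:
              num-to-keep: 120
        builders:
          - shell: ./contrib/jenkins.sh
    EOF
    jenkins-jobs update osmocom-jobs.yaml

Keeping that YAML in a git repository would also give us review and history for job changes, which the web UI cannot offer.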
cheers holger
On Wed, Aug 30, 2017 at 01:12:06AM +0200, Holger Freyther wrote:
> In the sysmocom system-images I played with the XML "template" and the Java client to create/update jobs, but Lynxis found a YAML DSL that describes jobs more easily, and I think we could explore that. We could have one repository with all the jobs and run it to populate jenkins. This would even enforce standards like discarding old builds (we don't run on Turing tape, after all).
That sounds excellent! Setting up jobs in the web UI can be quite cumbersome, and documenting it is even worse...
Yet someone needs to set up the YAML DSL. (Sounds like a job for dr.blobb, but I haven't heard anything in a while.)
~N
+1 for moving towards describing jobs instead of clicking them!
I haven't used the YAML DSL, only JobDSL and Pipeline so far. I will try to migrate some jobs (nightly, gerrit) to the YAML DSL to see whether all needed configuration options are available, and let you know. Or did someone already check this?
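If the YAML DSL turns out to be Jenkins Job Builder, one way to check configuration coverage without touching the live server would be to render the XML locally and diff it against the existing hand-clicked job (an assumption about the tooling, not a tested recipe):

    # Render the YAML job descriptions to Jenkins XML locally,
    # without uploading anything.
    jenkins-jobs test osmocom-jobs.yaml -o rendered/
    # Fetch the current config of the existing job and compare.
    curl -s "$JENKINS_URL/job/osmo-msc-gerrit/config.xml" > existing.xml
    diff rendered/osmo-msc-gerrit existing.xml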