jenkins slave setup and artifact re-use

This is merely a historical archive of years 2008-2021, before the migration to mailman3.

A maintained and still updated list archive can be found at https://lists.osmocom.org/hyperkitty/list/OpenBSC@lists.osmocom.org/.

Neels Hofmeyr nhofmeyr at sysmocom.de
Fri Sep 8 22:33:46 UTC 2017


On Thu, Sep 07, 2017 at 03:34:48PM +0200, André Boddenberg wrote:
> My knowledge about the gsm-tester is quite limited to its manual and I

On the tester, all that we want to do is build (usually current master) and
keep the binaries in the image, so that we can launch them with specific config
files on another computer. That's all we need to know in this context.


> Of course this will work, but afaics it won't suit gerrit
> verifications. Because docker always takes the latest available base
> image. So if the libosmocore image is currently being rebuilt and didn't
> finish yet, a libosmo-abis docker build, which uses libosmocore as base
> image, wouldn't wait until the "new" libosmocore image is built, it would

If a libosmo-abis patch starts building just before the latest merge to
libosmo-core master has finished docker building, it doesn't matter much. The
libosmo-abis patch usually does not depend on libosmocore work that is just
being merged. If it does, the libosmo-abis patch submitter will have to wait
for libosmocore to complete. This is the same way our current gerrit patch
submission works, and "a law of nature". It's expected and not harmful.


On Thu, Sep 07, 2017 at 04:09:26PM +0200, Harald Welte wrote:
> this would mean you would have to
> * docker build the libosmocore image to check/update to current master
> * docker build the libosmo-abis image
> * docker run the build for osmo-hlr

* I expect the libosmocore-master jenkins job to docker build the libosmocore
  image whenever a patch was merged.
* The libosmo-abis image simply builds on the last stable libosmocore docker
  image it finds in the hub (what was the generic name for the hub again?).
* In turn osmo-hlr takes the last stable libosmo-abis image and just adds
  building osmo-hlr on top.

Each jenkins job takes exactly one 'FROM' image, builds exactly one git tree,
stores exactly one state in the docker cache.
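As a sketch of the chaining, each project's Dockerfile could simply 'FROM' the
previous project's image. All image names, tags and build flags below are
hypothetical placeholders, not existing jobs:

```shell
# Sketch only: per-project Dockerfiles that chain via FROM.
# The osmocom/* image names are made-up placeholders.
cat > Dockerfile.libosmo-abis <<'EOF'
FROM osmocom/libosmocore:master
RUN git clone https://git.osmocom.org/libosmo-abis && \
    cd libosmo-abis && autoreconf -fi && ./configure && \
    make install && ldconfig
EOF

cat > Dockerfile.osmo-hlr <<'EOF'
FROM osmocom/libosmo-abis:master
RUN git clone https://git.osmocom.org/osmo-hlr && \
    cd osmo-hlr && autoreconf -fi && ./configure && \
    make install
EOF
```

Each image adds exactly one git tree on top of its base, so a rebuild of
osmo-hlr never re-compiles libosmocore or libosmo-abis.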

To be precise, the 'master' build jobs would store the built images in the
docker hub thing, the gerrit build jobs just take the build rc and discard the
image changes (could keep the result in the cache for a short time).
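In shell-sketch form, the two job flavours could share one script; this is a
dry run where run() only echoes the docker commands, and the job/image names
are illustrative, not real jenkins jobs:

```shell
# Dry-run sketch of master vs. gerrit build jobs: run() echoes instead
# of executing, so this is safe without docker. Names are made up.
run() { echo "docker $*"; }

build_job() {
  local project=$1 trigger=$2
  run build -t "osmocom/$project:master" "$project/"
  if [ "$trigger" = master ]; then
    # merged to master: publish the image for downstream jobs
    run push "osmocom/$project:master"
  else
    # gerrit verification: keep only the build rc, discard the image
    run rmi "osmocom/$project:master"
  fi
}

build_job libosmocore master
build_job libosmocore gerrit
```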


> If this splits up to even more images, you will end up having something
> like 8 'docker build' followed by one 'docker run' in each gerrit job.
> I'm not sure how much of the performance gain we will lose that way.

IIUC, we win tremendously by only 'docker build'ing when something is merged
to master. One goal for the osmo-gsm-tester is anyway to have docker images
ready for each project's current master.

> manually having to re-trigger builds in the right
> inverse dependency order in every jenkins job.

I don't see why that is required?

> To avoid complexity and having to maintain too many Dockerfiles, related images, etc.

I accept that.  But I don't have a clear picture in my mind of how it would
look in practice with a joint Dockerfile:

So we have one Dockerfile like
https://git.osmocom.org/docker-playground/tree/osmo-gerrit-libosmo/Dockerfile
which contains all osmo gits.

This file also actually updates from git and builds *all* the various libs when
we update the image.

How does this translate to us wanting to e.g. have one jenkins job verifying
libosmocore, one for libosmo-abis, [...], one for osmo-msc?

Each starts out with an image where the very project we want to check is
already built and installed, so we first need to actively remove the installed
files from /usr/local/{include,lib,bin}, for only that project under scrutiny.
We can't really rely on 'make uninstall' being correct all the time. How about
using 'stow', which showed up recently? It allows wiping installs separately,
right?
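The stow idea in miniature: each project installs into its own subtree, and
the shared prefix holds only symlinks, so wiping one project is just removing
its subtree plus the dangling links. This is simulated below with plain
symlinks so it runs without stow installed; with real stow, 'stow -D
libosmocore' would do the unlinking. Paths and file names are hypothetical:

```shell
# Simulate stow's per-package trees: removing one project's subtree
# cleanly "uninstalls" only that project from the shared prefix.
prefix=$(mktemp -d)
mkdir -p "$prefix/stow/libosmocore/lib" "$prefix/stow/libosmo-abis/lib" "$prefix/lib"
touch "$prefix/stow/libosmocore/lib/libosmocore.so"
touch "$prefix/stow/libosmo-abis/lib/libosmoabis.so"
ln -s "$prefix/stow/libosmocore/lib/libosmocore.so" "$prefix/lib/"
ln -s "$prefix/stow/libosmo-abis/lib/libosmoabis.so" "$prefix/lib/"

# wipe only libosmocore before rebuilding it:
rm -rf "$prefix/stow/libosmocore"
find "$prefix/lib" -xtype l -delete    # GNU find: drop dangling symlinks
ls "$prefix/lib"                       # only libosmoabis.so remains
```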


I still see chicken-egg problems: when I run a libosmocore jenkins job, I want
to update the image first.
*) That inherently already may build libosmocore. For a gerrit patch, it
   possibly builds libosmocore master, and I can later introduce the patch. If
   I want to test master though, updating the image already *will* build master,
   which I actually wanted to test; at what stage then do I detect a failure?
   If a 'docker build' failure already counts as failure, then:
*) What if e.g. libosmo-abis fails during image update: does it cause a failure
   to be counted for the libosmocore build job instead? I.e. does an unrelated
   broken libosmo-abis master cross-fire onto non-depending builds?
How do we solve this?


And I see a possibility: say for every libosmocore patch we actually also build
the whole chain and verify that this new libosmocore works with all depending
projects. That way we would detect whether libosmocore breaks builds down the
line in other projects, which we don't do on gerrit level yet.

OTOH then we can't easily introduce a change that needs patches in more than
one repo; so far we e.g. first change libosmo-sccp (hence temporarily break
the build on osmo-msc), then follow right up with a patch on osmo-msc. When we
always verify all depending projects as well, we'll never get the first
libosmo-sccp patch past the osmo-msc check. To solve this, we'd need to get
both patches in at the same time.

We could parse the commit message 'Depends:' marker, which would actually be
interesting to explore: We could have only one single process that is identical
for *all* gerrit +V builds across all projects, and wherever a patch is
submitted, it always rebuilds and verifies the whole ecosystem from that
project on downwards to the projects that depend on it. By docker build
arguments we can build with specific git hashes, e.g. one with a new patch, the
others with master.  A "Depends:" commit msg marker could take in N such git
hashes at the same time. Non-trivial to implement, takes more build time
(longest for libosmocore, shorter the farther we go towards leaf projects), but
concept-wise quite appealing.
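Parsing the marker could be as simple as the sketch below. The exact
'Depends:' format, the _rev build-arg names and the git hashes here are all
made-up placeholders:

```shell
# Hypothetical: turn 'Depends:' markers from a commit message into
# docker --build-arg options carrying git hashes. All names invented.
msg='osmo-msc: use new SCCP API

Depends: libosmo-sccp 0000000
Depends: libosmocore 1111111'

args=""
while read -r project hash; do
  project=$(echo "$project" | tr - _)   # dashes not valid in ARG names
  args="$args --build-arg ${project}_rev=$hash"
done <<EOF
$(printf '%s\n' "$msg" | sed -n 's/^Depends: *//p')
EOF

echo "docker build$args ."
```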

For the osmo-gsm-tester, it's actually ok to have only one image with all
binaries ready in it (and launch it N times if we have to separate networks).
Having separate images per program is helpful to be able to quickly rebuild
only one binary at a different git hash for debugging, but there are easy
ways to do so also with one joint image.
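One such way, as a dry-run sketch (run() only echoes; the joint image name,
source layout and git hash are invented placeholders):

```shell
# Sketch: rebuild just one program at a different git hash inside a
# hypothetical joint image. run() echoes instead of executing.
run() { echo "$*"; }

rebuild_one() {
  local project=$1 rev=$2
  run docker run osmocom/joint:master sh -c \
    "cd /src/$project && git fetch && git checkout $rev && make install"
}

rebuild_one osmo-bsc 2222222
```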



So... my dream setup is one joint image and one build job for all projects,
rebuilding all dependent projects always, using stow to safely clean up for
re-building, and automatically building 'Depends:' marked patches together.

But it seems to me to be less trouble to manage N separate Dockerfiles that
reference each other and are inherently clean every time.
Is there something I'm missing?

~N
