jenkins slave setup and artifact re-use

Harald Welte laforge at gnumonks.org
Wed Sep 6 13:05:27 UTC 2017


Hi Andre,

On Wed, Sep 06, 2017 at 02:05:16PM +0200, André Boddenberg wrote:
> 
> >> This line fetches the given URL (in this case the latest patch on that
> >> branch) and considers the docker image as unchanged if that URL shows the
> >> same as last time. As soon as a new patch shows, things are rebuilt.
> 
> Great idea! So, the hourly/nightly jobs would "docker build..."
> instead of "docker run..."?

no, every 'docker run' job would be preceded by a 'docker build', which
would either rebuild the image if needed (based on the "ADD" patch URL)
or simply use the most recent image from the cache.
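For illustration, the "ADD" patch URL trick could look roughly like this in
the libosmo* Dockerfile (the URL, base image and package list here are
placeholders, not the actual file):

```dockerfile
# hypothetical sketch of the libosmo* build-dependency image
FROM debian:stretch
RUN apt-get update && apt-get install -y build-essential git autoconf libtool

# ADD re-downloads this URL on every 'docker build'.  If its content
# changed (i.e. a new patch hit master), the build cache is invalidated
# from here on and the layers below are rebuilt; otherwise the cached
# image is reused as-is.
ADD http://git.osmocom.org/libosmocore/patch /tmp/libosmocore-latest.patch

RUN git clone git://git.osmocom.org/libosmocore && \
    cd libosmocore && autoreconf -fi && ./configure && \
    make -j8 install && ldconfig
```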

There would be one image/Dockerfile that builds libosmo* in it at
'docker build' time, and which then is used with 'docker run' to build
the specific application for which you're doing build testing, e.g.
osmo-bsc or osmo-hlr.

The build test at 'docker run' time would then happen completely inside
the docker tmpfs overlay and is expected to run super quick,
particularly given that build-2 has 64GB of RAM.  If we want to keep
some artefacts, then those would have to be copied (e.g. during "make
install" to a bind-mount/volume).
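As a sketch, one such build-test job could then look like this (image name,
repo URL and paths are illustrative):

```shell
# build (or reuse the cached) dependency image first
docker build -t libosmo-build .

# everything below runs inside the container's tmpfs overlay; only what
# gets 'make install'ed into the bind-mounted /artifacts survives the run
docker run --rm -v "$PWD/artifacts:/artifacts" libosmo-build sh -e -c '
    git clone git://git.osmocom.org/osmo-bsc
    cd osmo-bsc
    autoreconf -fi && ./configure
    make -j8
    make DESTDIR=/artifacts install
'
```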

> Will there be one Dockerfile per each branch or is it planned to use
> docker's "ARG" and "--build-arg" to pass the branch while building?

In the basic form I would see only one dockerfile+image for libosmo* in
master branch that can be used for all osmo-* application build testing.

The given project that you want to build-test would either
a) not be part of that dockerfile/image but simply cloned at 'docker
   run' time.  Disadvantage: Must be cloned from scratch.

b) a 'git clone' base version of the (or all?) application-under-test
   could be part of the Dockerfile/image, so that at 'docker run' time
   we simply do a git pull + checkout -f -B of the specific branch we
   want to build.  Advantage: no need to clone from scratch at every
   build, only the delta between 'when image was built last' and the
   branch/patch-under-test needs to be fetched from the git server

for 'b' all git repos could be cloned into the base Dockerfile/image,
or we could have one program-specific Dockerfile/image.  For the sake
of simplicity, I would try to reduce the number of different
Dockerfiles/images, and simply have "all-in-one".  The age of those
"program under test" clones doesn't matter (so no "ADD http://cgit..."),
as opposed to the age of the build dependencies, whose freshness must be
ensured.
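Under option b), the per-run update inside the container could be as simple
as the following sketch (the path, branch name and ref are illustrative):

```shell
# the repo was already cloned into the image at 'docker build' time;
# now only fetch the delta and force-checkout the ref under test
cd /src/osmo-bsc
git fetch origin                                # delta since image build
git checkout -f -B build-test origin/master     # or the patch-under-test ref
```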

> Furthermore, the nightly package of libosmocore-dev confuses me,
> especially when thinking about gerrit jobs. How often are these
> packages updated?

The current Dockerfiles in docker-playground.git are built for executing
test software.  They are not meant for build testing, please don't
confuse those two.

> Afaiu images will be rebuild if a new patch is introduced. But who is
> invoking the rebuild when the parent or libosmocore-dev in the example
> have changed?

image[s] will only need to be rebuilt when a patch to the build
dependencies is introduced to the master of such a dependency.  They
will not be updated by a patch to the project/repo/app-under-test, as
that one is not part of the image.

The rebuild of the 'libosmo*' image is triggered by 'docker build' at
the beginning of e.g. osmo-bsc-gerrit job.
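I.e. the first lines of such a job's build step would be something like
this (image name, script and variable names are hypothetical; the Gerrit
Trigger plugin does export $GERRIT_REFSPEC):

```shell
# start of a hypothetical osmo-bsc-gerrit build step: 'docker build' is
# cheap when nothing changed (pure cache hit), and transparently rebuilds
# the libosmo* image when a build dependency grew a new patch
docker build -t libosmo-build libosmo/
docker run --rm libosmo-build /usr/local/bin/build-osmo-bsc.sh "$GERRIT_REFSPEC"
```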

> Sharing same layer for "RUN apt-get install ..." command as shown in
> osmo-nitb-master and osmo-bts-master Dockerfile could be promising.

I think that's not really relevant to build-testing.

> In general I like the "move" towards docker compared to lxc, which
> does not provide something similar to a Dockerfile.

Well, it provides templates.  Similar, but of course different (and no
layer cache, ...)

> On the other hand I am skeptical about the whole life cycle, which imo
> needs some external management as described to keep everything up to
> date.

no, fully automatic.

> Additionally, every "docker run ..." command would need a
> "docker pull ..." before to ensure latest image from repository.

If you do a 'docker build' ahead of every 'docker run', I don't see that
need.

> I will definitely setup some build jobs on my Jenkins with those
> Docker images to get a better understanding.

Please don't use the Dockerfiles in docker-playground.git.  As
indicated, they are built for a completely different purpose and hence
work differently.

Regards,
	Harald
-- 
- Harald Welte <laforge at gnumonks.org>           http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
                                                  (ETSI EN 300 175-7 Ch. A6)


