jenkins slave setup and artifact re-use

Harald Welte laforge at gnumonks.org
Wed Sep 6 19:37:20 UTC 2017


Hi Andre,

On Wed, Sep 06, 2017 at 08:45:02PM +0200, André Boddenberg wrote:
> The osmo-gerrit-libosmo Docker file is great. A first hands on showed
> that a openbsc verification build finishes in 5+ minutes and a
> osmo-msc build in breath taking 18s!!!

This is good news. Happy you like it.

> All my concerns about the full automation of using latest dependencies
> AND latest base image are gone.

Great.

After that much praise, there's also a downside:

* we currently install libosmo* to /usr/local using 'sudo'.  There's actually
  no real reason for this; one could install into some other
  user-writable PREFIX and use that at compile ('docker run') time.
  It would be great to see some patches cleaning this up
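
  As a sketch of that cleanup (the prefix path and variable names are my
  assumptions, not what the Dockerfile does today):

```shell
# Hypothetical sketch: install libosmo* into a user-writable prefix
# instead of 'sudo make install' to /usr/local.
PREFIX="$HOME/osmo-prefix"          # assumed location

# per library, at 'docker build' time (no sudo needed):
#   ./configure --prefix="$PREFIX" && make && make install

# at compile ('docker run') time, point dependent builds at that prefix:
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$PKG_CONFIG_PATH"
```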

* we always have all dependencies installed, i.e. we're no longer trying
  to do builds with certain libraries not present.  Let's say we want to
  do an osmo-msc --disable-smpp build: libsmpp34 will still be present,
  and we *could* introduce unnoticed bugs that would only show up once
  libsmpp34 is actually not present.  One could either not worry too
  much about it, or one could do some more PKG_CONFIG_PATH hackery with
  different PREFIX for the different libraries so that each library is
  in a different PREFIX and a library is not found unless we
  explicitly pass the related paths to ./configure.

  I'm not sure if it's worth investing too much time into this, given
  that lots of conditionals go away in the split-nitb scenario:
  osmo-bsc always implicitly requires "--enable-osmo-bsc" and osmo-sgsn
  always implicitly requires libgtp.  However, there's still the SMPP
  example.

  What do others say? Is this important to test?  If so, do we have
  volunteers to look into writing scripts for this?
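
  One way such a script could look: give each library its own prefix and
  compose PKG_CONFIG_PATH only from the dependencies a given build is
  allowed to see.  Everything below (paths, function name) is a
  hypothetical sketch, not existing osmo-ci tooling:

```shell
# One prefix per library, e.g. $OSMO_PREFIX/libosmocore,
# $OSMO_PREFIX/libsmpp34 (layout is an assumption for illustration).
OSMO_PREFIX="${OSMO_PREFIX:-$HOME/prefix}"

# Compose PKG_CONFIG_PATH from an explicit dependency list:
deps_path() {
    local p="" dep
    for dep in "$@"; do
        p="$p:$OSMO_PREFIX/$dep/lib/pkgconfig"
    done
    echo "${p#:}"
}

# osmo-msc --disable-smpp: libsmpp34 is installed in the image, but we
# leave it out of the path, so ./configure cannot accidentally find it:
PKG_CONFIG_PATH="$(deps_path libosmocore libosmo-abis libosmo-netif libosmo-sccp)"
echo "$PKG_CONFIG_PATH"
```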

* In terms of artefacts, we should figure out which ones we want to
  keep.  For sure any kind of log files like config.log should be copied
  from the tmpfs to the workspace before we kill the container.  They
  might contain useful information.  One *might* also want to do a "make
  install" to the workspace?  So to me config.log is a must, everything
  else is "optional, later, if somebody needs it".  But then, Pau had
  some other opinion, AFAIR.
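
  For the config.log part, a minimal sketch (container name and
  in-container path are made up; the function only prints the commands
  it would run):

```shell
# Copy build logs out of a (possibly already stopped) container before
# removing it; 'docker cp' also works on stopped containers.
salvage_logs() {
    local container="$1" workspace="$2"
    # dry-run: print instead of execute; drop the 'echo' for real use
    echo docker cp "$container:/build/config.log" "$workspace/config.log"
    echo docker rm "$container"
}

salvage_logs osmo-build-123 /tmp/workspace
```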

> >> This is of course all just a prototype, and proper scripts/integration
> >> is needed.
> 
> Would it be helpful to set up gerrit verification jobs of mentioned
> "applications" in the Dockerfile.? I am currently trying to set up a
> gerrit verification job with YAML DSL (Jenkins Job Builder) and could
> combine both spikes.

That would be much appreciated.  I think the biggest missing part is
figuring out some helper scripts to easily 'run' that Docker container
with the related arguments.  I guess a given jenkins job then should
only call that helper script and pass the "configure" arguments and some
environment variables like our PARALLEL_MAKE?  I wouldn't want to
clutter the jenkins job definitions with repetitive, long hand-crafted
'docker run' commands.
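
Such a helper could boil down to something like the following (image
name, mount point and the in-container build script are my assumptions;
the leading 'echo' makes it a dry run for illustration):

```shell
# Hypothetical wrapper so jenkins jobs never spell out 'docker run':
osmo_docker_build() {
    local project="$1"; shift       # remaining args go to ./configure
    # drop the leading 'echo' for real use
    echo docker run --rm \
        -e PARALLEL_MAKE="${PARALLEL_MAKE:--j4}" \
        -v "$PWD:/src" \
        osmocom:gerrit-libosmo \
        /build/build.sh "$project" "$@"
}

# a jenkins job would then only call, e.g.:
osmo_docker_build osmo-msc --disable-smpp
```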

The next question is then where to store all of this.  Given that this
helper script as well as the Dockerfile is quite generic, it should
probably go into osmo-ci.  But then, the Dockerfile depends on stuff from
docker-playground.  We could merge the two or keep the Dockerfile in
docker-playground and just put the helper script in osmo-ci?  Actually,
the part of the script that's running inside the container could be
included in the image at 'docker build' time.  Only the 'docker run'
wrapper that's used to start a container is external.

In any case, from my point of view a given jenkins gerrit job should do:
* git fetch && git checkout -f -B master origin/master on osmo-ci and
  docker-playground, to make sure we catch any updates to those
* 'docker build' of the respective docker image
* 'docker run' by means of some helper script, using the respective
  arguments / build matrix options as required by the given job/project,
  as well as the exact git commit we want to test-build (instead of
  master)
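
The three steps above, sketched as a job script (repository layout, the
image tag and the helper name are assumptions; the run() wrapper just
prints each command for illustration):

```shell
run() { echo "$@"; }    # dry-run: print instead of execute

job_steps() {
    local repo
    for repo in osmo-ci docker-playground; do
        run git -C "$repo" fetch origin
        run git -C "$repo" checkout -f -B master origin/master
    done
    run docker build -t osmocom:gerrit-libosmo docker-playground
    # then hand off to the hypothetical 'docker run' helper with the
    # exact gerrit revision instead of master:
    run osmo-build.sh "$PROJECT" "$GERRIT_REVISION"
}

PROJECT=osmo-msc GERRIT_REVISION=refs/changes/12/345/6 job_steps
```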

The current image should work for
{openbsc,osmo-{bsc,bts,pcu,mgw,sgsn,ggsn,trx,hlr},openggsn}-gerrit jobs

For the library projects {libasn1c,libsmpp34,libosmo{core,-abis,-netif,-sccp}}
and others which have some downstream build dependencies, we could also
use the same docker base image.  However, some additional concerns:
* when building e.g. libosmocore on a container that already has a
  system-wide installation of libosmocore, we could accidentally use
  include files from the system (/usr/local/include), rather than those
  of the current branch/commit that we're trying to build.  One more
  reason not to install into /usr/local but into specific prefixes (see
  above) and then tell each given build which of the prefixes to use or
  not, depending on its build dependencies
* we might want to test to build (some of?) the downstream dependencies,
  as e.g. a commit in libosmocore might break osmo-bts.

Any help / work in the above areas (and anything I might have missed) is
much appreciated.  I won't have any more time to work on this, too many
other topics going on :/

Regards,
	Harald
-- 
- Harald Welte <laforge at gnumonks.org>           http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
                                                  (ETSI EN 300 175-7 Ch. A6)


