This is a historical archive covering the years 2008-2021, before the migration to mailman3.
A maintained, still-updated archive of this list can be found at https://lists.osmocom.org/hyperkitty/list/OpenBSC@lists.osmocom.org/.
Holger Freyther holger at freyther.de

> On 07 Jun 2016, at 02:26, Neels Hofmeyr <nhofmeyr at sysmocom.de> wrote:
>
> Hi!
>
> One of the builds for this patch failed already.
>
> It seems the ssh is not retrying to connect but just failed ~12 hours ago and
> will sit there forever; so I removed the scheduled build.
>
> Alas, the next build (for the same patch) goes stuck the same way, so the build
> slave seems to be offline for reals. How to fix?

Sorry about that. I fixed it and it builds again, but it will break again as well.

Long story: at OsmoDevCon I upgraded Jenkins to a less vulnerable version. This required a JDK/JRE upgrade on our Debian 6.0/i386 build system (running via FreeBSD's Linux syscall compatibility layer), and somehow that still failed. So, in a rush, I moved the builds to the Ubuntu-based AMD64 builder that had been used for the asciidoc generation.

Now to the bad stuff. The VM/jail is not reboot-safe: on boot, /usr/local and other directories are not in the path, _and_ the VirtualBox disk image is a plain file in a filesystem with a quota. It runs out of quota because once a day the zfs-snap tool makes snapshots of all volumes (it can't exclude a specific one), which means that even removing files will not free up any space, since the snapshots keep referencing the old blocks.

The plan: Sysmocom has agreed to move the builder from my own machine to a newly rented one, and then I will use bhyve + a ZFS disk volume (a block device backed by ZFS), and the problem will be gone. The only issue is that I didn't have time for that the last two weekends.

holger

PS: I will probably write a small script to undo some of the work zfs-snap did every day.
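(The cleanup script mentioned in the PS is not included in the thread. As a rough sketch of what such a script might look like, the snippet below turns a list of snapshot names, as produced by `zfs list -H -t snapshot -o name`, into `zfs destroy` commands that can be reviewed before being piped to a shell. The dataset and snapshot names are made up for illustration; zfs-snap's actual naming scheme is not shown here.)

```shell
#!/bin/sh
# Hypothetical sketch: emit "zfs destroy" commands for snapshot names read
# from stdin, so the list can be inspected before anything is destroyed.
emit_destroys() {
    while IFS= read -r snap; do
        case "$snap" in
            *@*) printf 'zfs destroy %s\n' "$snap" ;;   # snapshot names contain '@'
            *)   echo "skipping non-snapshot: $snap" >&2 ;;
        esac
    done
}

# Example usage with made-up snapshot names (normally you would pipe in
# the output of: zfs list -H -t snapshot -o name -r <dataset>):
printf '%s\n' 'tank/jenkins@2016-06-06' 'tank/jenkins@2016-06-07' | emit_destroys
```

Piping the output through review first (instead of destroying directly in the loop) keeps an accidental `zfs destroy` of the wrong dataset from being a one-liner.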