How are you handling building local Debian/Ubuntu packages?

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?
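
To make the “pin them in apt” part concrete, here is roughly the kind of thing I mean; the origin label and priority are just an illustration (the origin would be whatever the internal repo’s Release file declares):

    # Hypothetical pin, written to /etc/apt/preferences.d/90-local-overrides:
    # prefer packages from the internal repository over the distro archive
    # (priority 600 beats the archive default of 500).
    printf '%s\n' \
        'Package: *' \
        'Pin: release o=apt.example.internal' \
        'Pin-Priority: 600' \
        | sudo tee /etc/apt/preferences.d/90-local-overrides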

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

12 thoughts on “How are you handling building local Debian/Ubuntu packages?”

      1. While we have a fairly different organisation of package builds, I think you can achieve what you need using OBS branches.

        Basically, what we’re doing is the following. First of all, we have an OBS project per distro component. In Apertis, it’s target, development, sdk, hmi, helper-libs. So the projects will be: apertis:18.03:target, apertis:18.03:development and so on. This way we separate different releases or different distributions.

        Each project has associated “internal” OBS repositories, which aren’t in APT format but are used by OBS to bootstrap chroots or provide build dependencies. Should we need a full rebuild, we just add an extra repository to each project and let OBS build all packages. Those “extra” repositories are not normally exported as APT trees.

        For development, we have two approaches. First of all, branches, the mechanism I mentioned above. If you want to test an update for a package, you branch it, creating a new private project home:andrewsh:apertis:18.03:target/bash, for example. It uses the build dependencies from the parent, but doesn’t publish the build results, so you can try things out, download binaries if you wish, test them locally, and test other packages building against this one (you need to branch them to your private project too). When you’re ready, you submit a merge request, which commits your changes into the main projects, optionally deleting your copy. (Just to make it clear, branching here resembles Subversion much more than, say, Git.) A rough sketch of the corresponding osc commands is at the end of this comment.

        Another way is :snapshots. We have a bunch of packages in Git, and a Jenkins instance. Jenkins builds heads of certain branches (normally, master or apertis/master, and branches for the recent releases) and when builds in a controlled environment succeed, it branches the package into, say, apertis:18.03:snapshots, commits the automatically built version-bumped source package there, leaving it for OBS to build. If the commit is properly tagged, it prepares a release, which basically means it doesn’t put a Git hash in the version number and the changelog, and submits a merge request.

        Well, that’s mostly it, but obviously that’s not the only possible workflow.
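
        To make the branching flow more concrete, with the stock osc client it looks roughly like the following (project and repository names are illustrative, and our branch naming differs a bit from osc’s default home:$USER:branches:… scheme):

            # Branch the package into a private project and check it out.
            osc branch apertis:18.03:target bash
            osc checkout home:$USER:branches:apertis:18.03:target bash
            cd home:$USER:branches:apertis:18.03:target/bash
            # Hack on the packaging, then optionally do a local test build;
            # the repository/arch names depend on the project's configuration.
            osc build default x86_64
            # Commit so OBS builds it server-side, then ask for it to be merged back.
            osc commit -m "test a fix"
            osc submitrequest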

  1. I’ve just started poking at that for Passbolt packages (https://passbolt-packages.liip.ch/) at work, and the “don’t reinvent the wheel, use tools my coworkers are familiar with, season with Debian knowledge” approach results in the following scheme:
    * a “passbolt-debify” repository uses drifter (https://github.com/liip/drifter) to set up a real host (LXC container) easily (and in a way that my coworkers are used to).
    * On GitLab CI, the LXC container is spun up, some standard setup is done, and then a sid pbuilder base tgz is built if it isn’t there yet (it isn’t being updated yet, but it should be); then packages are fetched and built:
    * git clone in a temporary directory, gbp buildpackage -S to get the upstream source; then pdebuild to build the packages
    * within GitLab CI, a repository-specific SSH key is configured and used to dput the packages to an inoticoming-powered host, which only allows pushes from that key.
    * reprepro puts the packages in an `unstable-ci` suite.

    My plan is then to build manually in proper sbuild chroots, GPG-sign and upload for non-CI branches, and to manage migrations within reprepro, but the first goal was to get packages from git to .deb with as few manual actions from me as possible.

    It’s bulky, undocumented, and error-prone, but gets the job done.

    If I were to do it for more packages, I’d probably spin up OBS or https://debomatic.github.io/ , but you know how things go: when you have a hammer (GitLab, vagrant, bash), everything looks like a nail. :-)
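
    For the curious, the build part of each CI job boils down to something like this (the repository URL and the dput target name are placeholders; the real jobs have a bit more glue and error handling):

        set -e
        workdir=$(mktemp -d)
        git clone https://example.com/passbolt/some-package.git "$workdir/some-package"
        cd "$workdir/some-package"
        # Source package (upstream tarball + debian/) via git-buildpackage.
        gbp buildpackage -S -us -uc
        # Binary build in the clean sid pbuilder chroot; results land next to the source.
        pdebuild --buildresult ..
        # Push to the inoticoming-fed host; "ci-incoming" is a target in ~/.dput.cf
        # that uses the repository-specific SSH key.
        dput ci-incoming ../*_amd64.changes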

  2. We have about twenty packages that we build ourselves; of those, only two are original to us. The others are things that we have specific tweaks for, or are the result of packaging things that other people maintain and don’t have good repos for (odd CPAN modules, for instance).

    There’s no unified build process; we keep two apt repos with apt-ftparchive scripts to update them, and move debs from the testing repo to the production repo when we’re happy.
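
    The update scripts are essentially a thin wrapper around apt-ftparchive, roughly along these lines (the paths and suite names here are invented):

        cd /srv/apt/testing
        apt-ftparchive packages pool > dists/testing/main/binary-amd64/Packages
        gzip -9c dists/testing/main/binary-amd64/Packages > dists/testing/main/binary-amd64/Packages.gz
        apt-ftparchive release dists/testing > dists/testing/Release
        # "Promotion" is then just copying the vetted .debs from the testing pool
        # into the production tree and regenerating its indices the same way.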

  3. It’s not directly at work, but for my own packages, I’ve implemented a set of scripts and Makefiles to make it easier to package stuff.

    Repository:
    https://github.com/kakwa/amkecpak

    Documentation:
    http://amkecpak.readthedocs.io/en/latest/index.html

    The motivation and goals are listed here:
    http://amkecpak.readthedocs.io/en/latest/index.html#motivation

    I’m not very proud of the implementation quality, but it gets the job done.

    At work, I’ve used a derivative of it. This was quite successful for several reasons:
    * initializing a package skeleton (.spec or debian/ dir) is quite easy, just run the init_pkg.sh script.
    * it’s easy to recover the upstream sources (it’s a simple wget of the .tar.gz generally).
    * it’s easy to remember simple commands like “make deb”, and “make rpm” (far easier than rpmbuild -ba […] some.spec, or dpkg-buildpackage -us -uc).
    * it’s easy to package a new version (just update the VERSION variable in one file and you are set).
    * it’s nearly self-contained: check out the repo, copy-paste the apt install command and you can begin to work.
    * the INs (source recovery) and OUTs (package(s) built) are clearly defined and intuitive (e.g. a simple out/ directory for the OUTs).

    Having predictable OUTs makes it possible to chain the builds of the individual components inside a larger build (a complete repo or an install ISO, for example); a sketch of this chaining is at the end of this comment.

    As for the INs, at home, I directly point to the upstream project to download the source archives.

    At work, I prefer to have some kind of artifact management (I have to be able to rebuild the package even if upstream disappears). In its crudest form, the artifact manager is an Apache server serving the archives, with a normalized directory tree managed by hand/scp, which looks like:

    ├── soft1
    │   ├── 0.0.1
    │   │   └── soft1_0.0.1.tar.gz
    │   ├── 0.0.2
    │   │   └── soft1_0.0.2.tar.gz
    │   └── 0.1.0
    │       └── soft1_0.1.0.tar.gz
    └── soft2
        ├── 0.42.0
        │   └── soft2_0.42.0.tar.gz
        └── 10.42.0
            └── soft2_10.42.0.tar.gz

    I plan to implement a less crude artifact manager, but I haven’t found the motivation/time to start this project yet.

    If this is put in place early on in a project, the result is generally quite good: as it’s the path of least resistance, everybody follows the same normalized pattern, and there is less need to enforce gatekeeping and reviews.
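
    As for the chaining mentioned above, the outer build is little more than a loop over the components; the loop and the reprepro step below are a simplified illustration (the install-ISO case is similar, it just consumes the out/ directories instead):

        set -e
        for pkg in soft1 soft2; do
            make -C "$pkg" deb            # each component drops its .debs in $pkg/out/
        done
        # Collect everything into one local repository.
        for deb in */out/*.deb; do
            reprepro -b /srv/apt includedeb stable "$deb"
        done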

  4. At work we use GitHub (Enterprise), which triggers Jenkins jobs (autogenerated via jenkins-job-builder). Jenkins builds Debian packages in multiple steps for different distros with jenkins-debian-glue, then signs and uploads them into aptly-based Debian repositories. Packages are installed and/or rolled back using Ansible playbooks on the target machines. Sometimes we also backport newer versions of external software or package upstream software ourselves (using the same toolchain/processes). It works well, but it is always hard for people to wrap their heads around building Debian packages if it’s not their daily business.
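
    A stripped-down version of the aptly side looks roughly like this (repository, distribution and key names are invented; the real steps live in the generated Jenkins jobs):

        # One-time setup: create the local repo, add the first build's .debs, publish signed.
        aptly repo create -distribution=stretch -component=main internal
        aptly repo add internal build-output/*.deb
        aptly publish repo -gpg-key=0xDEADBEEF internal
        # Every subsequent Jenkins build: add the new .debs and refresh the published tree.
        aptly repo add internal build-output/*.deb
        aptly publish update -gpg-key=0xDEADBEEF stretch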

  5. For prod/testing, I am using this scheme: https://vincent.bernat.im/en/blog/2014-local-apt-repositories. You can add more stages if you need them (test, staging, prod for example). The per-platform repositories enable you to have different versions of the same software available. Today, I use aptly instead of reprepro but aptly can be quite slow. Not a clear win.

    As for building, we are also using Jenkins. For backports (or similar), I use gbp buildpackage-based repositories with a mixed layout, to keep things simple for others, and one branch per supported distribution. See for example: https://github.com/exoscale/pkg-qemu/tree/xenial.

    For in-house stuff, I provided some recipes to closely match what was possible with fpm. This works fine for us: we have almost migrated away from fpm, even though people were quite reluctant at the beginning because of the complexity. See https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging for more details.

    From all aspects, the most important point is to keep complexity down for non-Debian people. For example, for repository management, don’t use an incoming queue. If all your jobs are running with Jenkins, add an additional synchronous upload step. Every failure is then logged in Jenkins, like everything else. And for building packages, people will just fall back to tools like fpm if you try to make them read any of our tutorials for Debian packaging.
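
    To illustrate the synchronous upload step: instead of dput into an incoming queue, the Jenkins job pushes the files and ingests them in the same shell step, so any failure shows up directly in the build log (the host, paths and suite below are placeholders):

        set -e
        changes=$(ls build/*_amd64.changes)
        rsync -a build/ apt@repo.example.internal:/srv/staging/
        ssh apt@repo.example.internal \
            reprepro -b /srv/apt include internal-testing "/srv/staging/$(basename "$changes")"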
