Development system (1)

Date: 2021/04/03 (initial publish), 2022/08/23 (last update)



Development infrastructure

To keep my development setup simple and robust, I changed my development infrastructure.

(I was suffering from sudden cowbuilder failures; that was another motivation.)

Planned infrastructure includes:

APT proxy on host WS

To save network traffic, I decided to set up an APT proxy on the host WS.

There are 4 good candidates.

It looks like apt-cacher-ng is the one for me to use.

Set up proxy for the local APT

There are 2 interesting packages.

WARN: Installing the auto-apt-proxy package without a functioning proxy may cause problems.

For my purpose, the proxy always runs on the host WS, which is the gateway WS for the guest VMs (KVM). Simply installing auto-apt-proxy suffices to set up the APT proxy for them. For the host WS itself, and for chroot environments on the host WS (including those created through schroot or sbuild), the proxy is always local, so I simply add /etc/apt/apt.conf.d/00proxy as:

Acquire::http { Proxy "http://localhost:3142"; }

Permission and ownership

The permission and ownership of /var/cache/apt-cacher-ng/ is drwxr-sr-x (2755) with apt-cacher-ng:apt-cacher-ng.

If you reinstall the system, be careful with the UID and GID, since they may differ on the reinstalled system.

If package files in an upstream repo are replaced by different files with the same names, apt-cacher-ng runs into caching problems. Non-Debian repositories tend to do this. If you experience strange cached-data issues, erase all the data for that repo under /var/cache/apt-cacher-ng/ to reset the situation.
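The reset itself amounts to removing that repo's subtree from the cache. Here is a safe rehearsal against a throwaway directory (the repo subdirectory names debrep and uburep are hypothetical; on the real system, stop the apt-cacher-ng service before touching /var/cache/apt-cacher-ng/):

```shell
# Simulate wiping the cache of one problematic repo without touching
# the other repos (a demo directory stands in for /var/cache/apt-cacher-ng/)
cache=/tmp/acng-demo
mkdir -p "$cache/debrep" "$cache/uburep"
touch "$cache/debrep/stale.deb"
rm -rf "$cache/debrep"      # erase only the problematic repo's cache
ls "$cache"                 # the other repo's cache survives
```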

Sbuild infrastructure

Sbuild provides a clean binary-package build environment, using schroot as its backend.

Setup

Here, I follow https://wiki.debian.org/sbuild (I have made a few updates there, too).

$ sudo apt install sbuild piuparts autopkgtest lintian
$ sudo adduser osamu sbuild

Log out and log back in, then check that you are a member of the sbuild group.

$ id
uid=1000(osamu) gid=1000(osamu) groups=1000(osamu),...,132(sbuild)

Let’s create the configuration file ~/.sbuildrc in line with recent Debian practice https://wiki.debian.org/SourceOnlyUpload as:

cat >~/.sbuildrc << 'EOF'
##############################################################################
# PACKAGE BUILD RELATED (source-only-upload as default)
##############################################################################
# -d
$distribution = 'unstable';
# -A
$build_arch_all = 1;
# -s
$build_source = 1;
# --source-only-changes or not
# (1: for "dput", 0: for "dgit source-push")
$source_only_changes = 1;
# -v
$verbose = 1;
# parallel build
$ENV{'DEB_BUILD_OPTIONS'} = 'parallel=12';
##############################################################################
# POST-BUILD RELATED (turn off functionality by setting variables to 0)
##############################################################################
$run_lintian = 1;
$lintian_opts = ['-i', '-I'];
$run_piuparts = 1;
$piuparts_opts = ['--schroot', 'unstable-amd64-sbuild', '--no-eatmydata'];
$run_autopkgtest = 1;
$autopkgtest_root_args = '';
$autopkgtest_opts = [ '--', 'schroot', '%r-%a-sbuild' ];

##############################################################################
# PERL MAGIC
##############################################################################
1;
EOF

You can disable or enable each feature by assigning 0 or 1 to the corresponding variable. You can customize this further for your GPG key etc.
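For example, the signing key can be pinned in ~/.sbuildrc via the $key_id variable (see sbuild.conf(5)); the identity below is only a placeholder:

```perl
# GPG key used to sign the .dsc/.changes files (placeholder identity)
$key_id = 'Your Name <you@example.org>';
```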

Then create the baseline chroot environment as:

If apt-cacher-ng is NOT used:

$ sudo sbuild-createchroot --include=eatmydata,ccache unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian

If apt-cacher-ng is used:

$ sudo sbuild-createchroot --include=eatmydata,ccache unstable /srv/chroot/unstable-amd64-sbuild http://127.0.0.1:3142/deb.debian.org/debian

(The difference can be resolved later by editing the .../etc/apt/sources.list file found under /srv/chroot/unstable-amd64-sbuild.)
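The edit is mechanical: prefix the mirror host with the proxy address. A one-line sed sketch of the transformation:

```shell
# Rewrite a direct sources.list line into its apt-cacher-ng proxied form
direct='deb http://deb.debian.org/debian unstable main'
echo "$direct" | sed 's|http://|http://127.0.0.1:3142/|'
# -> deb http://127.0.0.1:3142/deb.debian.org/debian unstable main
```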

Please note that the above creates /etc/schroot/chroot.d/unstable-amd64-sbuild-$suffix for use by schroot. (This file is effectively a split-out piece of /etc/schroot/schroot.conf. If you mess up the chroot contents, remove it together with the chroot directory tree under /srv/chroot/unstable-amd64-sbuild before rerunning sbuild-createchroot.)

TODO: decide on technical merits: eatmydata or tmpfs for /build etc.

Keep the chroot up to date before every use:

$ sudo sbuild-update -udcar u

Package build with sbuild

For unstable

Building a package from the source directory is as simple as:

$ sbuild

or, using a source package:

$ sbuild <packagename>.dsc

Hmmm… sbuild is now easier than pbuilder/cowbuilder, and it smartly uses the kernel's bind-mount feature to create the build environment quickly.

For testing

Building a package from the source directory is as simple as:

$ sbuild -c testing-amd64-sbuild

gbp-buildpackage configuration

I changed gbp-buildpackage to use sbuild.

[DEFAULT]
# the default build command:
builder = sbuild -A -s --source-only-changes -v -d unstable
...

(Actually, with the above ~/.sbuildrc, we should be able to skip the options after sbuild here.)
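If the options after sbuild are indeed covered by ~/.sbuildrc, the gbp configuration entry would presumably shrink to:

```ini
[DEFAULT]
builder = sbuild
```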

Access to the source chroot environment used by sbuild as root

In order to access the minimum unstable chroot environment used by sbuild, use the standard command.

$ sudo sbuild-shell u

In order to update the minimum unstable chroot environment used by sbuild, use the standard command.

$ sudo sbuild-update -udcar u

In order to access the minimum oldstable chroot environment used by sbuild, use the standard command.

$ sudo sbuild-shell o

In order to update the minimum oldstable chroot environment used by sbuild, use the standard command.

$ sudo sbuild-update -udcar o

Schroot infrastructure (shared data, persistent)

For package development, creating a test build environment from the latest minimal chroot each time is time-consuming. In this situation, it's better to have all the required packages pre-installed and to have access to the data in your home directory.

Let’s create package-pre-loaded chroot environments which can run X apps, using schroot directly with a configuration profile based on /etc/schroot/desktop/.

schroot-shell

Since these are somewhat repetitive tasks for oldstable, stable, testing, and unstable, I created a simple shell script:

$ schroot-shell -h
NAME
    schroot-shell - start a shell in chroot with current home data.

SYNOPSIS
    schroot-shell [DIST [CONFIG]]
    schroot-shell --config [CONFIG]
    schroot-shell [--init|--prep|--upgrade] [DIST [CONFIG]]

DESCRIPTION

This wrapper command of *schroot* offers a set of helper commands to work in
the persistent pre-loaded (source) chroot system environment while sharing the
user data from the user's home directory of the host system.

Here, if 'CONFIG' and 'DIST' are missing in the command line, 'desktop' for
'CONFIG' and 'unstable' for 'DIST' are used as the default values.  Normally,
you skip specifying 'CONFIG' to use its default value.  This command
automatically becomes root when needed.

The *schroot-shell* command is normally invoked as:

   schroot-shell [DIST [CONFIG]]

This starts an interactive shell in the 'DIST-amd64-CONFIG' source chroot
managed by the *schroot* command while bind mounting the user data from the
user's home directory of the host system.  This is somewhat modeled after
*sbuild-shell*.  While *sbuild-shell* is designed to work with the session
chroot, and the changes made from *sbuild-shell* to the chroot are discarded,
the changes made from *schroot-shell* to the chroot are persistent.

In order for the *schroot-shell* command to function, you need to set up the
'DIST-amd64-CONFIG' source chroot properly.

The 'DIST-amd64-CONFIG' source chroot can be initialized in 3
steps.

  1. Create the setup script configuration files for *schroot* under
     '/etc/schroot/CONFIG/' as explained in the 'FILES' section under 'Setup
     script configuration' in schroot.conf(5) for '/etc/schroot/default'.

       schroot-shell --config [CONFIG]

  2. Customize the setup script configuration files for *schroot* at
     '/etc/schroot/CONFIG/fstab' to list all mounted devices in your home
     directory.

  3. Create a chroot file system for 'DIST' at
     '/srv/chroot/DIST-amd64-CONFIG' using the setup script configuration
     files in '/etc/schroot/CONFIG'.

       schroot-shell --init [DIST [CONFIG]]

    Here, 'DIST' specifies base distribution of the chroot system.  'DIST'
    can be 'unstable' (default), 'testing', 'stable', 'oldstable', or
    'oldoldstable'.  These can be shortened as 'u', 't', 's', 'o', or 'oo'
    in the command line.

The 'DIST-amd64-CONFIG' source chroot can be updated to the latest
state by:

   schroot-shell --update [DIST [CONFIG]]

The 'DIST-amd64-CONFIG' source chroot can be prepped up with a few
pre-defined extra packages desirable for the shell activities by:

   schroot-shell --prep [DIST [CONFIG]]

For 'CONFIG', values 'default', 'minimal', 'buildd', and 'sbuild'
should be avoided unless you know what exactly happens since these setup
scripts are already used by *sbuild* and *schroot*.

The *schroot-shell* command offers a generalized combination of the
*sbuild-createchroot*, *sbuild-update*, and *sbuild-shell* functionality for the
persistent chroot shell environment instead of the session one.

Parameters:
  TYPE    = 'source'
  DIST    = 'unstable'
  ARCH    = 'amd64'
  CONFIG  = 'desktop'

Customizing /etc/schroot/desktop/fstab

I customized my desktop profile at /etc/schroot/desktop/fstab as follows:

Access to the persistent chroot environment as user

In order to access the pre-loaded unstable chroot environment, I simply type:

$ schroot-shell

In order to update the pre-loaded unstable chroot environment, I simply type:

$ schroot-shell -u

In order to access the pre-loaded oldstable chroot environment, I simply type:

$ schroot-shell o

In order to update the pre-loaded oldstable chroot environment, I simply type:

$ schroot-shell -u o

Chroot update optimization

The slow fsync(2) calls caused by dpkg, and the resulting SSD wear, can be avoided.

Via eatmydata

Add the following line to /etc/schroot/chroot.d/unstable-amd64-sbuild-$suffix (or similar) to speed up the package installation process:

command-prefix=eatmydata

Via tmpfs

Add the following to /etc/fstab:

# For speeding up sbuild/schroot and prevent SSD wear-out
none /var/lib/schroot/session        tmpfs uid=root,gid=root,mode=0755 0 0
none /var/lib/schroot/union/overlay  tmpfs uid=root,gid=root,mode=0755 0 0
none /var/lib/sbuild/build           tmpfs uid=sbuild,gid=sbuild,mode=2770 0 0
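Note that tmpfs defaults to a size limit of half the physical RAM per mount; an explicit size= option can adjust this. For illustration (the figures below are arbitrary examples, not tested values):

```
none /var/lib/schroot/session        tmpfs uid=root,gid=root,mode=0755,size=1G 0 0
none /var/lib/sbuild/build           tmpfs uid=sbuild,gid=sbuild,mode=2770,size=16G 0 0
```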

Here, I note a few notable directories:

Please be aware that these tmpfs directories consume memory as they fill up, so you should have enough DRAM.

Benchmark results

Very rough benchmarks were performed to see the impact of eatmydata, tmpfs, and parallel builds.

System used:

Result:

source  eatmydata  tmpfs  parallel  real    user    system  Note
DR      no         yes    1         20m54s  18m12s  0m47s   Cold APT
DR      yes        yes    1         17m08s  17m32s  0m34s
DR      no         yes    1         17m56s  18m09s  0m45s
DR      no         no     1         19m24s  18m12s  1m21s   No speed-up
DR      yes        no     1         17m10s  17m28s  0m43s
DR'     yes        no     24        5m02s   31m05s  0m54s
DR'     yes        yes    24        5m02s   31m05s  0m54s
DR'     yes        no     24        6m34s   35m37s  0m56s
GIT     yes        yes    24        7m24s   33m23s  9m45s   Cool CPU
GIT     yes        no     24        7m51s   38m18s  12m00s  Hot CPU
GIT     yes        yes    24        7m21s   35m08s  12m34s  Hotter CPU

Observation for real:

Conclusion:

Files used by sbuild and schroot-shell

The sbuild and schroot-shell commands use the following files:

Install VM infrastructure

$ sudo bash
# virsh net-start default
# virsh net-autostart default
# mkdir /etc/dnsmasq.d
# cat <<EOF >/etc/dnsmasq.d/00_libvirtd.conf
# only bind to loopback by default
interface=lo
bind-interfaces
EOF
# ^D

Create minimal stable VM

(This needs to be updated for “bullseye”. It should be quite similar. I primarily use gnome-boxes now.)

Creating GNOME VM

Creating sid VM for development

deb http://deb.debian.org/debian/ sid main
deb-src http://deb.debian.org/debian/ sid main
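For reference, the same entry in the newer deb822 style would live in a file such as /etc/apt/sources.list.d/sid.sources (the file name is arbitrary):

```
Types: deb deb-src
URIs: http://deb.debian.org/debian/
Suites: sid
Components: main
```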

VM housekeeping

Mixing gnome-boxes and virt-manager

You can add VMs created by gnome-boxes to virt-manager by selecting "QEMU/KVM User session" instead of "QEMU/KVM" in the File -> New connection menu.

Changing hostname

To change the hostname of the VM permanently, you need to edit /etc/hostname and /etc/hosts and reboot the VM. (The hostname of the running system can be set on the spot with hostname <newhostname>, but updating the mDNS record seen from other VMs and the host WS requires a reboot of the VM.)
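The two edits can be scripted with sed; here is a sketch operating on copies under /tmp (the host names sid-dev and sid-test are hypothetical; real usage would target /etc directly, as root):

```shell
# Rewrite the old hostname in copies of /etc/hostname and /etc/hosts
old=sid-dev; new=sid-test
demo=/tmp/etc-demo; mkdir -p "$demo"
echo "$old" > "$demo/hostname"
printf '127.0.1.1\t%s\n' "$old" > "$demo/hosts"
sed -i "s/$old/$new/g" "$demo/hostname" "$demo/hosts"
cat "$demo/hostname"    # now reports the new name
```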

Setting up SSH

As long as the host WS and guest VMs have avahi-utils installed, we can use each hostname with ".local" appended to access them over good old SSH. For example, between VMs:

osamu@sid-dev:~$ ssh sid-GNOME.local

Direct SSH logins to the root account of a system are probably blocked. In that case, you must SSH into a user account first and use sudo/su to become root.

Using an SSH key placed in the user's home directory is a good idea. Typical key generation goes:

osamu@sid-dev:~$ ssh-keygen -t rsa

Setting up sudo

The following configures sudo to grant the user "osamu" root privileges.

 $ su
Password:
root@sid-dev:/home/osamu# /sbin/adduser osamu sudo
...
root@sid-dev:/home/osamu# cat >/etc/sudoers.d/passwordless <<EOF
# NO password for the primary user
%sudo ALL = (ALL) NOPASSWD: ALL
EOF
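Since the drop-in is written via a here-document, it is worth confirming that the content actually landed in the file (validating the syntax with visudo -c afterward is also prudent). A sketch against a temp file:

```shell
# Write the sudoers drop-in to a temp file and confirm the content landed
f=$(mktemp)
cat >"$f" <<'EOF'
# NO password for the primary user
%sudo ALL = (ALL) NOPASSWD: ALL
EOF
grep NOPASSWD "$f"      # the rule line should be echoed back
rm -f "$f"
```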

VM tools

I am not familiar with VM-oriented tools yet. The following packages seem worth learning.

Helper script

I created short scripts and use them to initialize Debian VMs. They also take care of SSH keys: https://github.com/osamuaoki/vm-setup
