Development infrastructure
In order to keep my development setup simple and robust, I changed my development infrastructure.
- For package test builds, I decided to move to `schroot`.
- For package builds, I decided to move to `sbuild` (a `schroot` wrapper).
- For package testing of each GUI setup, I decided to set up KVM.

(I was suffering sudden cowbuilder failures. That was another motivation.)
Planned infrastructure includes:
- APT proxy on the host WS.
- Avahi mDNS environment which allows use of the domain name “local”.
- `sbuild` infrastructure for clean chroot package builds.
- `schroot` infrastructure for fully populated chroot package test builds.
- VM on KVM as an alternative package build environment (non-GUI).
- VM on KVM for manual GUI package testing.
- SSH connections among the host WS and guest VM systems.
- Mini private APT repo (personal DEB package repository) to satisfy dependencies as needed.
APT proxy on host WS
In order to save network traffic, I decided to set up an APT proxy on the host WS.
There are 4 good candidates.
- apt-cacher: This needs to be installed with a web server. Older code in Perl (but newer than apt-proxy). Being replaced by apt-cacher-ng.
- apt-cacher-ng: Newer and well maintained code in C++.
  - This package is the most popular one.
  - This package had some problems in 2010.
  - The bookworm (testing) version seems to be in very good shape.
- approx: Less documentation, but its simplicity seems to make it more robust.
- squid-deb-proxy: This needs a generic proxy server (no way to read cached package data).
It looks like `apt-cacher-ng` is the one for me to use.
- Execute: `apt install apt-cacher-ng`
- “Set up once” -> “keep”… -> “Allow HTTP tunnels…”: Yes
Set up proxy for the local APT
There are 2 interesting packages.
- auto-apt-proxy – not mDNS compatible – https://salsa.debian.org/debian/auto-apt-proxy
- squid-deb-proxy-client – mDNS compatible – https://launchpad.net/squid-deb-proxy

WARN: Installing the `auto-apt-proxy` package without a functioning proxy may cause problems.
For my purpose, the proxy always lives on the host WS, which is the gateway for the guest VMs (KVM). Simply installing `auto-apt-proxy` suffices to set up the APT proxy for them. For the host WS itself and for chroot environments on it (including ones created through `schroot` and `sbuild`), the proxy is always local, so I simply add `/etc/apt/apt.conf.d/00proxy` as:
Acquire::http { Proxy "http://localhost:3142"; }
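To verify that APT picks up the proxy setting, a quick check (not part of the original setup notes) should show the value:
$ apt-config dump | grep -i proxy
Acquire::http::Proxy "http://localhost:3142";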
Permission and ownership
The permission and ownership of `/var/cache/apt-cacher-ng/` is `drwxr-sr-x` (2755) with `apt-cacher-ng:apt-cacher-ng`.
If you reinstall the system, be careful with the UID and GID, since they may be different on the reinstalled system.
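If they drifted after a reinstall, a sketch for restoring the values above:
$ sudo chown -R apt-cacher-ng:apt-cacher-ng /var/cache/apt-cacher-ng
$ sudo chmod 2755 /var/cache/apt-cacher-ng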
If package files in the upstream repo are replaced by different files with the same name, this creates caching problems for `apt-cacher-ng`. Non-Debian repositories tend to do this. If you experience strange cached-data issues, erase the whole data under `/var/cache/apt-cacher-ng/` for that repo to reset the situation.
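A sketch of that reset, assuming the per-repository cache subdirectory is named after the repository host (the directory name below is hypothetical):
$ sudo systemctl stop apt-cacher-ng
$ sudo rm -rf /var/cache/apt-cacher-ng/example.repo.org
$ sudo systemctl start apt-cacher-ng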
Sbuild infrastructure
Sbuild provides a clean binary package build environment using schroot as its backend.
Setup
Here, I follow https://wiki.debian.org/sbuild (I have made a few updates there, too).
$ sudo apt install sbuild piuparts autopkgtest lintian
$ sudo adduser osamu sbuild
Log out and log in again to check that you are a member of the `sbuild` group.
$ id
uid=1000(osamu) gid=1000(osamu) groups=1000(osamu),...,132(sbuild)
Let’s create the configuration file `~/.sbuildrc` in line with recent Debian practice https://wiki.debian.org/SourceOnlyUpload as:
cat >~/.sbuildrc << 'EOF'
##############################################################################
# PACKAGE BUILD RELATED (source-only-upload as default)
##############################################################################
# -d
$distribution = 'unstable';
# -A
$build_arch_all = 1;
# -s
$build_source = 1;
# --source-only-changes or not
# (1: for "dput", 0: for "dgit source-push")
$source_only_changes = 1;
# -v
$verbose = 1;
# parallel build
$ENV{'DEB_BUILD_OPTIONS'} = 'parallel=12';
##############################################################################
# POST-BUILD RELATED (turn off functionality by setting variables to 0)
##############################################################################
$run_lintian = 1;
$lintian_opts = ['-i', '-I'];
$run_piuparts = 1;
$piuparts_opts = ['--schroot', 'unstable-amd64-sbuild', '--no-eatmydata'];
$run_autopkgtest = 1;
$autopkgtest_root_args = '';
$autopkgtest_opts = [ '--', 'schroot', '%r-%a-sbuild' ];
##############################################################################
# PERL MAGIC
##############################################################################
1;
EOF
You can disable or enable each feature by assigning 0 or 1 to the corresponding variables. You can customize this further for your GPG key etc.
Currently, I have several variants and switch among them with a symlink.
$ ls -l .sbuild*
lrwxrwxrwx 1 osamu osamu 13 May 4 07:58 .sbuildrc -> .sbuildrc-src
-rw-r--r-- 1 osamu osamu 1377 May 4 07:58 .sbuildrc-bin
-rw-r--r-- 1 osamu osamu 1330 May 4 07:58 .sbuildrc-no-test
-rw-r--r-- 1 osamu osamu 1330 May 4 07:58 .sbuildrc-src
Then create the baseline chroot environment as:
If `apt-cacher-ng` is NOT used:
$ sudo sbuild-createchroot --include=eatmydata,ccache unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian
If `apt-cacher-ng` is used:
$ sudo sbuild-createchroot --include=eatmydata,ccache unstable /srv/chroot/unstable-amd64-sbuild http://127.0.0.1:3142/deb.debian.org/debian
(The difference can be resolved later by editing the `.../etc/apt/sources.list` file found under `/srv/chroot/unstable-amd64-sbuild`.)
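For example, with the proxy in use, the `sources.list` entry inside the chroot ends up pointing at the proxied URL (derived from the command above):
deb http://127.0.0.1:3142/deb.debian.org/debian unstable main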
Please note that the above creates `/etc/schroot/chroot.d/unstable-amd64-sbuild-$suffix` to be used by `schroot`. (This file is effectively a split-out piece of `/etc/schroot/schroot.conf`; it needs to be removed together with the chroot system directory tree under `/srv/chroot/unstable-amd64-sbuild` before rerunning `sbuild-createchroot` if you messed up the chroot contents.)
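A sketch of that cleanup before rerunning `sbuild-createchroot` (the generated file name carries a suffix, hence the glob):
$ sudo rm /etc/schroot/chroot.d/unstable-amd64-sbuild-*
$ sudo rm -rf /srv/chroot/unstable-amd64-sbuild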
TODO: decide on technical merits: eatmydata or tmpfs for /build etc.
Prior to every use, you need to keep it up to date.
$ sudo sbuild-update -udcar u
Package build with sbuild
For unstable
Package build from the source directory is simply:
$ sbuild
or, using a source package:
$ sbuild <packagename>.dsc
Hmmm… sbuild is now easier than pbuilder/cowbuilder and smartly uses the kernel’s bind-mount feature to create the build environment quickly.
For testing
Package build from the source directory is simply:
$ sbuild -c testing-amd64-sbuild
For stable
Package build from the source directory is simply:
$ sbuild -c stable-amd64-sbuild
gbp-buildpackage configuration
I changed the gbp-buildpackage configuration (`~/.gbp.conf`) to use sbuild.
[DEFAULT]
# the default build command:
builder = sbuild -A -s --source-only-changes -v -d unstable
...
(Actually, with the above `~/.sbuildrc`, we should be able to skip the options after `sbuild` here.)
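So a minimal variant would be (an untested sketch, relying on `~/.sbuildrc` for all the options):
[DEFAULT]
# the default build command:
builder = sbuild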
The source chroot environment used by sbuild
In order to access the minimum unstable chroot environment used by sbuild, use the standard command.
$ sudo sbuild-shell u
In order to update the minimum unstable chroot environment used by sbuild, use the standard command.
$ sudo sbuild-update -udcar u
In order to access the minimum oldstable chroot environment used by sbuild, use the standard command.
$ sudo sbuild-shell o
In order to update the minimum oldstable chroot environment used by sbuild, use the standard command.
$ sudo sbuild-update -udcar o
Reminder for the `adduser` package in the source chroot environment used by sbuild
The above update method for `schroot` generally works in normal situations. But when an essential package becomes a non-essential one (e.g., `adduser`), this causes a problem with the `piuparts` test. Make sure to remove such a package manually from the source chroot environment.
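For example, a sketch of that manual removal for the unstable source chroot, using the commands above:
$ sudo sbuild-shell u
# apt-get purge adduser
# exit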
Schroot infrastructure (shared data, persistent)
For package development, creating the test build environment from the latest minimal chroot environment every time is time consuming. For this situation, it’s better to have all the required packages installed and to have access to the data in your home directory.
Let’s create package pre-loaded chroot environments which can run X apps, using schroot directly with a configuration profile based on `/etc/schroot/desktop/`.
schroot-shell
Since this is a somewhat repetitive task for oldstable, stable, testing, and unstable, I created a simple shell script:
$ schroot-shell -h
NAME
schroot-shell - start a shell in chroot with current home data.
SYNOPSIS
schroot-shell [DIST [CONFIG]]
schroot-shell --config [CONFIG]
schroot-shell [--init|--prep|--update] [DIST [CONFIG]]
DESCRIPTION
This wrapper command of *schroot* offers a set of helper commands to work in
the persistent pre-loaded (source) chroot system environment while sharing the
user data from the user's home directory of the host system.
Here, if 'CONFIG' and 'DIST' are missing in the command line, 'desktop' for
'CONFIG' and 'unstable' for 'DIST' are used as the default values. Normally,
you skip specifying 'CONFIG' to use its default value. This command
automatically becomes root when needed.
The *schroot-shell* command is normally invoked as:
schroot-shell [DIST [CONFIG]]
This starts an interactive shell in the 'DIST-amd64-CONFIG' source chroot
managed by the *schroot* command while bind mounting the user data from the
user's home directory of the host system. This is somewhat modeled after
*sbuild-shell*. While *sbuild-shell* is designed to work with the session
chroot and the changes made from *sbuild-shell* to the chroot are discarded,
the changes made from *schroot-shell* to the chroot are persistent.
In order for the *schroot-shell* command to function, you need to set up the
'DIST-amd64-CONFIG' source chroot properly.
The 'DIST-amd64-CONFIG' source chroot can be initialized in 3
steps.
1. Create the setup script configuration files for *schroot* under
'/etc/schroot/CONFIG/' as explained in the 'FILES' section under 'Setup
script configuration' in schroot.conf(5) for '/etc/schroot/default'.
schroot-shell --config [CONFIG]
2. Customize the setup script configuration files for *schroot* at
'/etc/schroot/CONFIG/fstab' to list all mounted devices in your home
directory.
3. Create a chroot file system for 'DIST' at
'/srv/chroot/DIST-amd64-CONFIG' using the setup script configuration
files in '/etc/schroot/CONFIG'.
schroot-shell --init [DIST [CONFIG]]
Here, 'DIST' specifies the base distribution of the chroot system. 'DIST'
can be 'unstable' (default), 'testing', 'stable', 'oldstable', or
'oldoldstable'. These can be shortened as 'u', 't', 's', 'o', or 'oo'
in the command line.
The 'DIST-amd64-CONFIG' source chroot can be updated to the latest
state by:
schroot-shell --update [DIST [CONFIG]]
The 'DIST-amd64-CONFIG' source chroot can be prepped up with a few
pre-defined extra packages desirable for the shell activities by:
schroot-shell --prep [DIST [CONFIG]]
For 'CONFIG', values 'default', 'minimal', 'buildd', and 'sbuild'
should be avoided unless you know exactly what happens, since these setup
scripts are already used by *sbuild* and *schroot*.
The *schroot-shell* command offers a generalized combination of the
functionality of *sbuild-createchroot*, *sbuild-update*, and *sbuild-shell* for
the persistent chroot shell environment instead of the session one.
Parameters:
TYPE = 'source'
DIST = 'unstable'
ARCH = 'amd64'
CONFIG = 'desktop'
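Based on the help text above, a plausible first-time sequence is (a sketch; per the help text, the script becomes root by itself when needed):
$ schroot-shell --config
$ sudo editor /etc/schroot/desktop/fstab
$ schroot-shell --init t
$ schroot-shell t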
Customizing /etc/schroot/desktop/fstab
I customized my desktop profile at `/etc/schroot/desktop/fstab` as follows (see the illustrative excerpt below):
- Explicitly bind mount subdirectories under my home directory which are on different devices (including Btrfs subvolumes).
- Bind mount `/root`.
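An illustrative excerpt of such an `/etc/schroot/desktop/fstab` (the home subdirectory is hypothetical; bind mounts are not recursive, hence the explicit per-device entries):
# <file system>   <mount point>     <type>  <options>  <dump>  <pass>
/home             /home             none    rw,bind    0       0
/home/osamu/pub   /home/osamu/pub   none    rw,bind    0       0
/root             /root             none    rw,bind    0       0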
Access to the persistent chroot environment as user
In order to access the pre-loaded unstable chroot environment, I simply type:
$ schroot-shell
In order to update the pre-loaded unstable chroot environment, I simply type:
$ schroot-shell -u
In order to access the pre-loaded oldstable chroot environment, I simply type:
$ schroot-shell o
In order to update the pre-loaded oldstable chroot environment, I simply type:
$ schroot-shell -u o
Chroot update optimization
The slow fsync(2) process problem caused by `dpkg` and the SSD wear problem can be avoided.
Via eatmydata
Add the following line to `/etc/schroot/chroot.d/unstable-amd64-sbuild-$suffix` or similar to speed up the package installation process.
command-prefix=eatmydata
Via tmpfs
Add the following to my `/etc/fstab`:
# For speeding up sbuild/schroot and prevent SSD wear-out
none /var/lib/schroot/session tmpfs uid=root,gid=root,mode=0755 0 0
none /var/lib/schroot/union/overlay tmpfs uid=root,gid=root,mode=0755 0 0
none /var/lib/sbuild/build tmpfs uid=sbuild,gid=sbuild,mode=2770 0 0
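To activate these entries without a reboot (assuming no schroot sessions or builds are currently active):
$ sudo mount /var/lib/schroot/session
$ sudo mount /var/lib/schroot/union/overlay
$ sudo mount /var/lib/sbuild/build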
Here, I note a few notable directories:
- `/srv/chroot`: directory containing the whole initial chroot environment.
- `/var/lib/schroot/union/overlay`: directory where additional packages are installed during the build process.
- `/var/lib/sbuild/build`: directory where the source package is unpacked and built.

Please be aware that these directories may eat up disk space; since tmpfs is backed by memory, you should have enough DRAM.
Benchmark results
Very rough benchmarks were performed to see the impact of eatmydata, tmpfs, and parallel builds.
System used:
- AMD Ryzen 5 PRO 4650U with Radeon Graphics (Units/Processor: 12)
- TS2TMTE220S (2TB NVMe SSD) – used
- WDC PC SN520 SDAPMUW-128G-1001 (128GB NVMe SSD)
- Wi-Fi (802.11ac, 5 GHz) to an optical connection (about 3 MB/s)
- Source:
- DR: debian-reference source (original; does not build in parallel)
- DR': debian-reference source (modified to build in parallel)
- GIT: git source
Result:
source | eatmydata | tmpfs | parallel | real | user | system | Note |
---|---|---|---|---|---|---|---|
DR | no | yes | 1 | 20m54s | 18m12s | 0m47s | Cold APT |
DR | yes | yes | 1 | 17m08s | 17m32s | 0m34s | |
DR | no | yes | 1 | 17m56s | 18m09s | 0m45s | |
DR | no | no | 1 | 19m24s | 18m12s | 1m21s | No speed-up |
DR | yes | no | 1 | 17m10s | 17m28s | 0m43s | |
DR' | yes | no | 24 | 5m02s | 31m05s | 0m54s | |
DR' | yes | yes | 24 | 5m02s | 31m05s | 0m54s | |
DR' | yes | no | 24 | 6m34s | 35m37s | 0m56s | |
GIT | yes | yes | 24 | 7m24s | 33m23s | 9m45s | Cool CPU |
GIT | yes | no | 24 | 7m51s | 38m18s | 12m00s | Hot CPU |
GIT | yes | yes | 24 | 7m21s | 35m08s | 12m34s | Hotter CPU |
Observations for real time:
- Cold APT cache – 10% slowdown (not much)
- Hot CPU – 7% slowdown (not much)
- Use of tmpfs – no impact
- Use of eatmydata – 10% speed-up (not much)
- Use of parallel builds – real time less than half (big gain, but the build log became cryptic)
Conclusion:
- Use of the APT cache is for reducing repository server load. (I have a fast connection.)
- Use of eatmydata is for skipping unnecessary waiting. (It may cause complications due to version mismatches, so I stopped using it.)
- Use of tmpfs is more for a longer SSD life. (I have lots of DRAM.)
Files used by sbuild and schroot-shell
The `sbuild` and `schroot-shell` commands use the following files:
- `~/.sbuildrc`: used by `sbuild`.
- `/etc/sbuild/chroot`: contains symlinks to the roots of the chroots used by `sbuild`.
- `/etc/schroot/chroot.d/`: contains configuration files for `schroot`, used by `sbuild` and `schroot-shell`.
- `/etc/schroot/desktop/`: contents customized for use by `schroot-shell`.
- `/srv/chroot/`: contains the roots of the chroots used by `sbuild` and `schroot-shell`.
Install VM infrastructure
- Execute: `apt install virt-manager gnome-boxes`
- Read /usr/share/doc/libvirt-daemon/README.Debian.gz and follow it.
$ sudo bash
# virsh net-start default
# virsh net-autostart default
# mkdir /etc/dnsmasq.d
# cat <<EOF >/etc/dnsmasq.d/00_libvirtd.conf
# only bind to loopback by default
interface=lo
bind-interfaces
EOF
# ^D
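To confirm that the default network is up (`virsh` comes from the libvirt-clients package; root connects to the system instance):
$ sudo virsh net-list --all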
Create minimal stable VM
(This needs to be updated for “bullseye”. It should be quite similar. I primarily use gnome-boxes now.)
- Download the ISO for Debian stable. (netinst: 337MiB)
- Start virt-manager
- Create a new virtual machine from Debian stable ISO without Desktop (Use
en_US).
- Name it “debian10” (automatic)
- Format: qcow2
- Capacity: 20GiB
- Mostly go with defaults
- Activate virtual network
- Guided - use entire disk
- Write disk - Yes
- Unset “Debian desktop environment” and “print server”, and set “SSH server”
- GRUB install to MBR: Yes,
/dev/vda
- Installation complete and GRUB screen -> VM started
- Shutdown VM
- Clone from “debian10” to “buster-mini”
- Start “buster-mini” and login to “root”
- `apt-get update && apt-get install vim nano aptitude screen gpm avahi-utils`
- Installation of the `gpm` package allows me to see the mouse pointer on the Linux console. This makes it easier to get out of fullscreen mode by moving the mouse cursor to the top center. (Sometimes it moves strangely, but it is OK for this purpose.)
- Installation of `avahi-utils` allows easy access to local hosts.
- Shut down “buster-mini” with `shutdown -h now`.
Creating GNOME VM
- Clone from “buster-mini” to “buster-GNOME”
- Start “buster-GNOME” and login to “root”
- `apt-get update && apt-get install task-gnome-desktop`
- If you get errors from aptitude, go to the host, fix the `apt-cacher-ng` repository, and then try again. (Don’t use `--fix-missing`.)
- Rebooting the VM got me a nice GNOME system.
Creating sid VM for development
- Clone from “debian10” to “sid-dev”
- Change `/etc/apt/sources.list`:
deb http://deb.debian.org/debian/ sid main
deb-src http://deb.debian.org/debian/ sid main
apt-get update && apt-get dist-upgrade && apt autoremove
apt-get install devscripts
VM housekeeping
Mixing gnome-boxes and virt-manager
You can add VMs created by `gnome-boxes` to `virt-manager` by selecting `QEMU/KVM User session` instead of `QEMU/KVM` in the File -> New connection menu.
Changing hostname
To change the hostname of a VM permanently, you need to edit `/etc/hostname` and `/etc/hosts` and reboot the VM. (The hostname of the running system can be set on the fly with `hostname <newhostname>`, but the update of the mDNS record seen from other VMs and the host WS needs a reboot of the VM.)
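A sketch of that permanent change as root (the new name is hypothetical; update the 127.0.1.1 line in /etc/hosts to match):
# echo sid-dev2 > /etc/hostname
# editor /etc/hosts
# reboot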
Setting up SSH
As long as the host WS and guest VMs have `avahi-utils` installed, we can use each host name with “`.local`” appended to access them using good old SSH. For example, between VMs:
osamu@sid-dev:~$ ssh sid-GNOME.local
Direct SSH login to the root account of a system is probably blocked. In that case, you must SSH into a user account first and use sudo/su to become root.
Use of an SSH key placed in the user’s home directory is a good idea. Typical key generation goes:
osamu@sid-dev:~$ ssh-keygen -t rsa
This should create:
- `id_rsa`: contains the private key
- `id_rsa.pub`: contains the public key
In order to give passwordless SSH access from an account on `sid-dev` to an account on `sid-GNOME`, you need to append the content of `id_rsa.pub` on `sid-dev` to `~/.ssh/authorized_keys` on `sid-GNOME`.
Please make sure to set proper permissions on the remote host:
osamu@sid-GNOME $ chmod 700 ~/.ssh
osamu@sid-GNOME $ chmod 600 ~/.ssh/authorized_keys
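The stock `ssh-copy-id` command automates the key append (and usually the permission setup as well):
osamu@sid-dev:~$ ssh-copy-id osamu@sid-GNOME.local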
Setting up sudo
The following configures `sudo` to allow the user “osamu” root privileges.
$ su
Password:
root@sid-dev:/home/osamu# /sbin/adduser osamu sudo
...
root@sid-dev:/home/osamu# cat >/etc/sudoers.d/passwordless <<EOF
# NO password for the primary user
%sudo ALL = (ALL) NOPASSWD: ALL
EOF
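To confirm the syntax of the new snippet is valid (visudo checks the included files too):
root@sid-dev:/home/osamu# visudo -c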
VM tools
I am not familiar with VM oriented tools yet. The following packages seem worth learning.
- https://libvirt.org/ supporting QEMU, KVM, Xen, OpenVZ, LXC, and VirtualBox.
  - `libvirt-clients` (`virsh` command etc.)
  - `virt-manager`: desktop application
  - `virtinst`: CLI equivalent of `virt-manager`
  - `libvirt-login-shell`: isolate user sessions using an LXC container
- http://libguestfs.org FAQ
  - `libguestfs-tools`: guest disk image management system
Helper script
I created short scripts and use them to initialize Debian VMs. They also take care of SSH keys: https://github.com/osamuaoki/vm-setup