Here is a series of memos from my attempts to use LXC/LXD on Debian 12 (bookworm).
What is LXC
What is LXD and Incus
- Upstream LXD (5.18 released 2023-09-20) is at https://github.com/canonical/lxd by Canonical
- Upstream Incus (0.1 released 2023-10-07) is at https://github.com/lxc/incus by linuxcontainers.org
- Incus is a modern, secure and powerful system container and virtual machine manager.
- Incus Tutorial
- Incus was created (2023-08-07) from LXD as a fork of LXD 5.16 (released 2023-07-20).
- The “Incus 0.1 has been released” announcement gives explanations of the technical changes.
- The command name has been changed to incus.
- Debian 12 LXD (5.0.2-5, packaged 2023-05-05) was created before LXD moved to Canonical and before linuxcontainers.org started Incus as a LXD fork.
- Debian LXD documentation: under /usr/share/doc/lxd/, especially README.Debian.
  - The command name is lxd for the Debian lxd package.
  - The command name is lxc for the Debian lxd-client package.
On Debian 12, there is no need to use the snap package mentioned in the LXD documentation from Ubuntu/Canonical, since the Debian package is a fairly recent one.
A future Debian release may migrate to Incus. So, to be future-proof, I should interact with instances only through /1.0/instances, as Incus does, when using Debian 12 LXD.
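For example, this endpoint can be exercised directly with the raw REST client built into the lxc command (lxc query, which appears in the command list quoted later in this memo); it returns a JSON list of instance URLs once instances exist:
$ lxc query /1.0/instances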
For now, I will focus on container usages on Debian 12.
Installation of LXD on Debian 12
On a Debian 12 system with its /var/lib on btrfs, I ran the following commands to install LXD.
$ sudo aptitude install lxc lxd lxd-tools
$ sudo adduser osamu lxd
$ sudo newusers group
$ sudo lxd init
... go with defaults
This seems to create the /var/lib/lxd/images/
directory.
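The configuration chosen by lxd init can be reviewed later by dumping it as YAML (the --dump flag appears in the lxd init -h output quoted later in this memo):
$ sudo lxd init --dump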
Tracing “First steps with LXD”
I traced “First steps with LXD” with some extra operations to check what is really happening.
Getting system images for LXD
Since the manpage for lxc-launch on Debian 12 is useless, let’s see:
$ lxc launch -h
Description:
Create and start instances from images
Usage:
lxc launch [<remote>:]<image> [<remote>:][<name>] [flags]
...
The full list of remotes is available by issuing lxc remote list. Notable ones are:
- images for all images at https://images.linuxcontainers.org/
- ubuntu for specifically Ubuntu images at https://cloud-images.ubuntu.com/releases
Let me create and start instances from images for “Ubuntu 22.04” and “Debian 12”.
$ lxc launch images:debian/12 debian-12
$ lxc launch ubuntu:22.04 ubuntu-2204
System images specified as ubuntu:22.04 or images:debian/12 are downloaded to /var/lib/lxd/images/. Each of these seems to be made of 2 files, one for the rootfs and another for templates(?), sharing the same hash value as a part of their file names. These images seem to be offered as squashfs. The hash values are listed by lxc image ls. The same hash values are used for the directory names under /var/lib/lxd/storage-pools/default/images/ too.
The first invocation of lxc launch for an image seems to download it, while subsequent invocations seem to use the previously downloaded image.
Local instance names are ubuntu-2204 and debian-12. They seem to be created under /var/lib/lxd/containers/. These instance names are listed by lxc ls and are also used for the directory names under /var/lib/lxd/storage-pools/default/containers.
Here: lxc launch ... = lxc init ... + lxc start ...
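In other words, the following sequence (the instance name debian-12b is just an illustrative choice) should end up in the same RUNNING state as a single lxc launch images:debian/12 debian-12b:
$ lxc init images:debian/12 debian-12b
$ lxc start debian-12b
$ lxc list debian-12b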
Inspect instances
Let me inspect instances.
$ lxc list
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| debian-12 | RUNNING | 10.30.104.128 (eth0) | fd42:2b6:d45:cc06:216:3eff:febf:4ec6 (eth0) | CONTAINER | 0 |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| ubuntu-2204 | RUNNING | 10.30.104.155 (eth0) | fd42:2b6:d45:cc06:216:3eff:fecc:8b51 (eth0) | CONTAINER | 0 |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
Let me stop these instances and inspect them:
$ lxc stop debian-12
$ lxc stop ubuntu-2204
$ lxc list
+-------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+------+------+-----------+-----------+
| debian-12 | STOPPED | | | CONTAINER | 0 |
+-------------+---------+------+------+-----------+-----------+
| ubuntu-2204 | STOPPED | | | CONTAINER | 0 |
+-------------+---------+------+------+-----------+-----------+
More detailed information can be obtained:
$ lxc info debian-12
Name: debian-12
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2023/10/11 10:59 JST
Last Used: 2023/10/12 05:08 JST
Let me start debian-12 and quickly inspect it twice in a row.
$ lxc start debian-12
$ lxc list
+-------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+------+------+-----------+-----------+
| debian-12 | RUNNING | | | CONTAINER | 0 |
+-------------+---------+------+------+-----------+-----------+
| ubuntu-2204 | STOPPED | | | CONTAINER | 0 |
+-------------+---------+------+------+-----------+-----------+
$ lxc list
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| debian-12 | RUNNING | 10.30.104.128 (eth0) | fd42:2b6:d45:cc06:216:3eff:febf:4ec6 (eth0) | CONTAINER | 0 |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
| ubuntu-2204 | STOPPED | | | CONTAINER | 0 |
+-------------+---------+----------------------+---------------------------------------------+-----------+-----------+
These changed results are due to slow network activation.
Let me disable the IPv6 network configuration. I need to restart the instance to activate this network configuration change.
$ lxc network set lxdbr0 ipv6.address none
$ lxc restart debian-12
$ lxc list
+-------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+----------------------+------+-----------+-----------+
| debian-12 | RUNNING | 10.30.104.128 (eth0) | | CONTAINER | 0 |
+-------------+---------+----------------------+------+-----------+-----------+
| ubuntu-2204 | STOPPED | | | CONTAINER | 0 |
+-------------+---------+----------------------+------+-----------+-----------+
The changing results of the last 2 repeated commands indicate that establishing the network address takes time.
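A small wait loop works around this (a sketch; the 10. prefix check assumes the default lxdbr0 IPv4 addressing seen above):
$ until lxc list debian-12 -c4 -fcsv | grep -q '10\.' ; do sleep 1; done
$ lxc list debian-12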
Inspect images
Let me inspect images
$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+-------------------------------+
| | 80496135241f | no | Debian bookworm amd64 (20231012_05:24) | x86_64 | CONTAINER | 94.20MB | Oct 12, 2023 at 8:07am (UTC) |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+-------------------------------+
| | b948dd91cd5a | no | ubuntu 22.04 LTS amd64 (release) (20231010) | x86_64 | CONTAINER | 435.08MB | Oct 10, 2023 at 10:20am (UTC) |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+-------------------------------+
Inspect filesystem
Let’s see how all these images and instances are stored under /var/lib/lxd
$ sudo btrfs subvolume list /var/lib/lxd|grep "var/lib/lxd"
ID 42668 gen 1952301 top level 22924 path var/lib/lxd/storage-pools/default
ID 43251 gen 1951314 top level 42668 path var/lib/lxd/storage-pools/default/images/b948dd91cd5a8da89f6dcd4949d7189f064cf6d4dc5bd70b7f9b7aff1883babf
ID 43284 gen 1952313 top level 42668 path var/lib/lxd/storage-pools/default/containers/debian-12
ID 43315 gen 1951563 top level 42668 path var/lib/lxd/storage-pools/default/containers/ubuntu-2204
ID 43369 gen 1952233 top level 42668 path var/lib/lxd/storage-pools/default/images/80496135241fade673df5db2b3a8d3fca280370b17578227892776839bbeb678
$ sudo ls -li /var/lib/lxd/storage-pools/default/images
total 0
256 drwx--x--x 1 root root 56 Oct 12 17:07 80496135241fade673df5db2b3a8d3fca280370b17578227892776839bbeb678
256 drwx--x--x 1 root root 56 Oct 10 19:20 b948dd91cd5a8da89f6dcd4949d7189f064cf6d4dc5bd70b7f9b7aff1883babf
$ sudo ls -li /var/lib/lxd/storage-pools/default/containers
total 0
256 d--x------ 1 231072 root 78 Oct 11 10:59 debian-12
256 d--x------ 1 root root 78 Oct 11 21:53 ubuntu-2204
I see that btrfs subvolumes are generated here. inode=256 means the root of a btrfs subvolume.
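This can be cross-checked with stat and btrfs subvolume show on the paths listed above (stat should report inode 256 for a subvolume root):
$ sudo stat --format='%i %n' /var/lib/lxd/storage-pools/default/containers/debian-12
$ sudo btrfs subvolume show /var/lib/lxd/storage-pools/default/containers/debian-12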
$ sudo ls -l /var/lib/lxd/images
total 541992
-rw-r--r-- 1 root root 680 Oct 12 17:07 80496135241fade673df5db2b3a8d3fca280370b17578227892776839bbeb678
-rw-r--r-- 1 root root 98775040 Oct 12 17:07 80496135241fade673df5db2b3a8d3fca280370b17578227892776839bbeb678.rootfs
-rw-r--r-- 1 root root 412 Oct 10 19:20 b948dd91cd5a8da89f6dcd4949d7189f064cf6d4dc5bd70b7f9b7aff1883babf
-rw-r--r-- 1 root root 456216576 Oct 10 19:20 b948dd91cd5a8da89f6dcd4949d7189f064cf6d4dc5bd70b7f9b7aff1883babf.rootfs
These seem to be the downloaded squashfs files.
$ sudo bash -c "ls -il /var/lib/lxd/storage-pools/default/images/80496135241f*"
total 4
257 -rw-r--r-- 1 root root 535 Oct 12 14:29 metadata.yaml
261 drwxr-xr-x 1 root root 154 Oct 12 14:29 rootfs
258 drwxr-xr-x 1 root root 42 Oct 12 14:29 templates
$ sudo bash -c "ls -l /var/lib/lxd/storage-pools/default/images/80496135241f*/rootfs"
total 24
lrwxrwxrwx 1 root root 7 Oct 12 14:25 bin -> usr/bin
drwxr-xr-x 1 root root 0 Sep 30 05:04 boot
drwxr-xr-x 1 root root 0 Oct 12 14:29 dev
drwxr-xr-x 1 root root 1570 Oct 12 14:29 etc
drwxr-xr-x 1 root root 0 Sep 30 05:04 home
lrwxrwxrwx 1 root root 7 Oct 12 14:25 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Oct 12 14:25 lib32 -> usr/lib32
lrwxrwxrwx 1 root root 9 Oct 12 14:25 lib64 -> usr/lib64
lrwxrwxrwx 1 root root 10 Oct 12 14:25 libx32 -> usr/libx32
drwxr-xr-x 1 root root 0 Oct 12 14:25 media
drwxr-xr-x 1 root root 0 Oct 12 14:25 mnt
drwxr-xr-x 1 root root 0 Oct 12 14:25 opt
drwxr-xr-x 1 root root 0 Sep 30 05:04 proc
drwx------ 1 root root 38 Oct 12 14:25 root
drwxr-xr-x 1 root root 0 Oct 12 14:25 run
lrwxrwxrwx 1 root root 8 Oct 12 14:25 sbin -> usr/sbin
drwxr-xr-x 1 root root 0 Oct 12 14:25 srv
drwxr-xr-x 1 root root 0 Sep 30 05:04 sys
drwxrwxrwt 1 root root 0 Oct 12 14:25 tmp
drwxr-xr-x 1 root root 116 Oct 12 14:25 usr
drwxr-xr-x 1 root root 90 Oct 12 14:25 var
$ sudo ls -l /var/lib/lxd/storage-pools/default/containers/debian-12
total 8
-r-------- 1 root root 3136 Oct 12 19:38 backup.yaml
-rw-r--r-- 1 root root 535 Oct 10 14:28 metadata.yaml
drwxr-xr-x 1 root root 154 Oct 10 14:28 rootfs
drwxr-xr-x 1 root root 42 Oct 10 14:28 templates
$ sudo ls -l /var/lib/lxd/storage-pools/default/containers/debian-12/rootfs
total 24
lrwxrwxrwx 1 root root 7 Oct 10 14:25 bin -> usr/bin
drwxr-xr-x 1 root root 0 Sep 30 05:04 boot
drwxr-xr-x 1 root root 0 Oct 10 14:28 dev
drwxr-xr-x 1 root root 1570 Oct 10 14:28 etc
drwxr-xr-x 1 root root 0 Sep 30 05:04 home
lrwxrwxrwx 1 root root 7 Oct 10 14:25 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Oct 10 14:25 lib32 -> usr/lib32
lrwxrwxrwx 1 root root 9 Oct 10 14:25 lib64 -> usr/lib64
lrwxrwxrwx 1 root root 10 Oct 10 14:25 libx32 -> usr/libx32
drwxr-xr-x 1 root root 0 Oct 10 14:25 media
drwxr-xr-x 1 root root 0 Oct 10 14:25 mnt
drwxr-xr-x 1 root root 0 Oct 10 14:25 opt
drwxr-xr-x 1 root root 0 Sep 30 05:04 proc
drwx------ 1 root root 38 Oct 10 14:25 root
drwxr-xr-x 1 root root 0 Oct 10 14:25 run
lrwxrwxrwx 1 root root 8 Oct 10 14:25 sbin -> usr/sbin
drwxr-xr-x 1 root root 0 Oct 10 14:25 srv
drwxr-xr-x 1 root root 0 Sep 30 05:04 sys
drwxrwxrwt 1 root root 74 Oct 12 19:44 tmp
drwxr-xr-x 1 root root 116 Oct 10 14:25 usr
drwxr-xr-x 1 root root 90 Oct 10 14:25 var
I see a btrfs subvolume is used for each file tree of images and instances.
Hmmm…. debian-12 is owned by 231072 while ubuntu-2204 is owned by root for the directories under /var/lib/lxd/storage-pools/default/containers. The uid=231072 is found in /etc/subuid as:
_lxd:231072:10000001
root:231072:10000001
This directory ownership changes:
- from root to 231072 when the container state is changed from STOPPED to RUNNING.
- from 231072 to root when the container state is changed from RUNNING to STOPPED.
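This can be observed directly on the paths above (a sketch; the expected uid values follow from the /etc/subuid mapping quoted above):
$ lxc stop debian-12
$ sudo stat --format='%u %n' /var/lib/lxd/storage-pools/default/containers/debian-12   # expect 0 (root)
$ lxc start debian-12
$ sudo stat --format='%u %n' /var/lib/lxd/storage-pools/default/containers/debian-12   # expect 231072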
I also realized that the /etc/passwd files inside the above-mentioned container images and instances don’t have the normal uid=1000 user.
As I checked this Debian image, I realized that it is a minimalistic one and includes neither the cloud-init nor the netplan.io packages. This was because I didn’t specify the /cloud suffix when lxc launch ... was used. See cloud-init support in images.
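The cloud variant is pulled in by adding the /cloud suffix to the image name, as used later in this memo (the instance name below is just an illustrative choice):
$ lxc launch images:debian/bookworm/cloud debian-12-cloud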
Introduction to operations on instances
I followed more in “First steps with LXD” and “How-to guides” (Especially around “Instances” section) to learn operations on container instances.
- lxc copy ... to duplicate a container instance.
- lxc delete ... to delete a container instance.
- lxc exec ... to execute a command as root within the instance.
- lxc config ... to configure container instances with instance options.
  - This can provide the cloud-init configuration (see the sketch after this list).
- lxc restart ... to stop and start a container instance to reload the updated configuration.
- lxc file edit ... to edit a file in a container instance.
- lxc file push ... to copy a file to a container instance.
- lxc file pull ... to copy a file from a container instance.
- lxc snapshot ... to make a snapshot of a container instance.
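For example, a minimal cloud-init user-data can be attached to an instance with lxc config set (a sketch: the instance name dbc and the user entry are illustrative, the image must be a /cloud variant with cloud-init installed, and older setups use the user.user-data key instead of cloud-init.user-data):
$ cat >user-data.yaml <<'EOF'
#cloud-config
users:
  - name: osamu
    shell: /bin/bash
EOF
$ lxc config set dbc cloud-init.user-data "$(cat user-data.yaml)"
$ lxc restart dbc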
Issues with differing UIDs between the outside and the inside of the container can be taken care of nicely with the above commands.
Example of operations on instances
Here is an example of basic operations on instances.
$ lxc copy debian-12 debian-12-ephemerical
$ lxc exec debian-12-ephemerical -- adduser osamu
Adding user `osamu' ...
...
$ lxc start debian-12-ephemerical --console
To detach from the console, press: <ctrl>+a q
Queued start job for default target graphical.target.
[ OK ] Created slice system-getty.slice - Slice /system/getty.
...
[ OK ] Finished systemd-update-utmp-runlev… - Record Runlevel Change in UTMP.
Debian GNU/Linux 12 debian-12-ephemerical console
debian-12-ephemerical login: osamu
Password:
Linux debian-12-ephemerical 6.4.0-0.deb12.2-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.4.4-3~bpo12+1 (2023-08-08) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
osamu@debian-12-ephemerical:~$
One can detach from the console of the container instance with CTRL-A q and get back the console of the host machine. One can regain access to the console of the container instance by:
$ lxc console debian-12-ephemerical
To detach from the console, press: <ctrl>+a q
osamu@debian-12-ephemerical:~$
You can stop and delete this instance from the console on the host machine as:
$ lxc stop debian-12-ephemerical
$ lxc delete debian-12-ephemerical
More examples of operations on instances and images
Let’s play more …
$ cd path/to/working-space
$ lxc init images:debian/bookworm/cloud dbc
$ lxc start dbc
$ lxc export dbc backup-dbc.tar.gz
$ lxc stop dbc
$ lxc publish dbc --alias idbc
$ lxc image export idbc
The initially downloaded image comes as 2 files: one with the .rootfs extension and the other, smaller file with the same filename without the .rootfs extension, under /var/lib/lxd/images. This is a PUBLIC=yes image. The filename is a long hash string.
The created instance gets a symlink in /var/lib/lxd/containers with its name. It points to a directory with the same name in /var/lib/lxd/storage-pools/default/containers. This directory holds an open file tree of the image contents (PUBLIC=no ones only).
The published image created from the instance is a single file with PUBLIC=no in /var/lib/lxd/images.
/var/lib/lxd/storage-pools/default/images has long-hash-named directories, each of which holds an open tree of image contents (PUBLIC=no ones only).
When you export, a tar.gz of the image is created. It has a file tree in it like the ones under /var/lib/lxd/storage-pools/default/images, for both PUBLIC=yes and PUBLIC=no.
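For instance, the top of the file tree inside the backup tarball created above can be listed with tar:
$ tar -tf backup-dbc.tar.gz | head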
In summary:
/var/lib/lxd/images
- all accessible images (local ones and copies of remote ones)
- each image in 1 or 2 files with an autogenerated hash name
/var/lib/lxd/storage-pools/default/images
- all local images as subdirectories in it
- each subdirectory as an open file tree
- subdirectory name as the image name, an autogenerated hash
- each subdirectory contains:
  - rootfs/
  - templates/
  - metadata.yaml
/var/lib/lxd/storage-pools/default/containers
- all container instances as subdirectories in it
- each subdirectory as an open file tree
- subdirectory name as the container name, i.e. the instance name
- each subdirectory contains:
  - rootfs/
  - templates/
  - metadata.yaml
  - backup.yaml
path/to/working-space
- backup-dbc.tar.gz holding a backup of an instance
- <long_hash_name>.tar.gz holding an exported image
Suppose there are 2 types of images:
- Unified tarball: 32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92
- Split tarballs: 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3 and 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3.rootfs
# cd /var/lib/lxd/images
# file 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3
187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3: XZ compressed data, checksum CRC64
# file 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3.rootfs
187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3.rootfs: Squashfs filesystem, little endian, version 4.0, xz compressed, 456497140 bytes, 42504 inodes, blocksize: 131072 bytes, created: Thu Oct 26 02:21:34 2023
# file 32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92
32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92: gzip compressed data, from Unix, original size modulo 2^32 524245504 gzip compressed data, unknown method, ASCII, has CRC, from FAT filesystem (MS-DOS, OS/2, NT), original size modulo 2^32 524245504
# sha256sum 32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92
32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92 32988437b9eec43d0a87f1e7dbbbf02f27a2525cd1379a9d73f1b95774572a92
# sha256sum <(cat 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3 187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3.rootfs)
187a9674b77056a0d466f5058ea72660cb52430dcdf06974ca8cd6c5a47fb6b3 /dev/fd/63
See:
- image format
  - Unified tarball
    - the SHA-256 of the tarball (tar.gz)
  - Split tarballs
    - the SHA-256 of the concatenation of the metadata (xz) and rootfs tarball (squashfs, xz) (in this order)
- instance backup and recovery
- Creating custom LXD images
- How to use remote images
Tweaking image content before starting an instance
I realized the current Debian bookworm has cloud-init with bug #1055786. Here is how I got around this problem, by removing the toxic netdev entry from the installed file /etc/cloud/cloud.cfg:
$ cd path/to
$ lxc init images:debian/bookworm/cloud dbc0
$ lxc file pull dbc0/etc/cloud/cloud.cfg .
$ sed -i -e 's/ netdev,//' cloud.cfg
$ lxc file push cloud.cfg dbc0/etc/cloud/
$ lxc publish dbc0 --alias dbc
Instance published with fingerprint: 379e858cc15808dbdf6a27a028a8b0098213656c0b4565bbc1b64b90b61d9dbd
$ lxc start dbc0
$ lxc launch dbc dbc1
Now I have a fixed image 379e858cc15808dbdf6a27a028a8b0098213656c0b4565bbc1b64b90b61d9dbd with its alias dbc as a local image, and 2 fixed instances dbc0 and dbc1 running.
I needed the lxc file pull ... + substitution + lxc file push ... combination in the above command sequence since I wanted these file modifications to happen before starting the instance. (lxc exec ... -- command ... requires a running instance.)
You can see this behavior as:
$ lxc stop dbc0
$ lxc exec dbc0 -- echo "Hello"
Error: Instance is not running
$ lxc start dbc0
$ lxc exec dbc0 -- echo "Hello"
Hello
For interactive modification, the lxc file pull ... + substitution + lxc file push ... combination here can be replaced with lxc file edit dbc0/etc/cloud/cloud.cfg.
This lets me easily change other values, such as the primary account name debian.
Tweaking metadata of an image
If you wish to also change image.description and image.name of an image, use lxc config metadata edit dbc0 on the instance before publishing. Then the published image file in /var/lib/lxd/images/ and the corresponding directory in /var/lib/lxd/storage-pools/default/images/ have updated names.
Editing the published image with lxc image edit dbc doesn’t change the image data in these directories nor the hash value used for the image name, but it updates the description displayed by lxc image ls dbc.
Although only an interactive editor seems to be available to modify metadata, you can do more with this by changing the actual editor program via the exported environment variables EDITOR and VISUAL (VISUAL supersedes EDITOR).
To get a copy of metadata.yaml as copy_metadata.yaml on the host:
$ unset VISUAL
$ export EDITOR=cat
$ lxc config metadata edit d0 >copy_metadata.yaml
To update the metadata.yaml with the copy_metadata.yaml on the host:
$ unset VISUAL
$ export EDITOR="tee"
$ lxc config metadata edit d0 <copy_metadata.yaml
LXD one liners
Stop all instances:
$ for f in $(lxc ls -cn -fcsv) ; do lxc stop $f; done
Delete all instances:
$ for f in $(lxc ls -cn -fcsv) ; do lxc delete $f; done
Delete all images:
$ for f in $(lxc image ls -cf -fcsv) ; do lxc image delete $f; done
Delete all images without alias:
$ for f in $(lxc image ls -cfl -fcsv|sed -ne "/,$/s/,//p") ; do lxc image delete $f; done
Delete all images with alias:
$ for f in $(lxc image ls -cfl -fcsv|sed -ne "/,..*$/s/,.*$//p") ; do lxc image delete $f; done
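Start all stopped instances (a sketch that filters on the state column of the CSV output):
$ for f in $(lxc ls -cns -fcsv | sed -ne 's/,STOPPED$//p') ; do lxc start $f; done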
I think I got some glimpses of the LXC/LXD commands.
Review: GUI tools to support LXD management
Debian 12 doesn’t ship any GUI tools for LXD.
I decided to give up on finding good GUI tools for LXD after my cursory research as below.
Canonical provides a snap package for lxd containing lxd-ui, which is based on TypeScript and JavaScript (actively updated). It offers easy and accessible container and virtual machine management through a browser interface for LXD. But adding lxd-ui to Debian’s lxd package is non-trivial.
My web search finds GUI candidates, but no luck on their usability …:
- lxdui by AdaptiveScale (py, sh, 2 years) – installation guide – Fails to install even if I use PR364 … too many un-applied PRs
- lxdmanager-vue-dashboard and lxd-api-gateway by Miso-K (vue, js, 2 years)
- LXD dashboard by lxdware (PHP, last year)
- nuber (PHP, 2022-07-11)
Reference: LXC and LXD related commands on Debian 12
“About lxd and lxc” explains the somewhat confusing situation of LXC and LXD.
The lxc command with many subcommands in the lxd-client package seems to be the primary tool to use LXC/LXD. The many hyphenated lxc-* commands in the lxc package are not the primary tools for the administrator.
When the migration to Incus happens, the lxc command in lxd-client seems to be renamed to incus.
- The lxd-client package pulled in by lxd offers the lxc command with many subcommands. Auto-generated manpages on Debian 12 are mostly useless. Use -h with a command to see a reasonable command usage syntax.
$ lxc -h --all --sub-commands
Description:
Command line client for LXD
All of LXD's features can be driven through the various commands below.
For help with any of those, simply call them with --help.
Usage:
lxc [command]
Available Commands:
alias Manage command aliases
add Add new aliases
list List aliases
remove Remove aliases
rename Rename aliases
cluster Manage cluster members
add Request a join token for adding a cluster member
edit Edit cluster member configurations as YAML
enable Enable clustering on a single non-clustered LXD server
evacuate Evacuate cluster member
get Get values for cluster member configuration keys
group Manage cluster groups
assign Assign sets of groups to cluster members
create Create a cluster group
delete Delete a cluster group
edit Edit a cluster group
list List all the cluster groups
remove Remove member from group
rename Rename a cluster group
show Show cluster group configurations
list List all the cluster members
list-tokens List all active cluster member join tokens
remove Remove a member from the cluster
rename Rename a cluster member
restore Restore cluster member
revoke-token Revoke cluster member join token
role Manage cluster roles
add Add roles to a cluster member
remove Remove roles from a cluster member
set Set a cluster member's configuration keys
show Show details of a cluster member
unset Unset a cluster member's configuration keys
update-certificate Update cluster certificate
config Manage instance and server configuration options
device Manage devices
add Add instance devices
get Get values for device configuration keys
list List instance devices
override Copy profile inherited devices and override configuration keys
remove Remove instance devices
set Set device configuration keys
show Show full device configuration
unset Unset device configuration keys
edit Edit instance or server configurations as YAML
get Get values for instance or server configuration keys
metadata Manage instance metadata files
edit Edit instance metadata files
show Show instance metadata files
set Set instance or server configuration keys
show Show instance or server configurations
template Manage instance file templates
create Create new instance file templates
delete Delete instance file templates
edit Edit instance file templates
list List instance file templates
show Show content of instance file templates
trust Manage trusted clients
add Add new trusted client
edit Edit trust configurations as YAML
list List trusted clients
list-tokens List all active certificate add tokens
remove Remove trusted client
revoke-token Revoke certificate add token
show Show trust configurations
unset Unset instance or server configuration keys
console Attach to instance consoles
copy Copy instances within or in between LXD servers
delete Delete instances and snapshots
exec Execute commands in instances
export Export instance backups
file Manage files in instances
delete Delete files in instances
edit Edit files in instances
mount Mount files from instances
pull Pull files from instances
push Push files into instances
help Help about any command
image Manage images
alias Manage image aliases
create Create aliases for existing images
delete Delete image aliases
list List image aliases
rename Rename aliases
copy Copy images between servers
delete Delete images
edit Edit image properties
export Export and download images
get-property Get image properties
import Import images into the image store
info Show useful information about images
list List images
refresh Refresh images
set-property Set image properties
show Show image properties
unset-property Unset image properties
import Import instance backups
info Show instance or server information
init Create instances from images
launch Create and start instances from images
list List instances
manpage Generate manpages for all commands
monitor Monitor a local or remote LXD server
move Move instances within or in between LXD servers
network Manage and attach instances to networks
acl Manage network ACLs
create Create new network ACLs
delete Delete network ACLs
edit Edit network ACL configurations as YAML
get Get values for network ACL configuration keys
list List available network ACLS
rename Rename network ACLs
rule Manage network ACL rules
add Add rules to an ACL
remove Remove rules from an ACL
set Set network ACL configuration keys
show Show network ACL configurations
show-log Show network ACL log
unset Unset network ACL configuration keys
attach Attach network interfaces to instances
attach-profile Attach network interfaces to profiles
create Create new networks
delete Delete networks
detach Detach network interfaces from instances
detach-profile Detach network interfaces from profiles
edit Edit network configurations as YAML
forward Manage network forwards
create Create new network forwards
delete Delete network forwards
edit Edit network forward configurations as YAML
get Get values for network forward configuration keys
list List available network forwards
port Manage network forward ports
add Add ports to a forward
remove Remove ports from a forward
set Set network forward keys
show Show network forward configurations
unset Unset network forward configuration keys
get Get values for network configuration keys
info Get runtime information on networks
list List available networks
list-leases List DHCP leases
peer Manage network peerings
create Create new network peering
delete Delete network peerings
edit Edit network peer configurations as YAML
get Get values for network peer configuration keys
list List available network peers
set Set network peer keys
show Show network peer configurations
unset Unset network peer configuration keys
rename Rename networks
set Set network configuration keys
show Show network configurations
unset Unset network configuration keys
zone Manage network zones
create Create new network zones
delete Delete network zones
edit Edit network zone configurations as YAML
get Get values for network zone configuration keys
list List available network zoneS
record Manage network zone records
create Create new network zone record
delete Delete network zone record
edit Edit network zone record configurations as YAML
entry Manage network zone record entries
get Get values for network zone record configuration keys
list List available network zone records
set Set network zone record configuration keys
show Show network zone record configuration
unset Unset network zone record configuration keys
set Set network zone configuration keys
show Show network zone configurations
unset Unset network zone configuration keys
operation List, show and delete background operations
delete Delete a background operation (will attempt to cancel)
list List background operations
show Show details on a background operation
pause Pause instances
profile Manage profiles
add Add profiles to instances
assign Assign sets of profiles to instances
copy Copy profiles
create Create profiles
delete Delete profiles
device Manage devices
add Add instance devices
get Get values for device configuration keys
list List instance devices
remove Remove instance devices
set Set device configuration keys
show Show full device configuration
unset Unset device configuration keys
edit Edit profile configurations as YAML
get Get values for profile configuration keys
list List profiles
remove Remove profiles from instances
rename Rename profiles
set Set profile configuration keys
show Show profile configurations
unset Unset profile configuration keys
project Manage projects
create Create projects
delete Delete projects
edit Edit project configurations as YAML
get Get values for project configuration keys
info Get a summary of resource allocations
list List projects
rename Rename projects
set Set project configuration keys
show Show project options
switch Switch the current project
unset Unset project configuration keys
publish Publish instances as images
query Send a raw query to LXD
remote Manage the list of remote servers
add Add new remote servers
get-default Show the default remote
list List the available remotes
remove Remove remotes
rename Rename remotes
set-url Set the URL for the remote
switch Switch the default remote
rename Rename instances and snapshots
restart Restart instances
restore Restore instances from snapshots
snapshot Create instance snapshots
start Start instances
stop Stop instances
storage Manage storage pools and volumes
create Create storage pools
delete Delete storage pools
edit Edit storage pool configurations as YAML
get Get values for storage pool configuration keys
info Show useful information about storage pools
list List available storage pools
set Set storage pool configuration keys
show Show storage pool configurations and resources
unset Unset storage pool configuration keys
volume Manage storage volumes
attach Attach new storage volumes to instances
attach-profile Attach new storage volumes to profiles
copy Copy storage volumes
create Create new custom storage volumes
delete Delete storage volumes
detach Detach storage volumes from instances
detach-profile Detach storage volumes from profiles
edit Edit storage volume configurations as YAML
export Export custom storage volume
get Get values for storage volume configuration keys
import Import custom storage volumes
info Show storage volume state information
list List storage volumes
move Move storage volumes between pools
rename Rename storage volumes and storage volume snapshots
restore Restore storage volume snapshots
set Set storage volume configuration keys
show Show storage volume configurations
snapshot Snapshot storage volumes
unset Unset storage volume configuration keys
version Show local and remote versions
warning Manage warnings
acknowledge Acknowledge warning
delete Delete warning
list List warnings
show Show warning
Flags:
--all Show less common commands
--debug Show all debug messages
--force-local Force using the local unix socket
-h, --help Print help
--project Override the source project
-q, --quiet Don't show progress information
--sub-commands Use with help or --help to view sub-commands
-v, --verbose Show all information messages
--version Print version number
Use "lxc [command] --help" for more information about a command.
- The lxd package offers the lxd and lxd-user daemons with many subcommands for lxd. Subcommand tool descriptions tend to include “low-level”.
$ lxd -h
Description:
The LXD container manager (daemon)
This is the LXD daemon command line. It's typically started directly by your
init system and interacted with through a tool like `lxc`.
There are however a number of subcommands that let you interact directly with
the local LXD daemon and which may not be performed through the REST API alone.
Usage:
lxd [flags]
lxd [command]
Available Commands:
activateifneeded Check if LXD should be started
cluster Low-level cluster administration commands
help Help about any command
import Command has been replaced with "lxd recover"
init Configure the LXD daemon
recover Recover missing instances and volumes from existing and unknown storage pools
shutdown Tell LXD to shutdown all containers and exit
version Show the server version
waitready Wait for LXD to be ready to process requests
Flags:
-d, --debug Show all debug messages
--group The group of users that will be allowed to talk to LXD
-h, --help Print help
--logfile Path to the log file
--syslog Log to syslog
--trace Log tracing targets
-v, --verbose Show all information messages
--version Print version number
Use "lxd [command] --help" for more information about a command.
Since I used this init subcommand, let’s see:
$ lxd init -h
Description:
Configure the LXD daemon
Usage:
lxd init [flags]
Examples:
init --minimal
init --auto [--network-address=IP] [--network-port=8443] [--storage-backend=dir]
[--storage-create-device=DEVICE] [--storage-create-loop=SIZE]
[--storage-pool=POOL] [--trust-password=PASSWORD]
init --preseed
init --dump
Flags:
--auto Automatic (non-interactive) mode
--dump Dump YAML config to stdout
--minimal Minimal configuration (non-interactive)
--network-address Address to bind LXD to (default: none)
--network-port Port to bind LXD to (default: 8443) (default -1)
--preseed Pre-seed mode, expects YAML config from stdin
--storage-backend Storage backend to use (btrfs, dir, lvm or zfs, default: dir)
--storage-create-device Setup device based storage using DEVICE
--storage-create-loop Setup loop based storage with SIZE in GB (default -1)
--storage-pool Storage pool to use or create
--trust-password Password required to add new clients
Global Flags:
-d, --debug Show all debug messages
-h, --help Print help
--logfile Path to the log file
--syslog Log to syslog
--trace Log tracing targets
-v, --verbose Show all information messages
--version Print version number
I also see another command in this package.
$ lxd-user -h
Description:
LXD user project daemon
This daemon is used to allow users that aren't considered to be LXD
administrators access to a personal LXD project with suitable
restrictions.
Usage:
lxd-user [flags]
Flags:
-h, --help Print help
--version Print version number
- The lxd-tools package offers extra tools for lxd:
  - fuidshift(1)
  - lxc-to-lxd(1)
  - lxd-benchmark(1)
- The lxc package offers many hyphenated lxc-* commands for the low-level Linux Containers userspace tools.
  - This doesn’t include the lxc command.
  - It looks like we should access these functionalities indirectly using the lxc command, which seems to take care of some other things.
References: Container landscape essays
- Namespaces in operation January 4, 2013 – series index up to June 15, 2016
- Understanding the new control groups API March 23, 2016
- The Kubernetes Containers runtime jungle November 9, 2019
- LXC 5.0 LTS has been released June, 2022
- Docker and the OCI container ecosystem July 26, 2022
- The container orchestrator landscape August 23, 2022
- LXC and LXD: a different container story September 13, 2022
- Progress for unprivileged containers September 28, 2022
- Summary: Podman vs. Docker – will Podman fill Docker’s shoes? 11/30/2022
- LXD is now under Canonical July 4, 2023
- What is Incus?
- Introducing Incus August 7, 2023
- Docker一強の終焉にあたり、押さえるべきContainer事情 (“The container landscape to grasp as Docker’s sole dominance ends”) 2023/04/03 (rev. 2023/05/01)
- Series “Dockerless” by mkdev (2023?)
- Hacker News: LXD isn’t an alternative to podman 2023-07-05
- Podman is meant to run ‘application containers’, where each container has just one running process.
- LXD is meant to run ‘system containers’ where each container is a full Linux distribution with an init system and (possibly) multiple daemons.
- LXD containers are like light-weight VMs. Unlike VMs, LXD containers share the host kernel.
- You could run podman or other OCI containers inside LXD. I use LXD to test multi node K8s (K3s) on my desktop system.