Incus

From ArchWiki

Incus is a manager/hypervisor for containers (via LXC) and virtual machines (via QEMU).

It is a fork of LXD by the original maintainers. Documentation from the LXD wiki page is still largely relevant and is recommended reading.

Installation

Install the incus package, then enable the incus.socket.

Alternatively, you can enable/start the incus.service directly, for example if you want instances to autostart.

To delegate container creation to users, enable/start the incus-user.socket unit. See #Accessing Incus as an unprivileged user for group delegation.

Migrating from LXD

If you wish to migrate from an existing LXD installation, you should do so at this point, as the migration tool will only run against an empty target Incus server.

After verifying that both the lxc info and incus info commands run correctly, read the upstream documentation about the process, then run the migration tool:

# lxd-to-incus

Configuration

Unprivileged containers

Incus launches unprivileged containers by default (see Linux Containers#Privileged or unprivileged containers for an explanation of the difference).

For this to work, you need to set up an appropriate range of sub{u,g}ids for the root user[1]: unlike e.g. podman, Incus uses a daemon that needs to run as root.[2]

Verify the content of both /etc/subuid and /etc/subgid, and if needed add a contiguous range of at least 10M UIDs/GIDs for the root user:

# usermod -v 1000000-1000999999 -w 1000000-1000999999 root

Then restart incus.service.
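The usermod command above results in an entry of the form user:start:count in both files. A quick sketch of how such an entry is read (the line below is an example of what the command produces; check your actual files):

```shell
# An /etc/subuid (or /etc/subgid) entry has the form user:start:count.
# Example entry matching the usermod invocation above:
line="root:1000000:1000000000"

start=$(echo "$line" | cut -d: -f2)   # first sub-ID available to root
count=$(echo "$line" | cut -d: -f3)   # number of IDs in the range

echo "root may map $count IDs starting at $start"
```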

For the alternative, see LXD#Privileged containers.

Accessing Incus as an unprivileged user

From the official documentation:

"Access to Incus is controlled through two groups:

  • incus allows basic user access, no configuration and all actions restricted to a per-user project.
  • incus-admin allows full control over Incus."

To have a normal user capable of launching and operating instances, add the user to the incus group.

To give a normal user full control over Incus without having to use sudo, add the user to incus-admin (not recommended).

Warning: Anyone added to the incus-admin group is root equivalent. For more information, see [3] and [4].

Initialize Incus config

Before it can be used, Incus' config needs to be initialized:

$ incus admin init

From the official documentation:

"For simple configurations, you can run this command as a normal user. However, some more advanced operations during the initialization process (for example, joining an existing cluster) require root privileges. In this case, run the command with sudo or as root."

This will start an interactive configuration guide in the terminal, covering topics such as storage and networking.
You can find an overview in the official Getting Started Guide.
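For unattended setups, incus admin init can also read a preseed document from standard input instead of asking questions interactively. A minimal sketch, where the storage driver, pool name, and bridge name are illustrative choices rather than required values:

```shell
# Non-interactive initialization from a YAML preseed document.
# dir storage, an incusbr0 bridge, and a default profile are
# example choices; adapt them to your setup.
incus admin init --preseed <<'EOF'
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
EOF
```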

Adding a Web-UI

The lxd-ui browser frontend has been patched to work with Incus. These patches are found in the Debian package source. [5]

To make use of this UI, install the incus-uiAUR package.

Then set the address and port for the webserver:

$ incus config set core.https_address=127.0.0.1:8443

And restart Incus.

Usage

Overview of commands

You can get an overview of all available commands by typing:

$ incus

Create a container

Containers are based on images, which are downloaded from image servers or remote LXD servers.

You can see the list of already added servers with:

$ incus remote list
Note: An image server can be referred to by using the name displayed in the NAME column on the left, e.g. images in the following examples.

You can list all images on a server with incus image list <server-name>:, for example:

Tip: It is recommended to pipe the output of the following command through a pager like less due to the large number of images available.
$ incus image list images:

This will show you all images on one of the default servers: images.linuxcontainers.org

You can also search for images by adding terms like the distribution name:

$ incus image list images:debian

Launch a container with an image from a specific server with:

$ incus launch servername:imagename

For example, to create a randomly named container instance from the Ubuntu Noble image on the default server:

$ incus launch images:ubuntu/noble

To specify a name for the instance simply add it afterwards, e.g.:

$ incus launch images:archlinux/current/amd64 arch

will create an amd64 Arch container named arch.

Tips and tricks

Access the containers by name on the host

This assumes that you are using the default bridge, named incusbr0, and that you are using systemd-resolved.

# systemd-resolve --interface incusbr0 --set-domain '~incus' --set-dns $(incus network get incusbr0 ipv4.address | cut -d / -f 1)

You can now access the containers by name:

$ ping containername.incus
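The command substitution in the systemd-resolve invocation above extracts the bridge's address: incus network get incusbr0 ipv4.address prints it in CIDR notation, and cut drops the prefix length, since --set-dns expects a bare IP. A sketch of that step with a sample value (the address itself is illustrative; yours will differ):

```shell
# `incus network get incusbr0 ipv4.address` prints the bridge
# address in CIDR form; the value below is just an example:
addr="10.158.97.1/24"

# cut keeps only the IP portion, which --set-dns expects:
ip=$(echo "$addr" | cut -d / -f 1)
echo "$ip"
```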

To make this change permanent, edit the incus.service systemd unit to include an ExecStartPost directive, which runs the command after launch:

# systemctl edit incus.service
...

[Service]
ExecStartPost=/bin/sh -c 'systemd-resolve --interface incusbr0 --set-domain "~incus" --set-dns $(incus network get incusbr0 ipv4.address | cut -d / -f 1)'

...

Troubleshooting

Starting a virtual machine fails

If you see the error:

Error: Couldn't find one of the required UEFI firmware files: [{code:OVMF_CODE.4MB.fd vars:OVMF_VARS.4MB.ms.fd} {code:OVMF_CODE.2MB.fd vars:OVMF_VARS.2MB.ms.fd} {code:OVMF_CODE.fd vars:OVMF_VARS.ms.fd} {code:OVMF_CODE.fd vars:qemu.nvram}]

This is because Arch Linux does not distribute Secure Boot-signed OVMF firmware. To boot virtual machines, you need to disable Secure Boot for the time being:

$ incus launch ubuntu:18.04 test-vm --vm -c security.secureboot=false

This can also be added to the default profile by doing:

$ incus profile set default security.secureboot=false

Incus does not respect the shell's proxy environment variables

For example, incus launch or incus image commands do not use the values of *_proxy/*_PROXY variables when downloading images.

Incus follows a client-server model: operations are performed by incusd, the Incus server, which usually runs in the background, while the incus command-line interface acts as the client that communicates with it.

Since incusd is typically started as a service, it does not inherit the shell environment variables of the client; it only respects the variables of the environment it was invoked from.[6] On Arch Linux, the Incus server is started by systemd.
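This behaviour is easy to demonstrate with any child process: a variable exported in one shell is invisible to a process whose environment does not contain it, which is the situation of a systemd-started incusd relative to your interactive shell. A minimal illustration (DEMO_PROXY is a made-up variable name, not one Incus reads):

```shell
# A child process sees only the environment it is started with:
unset DEMO_PROXY
sh -c 'echo "daemon would see: ${DEMO_PROXY:-<unset>}"'

# Once the variable is exported in the parent's environment,
# children started from *that* environment inherit it:
export DEMO_PROXY="socks://proxy_server_address:port/"
sh -c 'echo "daemon would see: $DEMO_PROXY"'
```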

There are several workarounds to this difficulty; some examples follow. See Incus issue #574 for more information.

Temporary

Import Shell variables to systemd's environment

First, export *_PROXY variables:

$ export ALL_PROXY="socks://proxy_server_address:port/"

Import them to systemd's environment:

# systemctl import-environment ALL_PROXY

Restart or start the incus.service unit.

Tip: Use the systemctl unset-environment command to unset a variable, then restart the service.

Persistent

Edit incus service unit

If you want the Incus daemon to always start with certain static environment variables, such as *_proxy, you can use systemd's Environment directive. Note that the systemctl set-property command cannot manipulate the Environment directive. Edit incus.service and add an Environment key with the appropriate variable=value pair. For example:

# systemctl edit incus.service
...

[Service]
Environment=ALL_PROXY="socks://proxy_server_address:port/"

...

Use Incus core.proxy options

You can make the Incus server use a desired proxy by configuring the core.proxy options. For instance:

# incus config set core.proxy_http "proxy_address:proxy_port"
Note: core.proxy options are global in scope, i.e. they apply to all cluster members immediately.

Uninstall

Stop and disable the services. Then uninstall the incus package.

If you want to remove all data:

# rm -r /var/lib/incus

If you used any of the example networking configurations, you should remove those as well.

See also