In a previous article, I explained why developers should use libvirt session virtual machines (VMs) over libvirt system VMs for their inner-loop testing. Session VMs are rootless, but they do not provide ingress network connectivity. If you need to access network services inside your VMs, an easy solution is to configure a libvirt virtual network, backed by a Linux kernel bridge.
If your work machine runs Fedora Linux 42, or any other Linux distribution with a recent enough virt-install package (including Red Hat Enterprise Linux 10!), there’s an easier and more secure alternative: user-mode networking with Passt. With Passt, you no longer need to manage libvirt virtual networks, and thus there is no need for root privileges, provided that libvirt and its tooling are already installed on the work machine.
Managing libvirt virtual networks, as I did in the previous article, requires root access (or libvirt group membership), which some developers might not have on their work machines. Even if you do, you should avoid using it as much as possible to ensure a good security posture. And don’t be fooled: being a member of the libvirt group makes you root equivalent.
Using virtual networks was a compromise. It required a one-time privileged operation, and after that you could create and manage rootless VMs. As long as you never connect that virtual network to any real network, the security risk should be minimal.
But how do you connect to network servers running on rootless VMs if using user-mode networking? You do so by configuring port forwarding, much like you would do with rootless containers.
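If that sounds familiar, it is because rootless Podman containers already work this way: you publish a port when starting the container and reach the service through the host’s loopback address. A minimal sketch for comparison (the image and port numbers are just an illustration):
$ podman run -d --name web -p 8080:80 docker.io/library/nginx
$ curl http://127.0.0.1:8080/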
Cooking pasta with virtual machines
If you’re a Podman user, you have probably heard of pasta, the binary that enables Podman to use Passt. Passt is an improved user-mode networking stack, proposed to replace the older Slirp-based stack. pasta became the default for Podman a while ago.
Remember that VMs, much like containers, are just regular Linux processes, so what works for containers should work for VMs, too. However, while it has always been easy to forward ports to containers, this wasn't the case for VMs until recently.
Like Podman, libvirt can take advantage of Passt, but until recently, configuring port forwarding required fiddling with XML configuration files. None of the popular front-ends to libvirt, such as Virt-Manager and Cockpit, had support for configuring any settings of user-mode networking.
To avoid dealing with XML files while sticking to easy front-ends, my previous article proposed the compromise of configuring a libvirt virtual network, backed by a Linux kernel bridge, and granting session VMs access to that virtual network.
Fortunately, things improved with recent updates to the virt-install command (part of the Virt-Manager project), which now includes a convenience option to enable port forwarding using the Passt user-mode networking stack. Just add the following to your virt-install command:
--network passt,portForward=<host-port>:<vm-port>
Then any application on your host can connect to 127.0.0.1:<host-port> to access whatever service is listening on <vm-port> in your VM.
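For example, assuming you created the VM with --network passt,portForward=8080:80 and a web server is listening on port 80 inside the guest (both port numbers are just an illustration), you could reach it from the host with:
$ curl http://127.0.0.1:8080/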
The convenience option also supports forwarding UDP ports, forwarding port ranges, and setting the listening IP address on the host, which you could use to expose your session VMs to access from outside their host. The following example, straight from the man page, illustrates some of those alternatives:
--network passt,portForward0=7000-8000/udp,portForward1=127.0.0.1:2222:22
Note
While it was possible to select Passt and configure port forwarding with previous releases of the virt-install command, it required a long and convoluted syntax. For that reason, I chose not to discuss it in the previous article.
A concrete example
I’m running this on my Fedora 42 work machine. First, make sure you have recent enough libvirt tooling. If not, you might need a dnf update.
$ rpm -q libvirt
libvirt-11.0.0-2.fc42.x86_64
$ rpm -q virt-install
virt-install-5.0.0-2.fc42.noarch
You need virt-install (or the larger virt-manager package, depending on your Linux distribution) on version 5.0.0 or newer. If you’re on a recent RHEL release, such as RHEL 10.0 or RHEL 9.6, you should be good to go.
Now, create the simplest VM from the RHEL installation boot ISO and run through the Anaconda prompts. Make sure you create a user with a password and administrator access so you can later ssh as this user.
$ virt-install --name rhel95pasta --osinfo rhel9.5 --network passt,portForward=8022:22 --memory 4096 --vcpus 2 --disk size=20 --location ~/Downloads/rhel-9.5-x86_64-boot.iso
Of course, you could use newer (or older) RHEL installation media.
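Once the VM is defined, you can confirm it really is a rootless session VM by listing the domains on your user’s session connection (this assumes virt-install used the default qemu:///session URI for an unprivileged user):
$ virsh --connect qemu:///session list --all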
After the installation finishes, you can ssh into your VM:
$ ssh -p 8022 flozano@127.0.0.1
Now be ready for a surprise: your VM might end up with the same hostname as your host. This means your VM's Bash prompt looks exactly like your host's, potentially making it seem as though your SSH client failed to connect. I had to double-check to confirm I was actually in my VM and not on the host. The libvirt team is working on this issue. In the meantime, setting a different user name during installation or changing the VM's hostname can help avoid confusion.
$ sudo hostnamectl hostname rhel95pasta
$ sudo shutdown -r now
Wait a few moments to reconnect to your rootless VM and enjoy your virtual pasta!
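As a quick check after the reboot, you can run a single command over the forwarded port, using the same user and port as before; it should print the hostname you just set:
$ ssh -p 8022 flozano@127.0.0.1 hostname
rhel95pasta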
Why Passt is a better choice
If you want some context about why Passt and pasta are better alternatives to Slirp, see David Gibson's Rootless Networking presentation from Everything Open 2024 and this blog post by Stefan Hajnoczi: A new approach to usermode networking with passt.
Passt provides a number of performance and security improvements because of its streamlined architecture. Slirp is decades-old software, created to enable TCP/IP connections over serial lines; it was not designed for container or VM networking.
Because Passt is the preferred stack and Slirp is not actively maintained anymore, it’s unlikely that convenience options such as port forwarding will ever be implemented for Slirp in any front-end.
At the time of writing, you must either use the virt-install command or edit your VMs' libvirt XML to use Passt. None of the popular graphical front-ends to libvirt, such as Virt-Manager and Cockpit, support selecting Passt as the user-mode networking stack. They offer only two choices: virtual networks or user-mode networking, which implicitly selects Slirp.
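For reference, this is roughly what the Passt interface definition looks like in the domain XML, which you could add under <devices> with virsh edit. Treat it as a sketch based on the libvirt documentation for the Passt backend; it mirrors the earlier virt-install example, forwarding host port 8022 to guest port 22:
<interface type='user'>
  <backend type='passt'/>
  <model type='virtio'/>
  <portForward proto='tcp'>
    <range start='8022' to='22'/>
  </portForward>
</interface>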
Because none of the libvirt front-ends provided an easy way of setting up port forwarding for user-mode networking, many users simply ignored session VMs and used system VMs every time, perceiving that as the only easy way of connecting to network services inside their VMs. That was unfortunate, because a limitation of the front-ends encouraged a bad security posture.
Thanks to the recent improvements in the virt-install command, there's no longer a compelling reason to avoid using user-mode networking for most of your virtual machine needs. Hopefully, other libvirt front-ends will implement similar features soon.
From local virtualization to enterprise virtualization
Running hardware-accelerated VMs is a native feature of the Linux kernel, provided by its KVM module. VMs are just Linux processes; nothing fancy there. You never required root access to run VMs natively on Linux, just as you never required it for containers. In both cases, rootful operation became the starting point more because of limitations in early tooling than because of any conscious design choice.
To make it clear, let’s dig a bit into the architecture of the Linux virtualization stack.
Understanding the Linux virtualization stack
Hardware acceleration for VMs does less than you might think, providing just CPU, memory, and device bus virtualization. Everything else must come from either software-based emulation or hardware passthrough.
KVM, a feature of the Linux kernel, provides access to the hardware acceleration capabilities, and QEMU, a user space program, provides the remaining hardware emulation required to offer a complete virtual machine abstraction. Your VMs are actually QEMU processes.
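You can verify this while a session VM is running: each VM shows up as a qemu-system process owned by your regular user, not by root (the exact binary name varies by architecture and distribution):
$ pgrep -af qemu-system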
Libvirt is a management layer that coordinates QEMU, KVM, and other Linux kernel features that may be required, depending on which capabilities you need to provide to your VMs.
Without a user-mode networking stack, libvirt requires root privileges to create virtual network devices. If you need to connect your VMs to real networks, in such a way that they appear directly connected and can interact with layer 2 and layer 3 protocols, you must connect their virtual network devices to virtual bridge devices, and those bridges to physical network devices. Libvirt virtual networks are an abstraction to manage such groups of virtual network devices and virtual bridges, which are Linux kernel features.
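You can see this split in the two standard libvirt connections: your rootless VMs live on the session connection, while virtual networks and their bridges are defined on the privileged system connection (the second command typically requires root, libvirt group membership, or a polkit authentication prompt):
$ virsh --connect qemu:///session list --all
$ virsh --connect qemu:///system net-list --all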
From desktop to data center: Scaling up virtualization
Linux-based enterprise virtualization platforms, such as Red Hat OpenShift Virtualization, use the same libvirt + KVM + QEMU stack, but they also provide advanced software-defined networking across multiple hosts. They use specialized components, such as Open Virtual Network (OVN), to create virtual networks that connect virtual bridge devices from multiple Linux hosts.
Enterprise virtualization software runs its VMs rootless. This is just good security design. It runs the actual VMs, that is, the Linux processes that correspond to those VMs, as unprivileged processes, and restricts elevated privileges to just the components that must manage Linux kernel devices.
In the end, you can expect your local VMs to deliver performance similar to enterprise VMs because they run on the same core virtualization stack. For inner-loop testing, developers need ease of use that is secure by default, and libvirt provides that for most desktop Linux distributions, either in a fully rootless mode or with selected rootful pieces if you need them.
Many thanks to Andrea Bolognani, Daniel Berrangé, and Stefano Brivio for their review of this article.