Antoine Beaupré: building Debian packages under qemu with sbuild
I've been using sbuild for a while to build my Debian packages,
mainly because it's what is used by the Debian autobuilders, but
also because it's pretty powerful and efficient. Configuring it just
right, however, can be a challenge. In my quick Debian development
guide, I had a few pointers on how to
configure sbuild with the normal schroot setup, but today I finished
a qemu based configuration.
Why
I want to use qemu mainly because it provides better isolation than a
chroot. I sponsor packages sometimes and while I typically audit
the source code before building, it still feels like the extra
protection shouldn't hurt.
I also like the idea of unifying my existing virtual machine setup
with my build setup. My current VM setup is kind of all over the place
(libvirt, vagrant, GNOME Boxes, etc.). I've been slowly converging on
libvirt, however, and most solutions I use right now
rely on qemu under the hood, certainly not chroots...
I could also have decided to go with containers like LXC, LXD, Docker
(with conbuilder, whalebuilder, docker-buildpackage),
systemd-nspawn (with debspawn), unshare (with schroot
--chroot-mode=unshare), or whatever: I didn't feel those offer the
level of isolation that is provided by qemu.
The main downside of this approach is that it is (obviously) slower
than native builds. But on modern hardware, that cost should be
minimal.
How
Basically, you need this:
sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
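Note that the configuration and commands below refer to the image with the %r-%a (release-architecture) pattern, i.e. unstable-amd64.img, while the command above writes unstable.img. Depending on how sbuild-qemu-create names its output on your system, you may need to rename (or symlink) the image to match; for example, assuming an amd64 host:
sudo mv /srv/sbuild/qemu/unstable.img /srv/sbuild/qemu/unstable-amd64.img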
Then to make this used by default, add this to ~/.sbuildrc:
# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';
Note that the above will use the default autopkgtest (1GB, one core)
and qemu (128MB, one core) configuration, which might be a little low
on resources. You probably want to be explicit about this, with
something like this:
# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];
This configuration will:
- create a virtual machine image in /srv/sbuild/qemu for unstable
- tell sbuild to use that image to create a temporary VM to build the packages
- tell sbuild to run autopkgtest (which should really be default)
- tell autopkgtest to use qemu for builds and for tests
Note that the VM created by sbuild-qemu-create has an unlocked root
account with an empty password.
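If that bothers you, one option is to boot the image read-write (with the sbuild-qemu-boot --readwrite command shown below) and give the root account a proper password from inside the VM:
passwd root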
Other useful tasks
- enter the VM to test things; changes will be discarded (thanks Nick
Brown for the sbuild-qemu-boot tip!):
sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
That program is shipped only with bookworm and later; an equivalent
command is:
qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
The key argument here is -snapshot.
- enter the VM to make permanent changes, which will not be
discarded:
sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
Equivalent command:
sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
- update the VM (thanks lavamind):
sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
- build in a specific VM regardless of the suite specified in the
changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc.):
sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
Note that you'd also need to pass --autopkgtest-opts if you want
autopkgtest to run in the correct VM as well:
sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
You might also need parameters like --ram-size if you customized it
above.
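For example, a full invocation with custom resources might look something like this (an untested sketch that simply follows the same pattern as the options above, reusing the example image path):
sbuild --autopkgtest-virt-server-opts="-- qemu --ram-size=4096 --cpus=2 /var/lib/sbuild/qemu/bookworm-amd64.img" --autopkgtest-opts="-- qemu --ram-size=4096 --cpus=2 /var/lib/sbuild/qemu/bookworm-amd64.img"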
And yes, this is all quite complicated and could be streamlined a
little, but that's what you get when you have years of legacy and just
want to get stuff done. It seems to me autopkgtest-virt-qemu should
have a magic flag that starts a shell for you, but it doesn't look like
that's a thing. When that program starts, it just says ok and sits
there. Maybe that's because the authors consider the above to be
simple enough (see also bug #911977 for a discussion of this problem).
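For what it's worth, the program is waiting for commands from autopkgtest's virtualisation-server protocol on its standard input, so you can in principle drive it by hand. The command names below are from memory and may not be exact (check autopkgtest's README.virtualisation-server documentation), so consider this a rough sketch:
autopkgtest-virt-qemu /srv/sbuild/qemu/unstable-amd64.img
# prints "ok", then waits for protocol commands such as:
#   capabilities
#   open       # sets up the testbed
#   close
#   quit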
Live access to a running test
When autopkgtest starts a VM, it uses this funky qemu commandline:
qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
... which is a typical qemu commandline, I'm sorry to say. That
gives us a VM with those settings (paths are relative to a temporary
directory, /tmp/autopkgtest-qemu.w1mlh54b/
in the above example):
- the shared/ directory is, well, shared with the VM
- port 10022 is forwarded to the VM's port 22, presumably for SSH, but
no SSH server is started by default
- the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial
ports (use nc -U to talk with those)
- the monitor UNIX socket is a qemu control socket (see the QEMU
monitor documentation, also nc -U)
In other words, it's possible to access the VM with:
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1
The nc socket interface is... not great, but it works well
enough. And you can probably fire up an SSHd to get a better shell if
you feel like it.
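For instance, reusing the paths and the forwarded port from the example above (a rough sketch; you'd still have to install and configure sshd, and set up root access, inside the VM yourself):
# root shell on one of the serial consoles (root has no password in these images):
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1
# once an SSH server is running inside the VM, from the host:
ssh -p 10022 root@127.0.0.1
# the monitor socket accepts regular QEMU monitor commands, e.g. "info status":
nc -U /tmp/autopkgtest-qemu.w1mlh54b/monitor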
Nitty-gritty details no one cares about
Fixing hang in sbuild cleanup
I'm having a hard time making heads or tails of this, but please bear
with me.
In sbuild + schroot, there's this notion that we don't really need to
clean up after ourselves inside the schroot, as the schroot will just
be deleted anyways. This behavior seems to be handled by the internal
"Session Purged" parameter.
At least in lib/Sbuild/Build.pm, we can see this:
my $is_cloned_session = (defined ($session->get('Session Purged')) &&
$session->get('Session Purged') == 1) ? 1 : 0;
[...]
if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
The schroot builder defines that parameter as:
$self->set('Session Purged', $info->{'Session Purged'});
... which is ... a little confusing to me. $info is:
my $info = $self->get('Chroots')->get_info($schroot_session);
... so I presume that depends on whether the schroot was correctly
cleaned up? I stopped digging there...
ChrootUnshare.pm is way more explicit:
$self->set('Session Purged', 1);
I wonder if we should do something like this with the autopkgtest
backend. I guess people might technically use it with something other
than qemu, but qemu is the typical use case of the autopkgtest
backend, in my experience. Or at least certainly with things that
clean up after themselves. Right?
For some reason, before I added this line to my configuration:
$purge_build_deps = 'never';
... the "Cleanup" step would just completely hang. It was quite
bizarre.
Digression on the diversity of VM-like things
There are a lot of different virtualization solutions one can use
(e.g. Xen, KVM, Docker or Virtualbox). I have also
found libguestfs to be useful to operate on virtual images in
various ways. Libvirt and Vagrant are also useful wrappers on
top of the above systems.
In particular, there are a lot of different tools which use Docker,
virtual machines, or some sort of isolation stronger than chroot to
build packages. Here are some of the alternatives I am aware of:
- Whalebuilder - Docker builder
- conbuilder - "container" builder
- debspawn - systemd-nspawn builder
- docker-buildpackage - Docker builder
- qemubuilder - qemu builder
- qemu-sbuild-utils - qemu + sbuild + autopkgtest
Take, for example, Whalebuilder, which uses Docker to build packages
instead of pbuilder or sbuild. Docker provides more isolation than a
simple chroot: in whalebuilder, packages are built without network
access and inside a virtualized environment. Keep in mind there are
limitations to Docker's security and that pbuilder and sbuild do build
under a different user, which will limit the security issues with
building untrusted packages.
On the upside, some of those things are being fixed: whalebuilder is
now an official Debian package (whalebuilder) and has added the
feature of passing custom arguments to dpkg-buildpackage.
None of those solutions (except the autopkgtest/qemu backend) are
implemented as an sbuild plugin, which would greatly reduce their
complexity.
I was previously using Qemu directly to run virtual machines, and had
to create VMs by hand with various tools. This didn't work so well, so
I switched to using Vagrant as a de-facto standard to build development
environment machines, but I'm returning to Qemu because it uses a
similar backend to KVM and can be used to host longer-running virtual
machines through libvirt.
The great thing now is that autopkgtest has good support for qemu, and
sbuild has bridged the gap and can use it as a build backend. I had
originally found those bugs in that setup, but all of them are now
fixed:
- #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
- #911979: sbuild: fails on chown in autopkgtest-qemu backend
- #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
- #911981: autopkgtest: qemu server warns about missing CPU features
So we have unification! It's possible to run your virtual machines and
Debian builds using a single VM image storage backend, which is no
small feat, in my humble opinion. See the sbuild-qemu blog post for the
announcement.
Now I just need to figure out how to merge Vagrant, GNOME Boxes, and
libvirt together, which should be a matter of placing images in the
right place... right? See also hosting.
pbuilder vs sbuild
I was previously using pbuilder and switched in 2017 to sbuild.
AskUbuntu.com has a good comparison of pbuilder and sbuild
that shows they are pretty similar. The big advantage of sbuild is
that it is the tool in use on the buildds and it's written in Perl
instead of shell.
My concerns about switching were POLA (principle of least astonishment: I'm used to pbuilder), the fact
that pbuilder runs as a separate user (works with sbuild as well now,
if the _apt
user is present), and setting up COW semantics in sbuild
(can't just plug cowbuilder there, need to configure overlayfs or
aufs, which was non-trivial in Debian jessie).
Ubuntu folks, again, have more documentation there. Debian
also has extensive documentation, especially about how to
configure overlays.
I was ultimately convinced by stapelberg's post on the topic which
shows how much simpler sbuild really is...
Who
Thanks lavamind for the introduction to the sbuild-qemu
package.