The folks from the Reproducibility Project have come a long way since they started working on it 10 years ago, and we believe it's time for the next step in Debian. Several weeks ago, we enabled a migration policy in our migration software that checks for regression in reproducibility. At this moment, that is presented as just for info, but we intend to change that to delays in the not-so-distant future. We eventually want all packages to be reproducible. To stimulate maintainers to make their packages reproducible now, we'll soon start to apply a bounty [speedup] for reproducible builds, like we've done with passing autopkgtests for years. We'll reduce the bounty for successful autopkgtests at that moment in time.
"What we have done," explains Sollins, "is to develop, prove correct, and demonstrate the viability of an approach that allows the [software] maintainers to remain anonymous." Preserving anonymity is obviously important, given that almost everyone, software developers included, values their confidentiality. This new approach, Sollins adds, "simultaneously allows [software] users to have confidence that the maintainers are, in fact, legitimate maintainers and, furthermore, that the code being downloaded is, in fact, the correct code of that maintainer." [ ] The corresponding paper is published on the arXiv preprint server in various formats, and the announcement has also been covered in MIT News.
I noticed that a small but fixed subset of [Git] repositories are getting backed up despite having no changes made. That is odd, because I would think that repeated bundling of the same repository state should create the exact same bundle. However, [it] turns out that for some repositories, bundling is nondeterministic. Paul goes on to describe his solution, which involves forcing git to be single-threaded, which "makes the output deterministic". The article was also discussed on Hacker News.
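The single-threading trick described in the article can be sketched as follows; the throwaway repository is hypothetical, and `pack.threads=1` is the setting that pins git's delta compression to one thread:

```shell
# Create a throwaway repository so the sketch is self-contained.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# Multi-threaded delta compression can make 'git bundle' output
# nondeterministic; pinning pack.threads to 1 makes it repeatable.
git -c pack.threads=1 bundle create ../a.bundle --all
git -c pack.threads=1 bundle create ../b.bundle --all
cmp ../a.bundle ../b.bundle && echo "bundles identical"
```

On a repository this small the effect is invisible either way; the setting matters on large repositories where multiple packing threads race.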
libxslt now deterministic
libxslt is the XSLT C library developed for the GNOME project, where XSLT itself is an XML language to define transformations for XML files. This month, it was revealed that the result of the generate-id()
XSLT function is now deterministic across multiple transformations, fixing many issues with reproducible builds. As the Git commit by Nick Wellnhofer describes:
Rework the generate-id() function to return deterministic values. We use
a simple incrementing counter and store ids in the 'psvi' member of
nodes which was freed up by previous commits. The presence of an id is
indicated by a new "source node" flag.
This fixes long-standing problems with reproducible builds, see
https://bugzilla.gnome.org/show_bug.cgi?id=751621
This also hardens security, as the old implementation leaked the
difference between a heap and a global pointer, see
https://bugs.chromium.org/p/chromium/issues/detail?id=1356211
The old implementation could also generate the same id for dynamically
created nodes which happened to reuse the same memory. Ids for namespace
nodes were completely broken. They now use the id of the parent element
together with the hex-encoded namespace prefix.
generate-draft
script to not blow up if the input files have been corrupted, today or even in the past [ ]; Holger Levsen updated the Hamburg 2023 summit page to add a link to the farewell post [ ] and to add a picture of a Post-It note [ ]; and Pol Dellaiera updated the paragraph about tar and the --clamp-mtime flag [ ].
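The --clamp-mtime feature mentioned above can be sketched like this; the paths are hypothetical, and SOURCE_DATE_EPOCH is the convention reproducible-builds.org defines for pinning timestamps:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
mkdir src; echo hello > src/file.txt

# Clamp any newer mtimes down to SOURCE_DATE_EPOCH so rebuilding
# later still yields a byte-identical archive.
export SOURCE_DATE_EPOCH=1693526400
tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime \
    --owner=0 --group=0 --numeric-owner -cf one.tar src

sleep 1; touch src/file.txt   # simulate a rebuild touching files

tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime \
    --owner=0 --group=0 --numeric-owner -cf two.tar src
cmp one.tar two.tar && echo "archives identical"
```

This requires GNU tar; --clamp-mtime only lowers timestamps that are newer than the given --mtime, leaving genuinely old files alone.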
On our mailing list this month, Bernhard M. Wiedemann posted an interesting summary on some of the reasons why packages are still not reproducible in 2023.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including processing objdump symbol comment filter inputs as Python bytes (and not str) instances [ ], and Vagrant Cascadian extended diffoscope support for GNU Guix [ ] and updated the version in that distribution to version 253 [ ].
deep-dive into 6 tools and the accuracy of the SBOMs they produce for complex open-source Java projects. Our novel insights reveal some hard challenges regarding the accurate production and usage of software bills of materials. The paper is available on arXiv.
- crack [ ] (#1021521 & #1021522)
- dustmite [ ] (#1020878 & #1020879)
- edid-decode [ ] (#1020877)
- gentoo [ ] (#1024284)
- haskell98-report [ ] (#1024007)
- infinipath-psm [ ] (#990862)
- lcm [ ] (#1024286)
- libapache-mod-evasive [ ] (#1020800)
- libccrtp [ ] (#860470)
- libinput [ ] (#995809)
- lirc [ ] (#979019, #979023 & #979024)
- mm-common [ ] (#977177)
- mpl-sphinx-theme [ ] (#1005826)
- psi [ ] (#1017473)
- python-parse-type [ ] (#1002671)
- ruby-tioga [ ] (#1005727)
- ucspi-proxy [ ] (#1024125)
- ypserv [ ] (#983138)

.buildinfo files in Debian trixie, specifically lorene (0.0.0~cvs20161116+dfsg-1.1), maria (1.3.5-4.2) and ruby-rinku (1.7.3-2.1).
- create-meta-pkgs tool. [ ][ ]
- python3-setuptools and swig packages, which are now needed to build OpenWrt. [ ]
- pkg-config needed to build Coreboot artifacts. [ ]
- fakeroot tool is implicitly required but not automatically installed. [ ]
- vmlinuz file. [ ]

freebsd-jenkins.debian.net has been updated to FreeBSD 14.0. [ ]

- apr (hostname issue)
- dune (parallelism)
- epy (time-based .pyc issue)
- fpc (Year 2038)
- gap (date)
- gh (FTBFS in 2024)
- kubernetes (fixed random build path)
- libgda (date)
- libguestfs (tar)
- metamail (date)
- mpi-selector (date)
- neovim (randomness in Lua)
- nml (time-based .pyc)
- pommed (parallelism)
- procmail (benchmarking)
- pysnmp (FTBFS in 2038)
- python-efl (drop Sphinx doctrees)
- python-pyface (time)
- python-pytest-salt-factories (time-based .pyc issue)
- python-quimb (fails to build on single-CPU systems)
- python-rdflib (random)
- python-yarl (random path)
- qt6-webengine (parallelism issue in documentation)
- texlive (Gzip modification time issue)
- waf (time-based .pyc)
- warewulf (CPIO modification time and inode issue)
- xemacs (toolchain hostname)

- python-aiostream
- openpyxl
- python-multipletau
- wxmplot
- stunnel4
- qttools-opensource-src

#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
pg_basebackup) are protected, but the vast majority of attacks aren't stopped by TDE.
Any attacker who can access the database while it's running can just ask for an SQL-level dump of the stored data, and they'll get the unencrypted data quick as you like.
pg_crypto
PostgreSQL ships a contrib module called pg_crypto, which provides encryption and decryption functions.
This sounds ideal to use for encrypting data within our applications, as it's available no matter what we're using to write our application.
It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you're using, which produces the same output.
However, I don't recommend ever using pg_crypto's data encryption functions, and I doubt you will find many other cryptographic engineers who will, either.
First up, and most horrifyingly, it requires you to pass the long-term keys to the database server.
If there's an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed.
Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result.
Less scary, but still very concerning, is that pg_crypto's available cryptography is, to put it mildly, antiquated.
We have a lot of newer, safer, and faster techniques for data encryption that aren't available in pg_crypto.
This means that if you do use it, you're leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls.
In short: friends don't let friends use pg_crypto.
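The alternative the author implies is encrypting in the application, so the key never reaches the database server. A minimal sketch, with hypothetical filenames and deliberately simplified key handling:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Keep the key on the application side; only ciphertext would ever
# be sent to PostgreSQL (e.g. into a bytea column).
openssl rand -hex 32 > app.key
printf 'alice@example.com' > plaintext.txt

openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass file:app.key -in plaintext.txt -out ciphertext.bin

# The database only ever stores ciphertext.bin; decryption happens
# back in the application with the same locally-held key.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass file:app.key -in ciphertext.bin -out roundtrip.txt
cmp plaintext.txt roundtrip.txt && echo "roundtrip ok"
```

A production system would use an AEAD mode through a proper cryptographic library rather than CBC via the openssl CLI; the sketch only illustrates where the key lives, which is the author's central point.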
Publisher: | W.W. Norton & Company |
Copyright: | 2023 |
ISBN: | 1-324-07434-5 |
Format: | Kindle |
Pages: | 255 |
Welcome to the August 2023 report from the Reproducible Builds project!
In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. If you are interested in contributing to the project, please visit our Contribute page on our website.
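The core idea can be illustrated with a toy "build": a build step that embeds the wall clock produces different output on every run, while one that honours a fixed timestamp (the SOURCE_DATE_EPOCH convention) is bit-identical across rebuilds. Everything here is a hypothetical stand-in for a real compiler:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# A toy "build" that stamps its output with a timestamp argument.
build() { printf 'binary built at %s\n' "$1" > "$2"; }

# A build that used "$(date +%s)" here would embed the wall clock and
# could differ between runs; honouring SOURCE_DATE_EPOCH instead gives
# a repeatable timestamp, so two builds are byte-for-byte identical.
export SOURCE_DATE_EPOCH=1693526400
build "$SOURCE_DATE_EPOCH" out1.bin
build "$SOURCE_DATE_EPOCH" out2.bin

sha256sum out1.bin out2.bin
cmp out1.bin out2.bin && echo "reproducible"
```

Identical checksums are what lets independent third parties rebuild a package and confirm nothing was injected during compilation.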
serde_derive
macro as a precompiled binary. As Ax Sharma writes:
The move has generated a fair amount of pushback among developers who worry about its future legal and technical implications, along with a potential for supply chain attacks, should the maintainer account publishing these binaries be compromised. After intensive discussions, use of the precompiled binary was phased out.
[ ] an overview about reproducible builds: the past, the present and the future. How it started with a small [meeting] at DebConf13 (and before), how it grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an executive order of the president of the United States. (HTML slides) Holger repeated the talk later in the month at Chaos Communication Camp 2023 in Zehdenick, Germany: a video of the talk is available online, as are the HTML slides.
Vagrant walks us through his role in the project, where the aim is to ensure identical results in software builds across various machines and times, enhancing software security and creating a seamless developer experience. Discover how this mission, supported by the Software Freedom Conservancy and a broad community, is changing the face of Linux distros, Arch Linux, openSUSE, and F-Droid. They also explore the challenges of managing random elements in software, and Vagrant's vision to make reproducible builds a standard best practice that will ideally become automatic for users. Vagrant shares his work in progress and their commitment to the last mile problem. The episode is available to listen to (or download) from the Sustain podcast website. As it happens, the episode was recorded at FOSSY 2023, and the video of Vagrant's talk from this conference (Breaking the Chains of Trusting Trust) is now available on Archive.org. It was also announced that Vagrant Cascadian will be presenting at the Open Source Firmware Conference in October on the topic of Reproducible Builds All The Way Down.
hello-traditional
package from Debian. The entire thread can be viewed from the archive page, as can Vagrant Cascadian s reply.
247
, 248
and 249
were uploaded to Debian unstable by Chris Lamb, who also added documentation for the new specialize_as method and expanded the documentation of the existing specialize as well [ ]. In addition, Fay Stegerman added specialize_as and used it to optimise .smali comparisons when decompiling Android .apk files [ ], Felix Yan and Mattia Rizzolo corrected some typos in code comments [ , ], and Greg Chabala merged the RUN commands into a single layer in the package's Dockerfile [ ], thus greatly reducing the final image size. Lastly, Roland Clobus updated tool descriptions to mark that the xb-tool has moved to a different package within Debian [ ].
timestamp_in_documentation_using_sphinx_zzzeeksphinx_theme
toolchain issue.
- arimo (modification time in build results)
- apptainer (random Go build identifier)
- arrow (fails to build on single-CPU machines)
- camlp (parallelism-related issue)
- developer (Go ordering-related issue)
- elementary-xfce-icon-theme (font-related problem)
- gegl (parallelism issue)
- grommunio (filesystem ordering issue)
- grpc (drop nondeterministic log)
- guile-parted (parallelism-related issue)
- icinga (hostname-based issue)
- liquid-dsp (CPU-oriented problem)
- memcached (package fails to build far in the future)
- openmpi5/openpmix (date/copyright year issue)
- openmpi5 (date/copyright year issue)
- orthanc-ohif+orthanc-volview (ordering-related issue plus timestamp in a Gzip)
- perl-Net-DNS (package fails to build far in the future)
- postgis (parallelism issue)
- python-scipy (uses an arbitrary build path)
- python-trustme (package fails to build far in the future)
- qtbase/qmake/goldendict-ng (timestamp-related issue)
- qtox (date-related issue)
- ring (filesystem ordering related issue)
- scipy (1 & 2) (drop arbitrary build path and filesystem-ordering issue)
- snimpy (1 & 3) (fails to build on single-CPU machines as well as far in the future)
- tango-icon-theme (font-related issue)

- reproducible-tracker.json data file. [ ]
- pbuilder.tgz for Debian unstable due to #1050784. [ ][ ]
- usrmerge. [ ][ ]
- armhf nodes (wbq0 and jtx1a) marked as down; investigation is needed. [ ]
- buildd.debian.org. [ ][ ]
/msg nickserv help
what you are doing is asking nickserv what services it has, and NickServ shares the services it offers. After looking into it, you find you are looking for register
/msg nickserv register
Both commands tell you what you need to do, as can be seen here:
Let's say you are XYZ and your e-mail address is xyz@xyz.com. This is just a throwaway ID I am using for the purpose of showing how the process is done. For this, also assume your password is 1234xyz;0x or something like this. I have shared about APG (Advanced Password Generator) before, so you could use that to generate all sorts of passwords for yourself.
So next would be
/msg nickserv register 1234xyz;0x xyz@xyz.com
Now the thing to remember is you need to be sure that the email is valid and in your control, as it will generate a link with hCaptcha. Interestingly, their accessibility signup fails or errors out; I just entered my email and it errored out. Anyway, back to it. Even after completing the puzzle, even with the valid username and password, neither Pidgin nor HexChat would let me in. Neither of the clients was helpful in figuring out what was going wrong.
At this stage, I decided to see whether the specs of IRCv3 would help out in any way, and came across this. One would have thought that this is one of the more urgent things that need to be fixed, but for reasons unknown it's still in draft mode. Maybe they (the participants) are not in consensus, no idea. Unfortunately, it seems that the participants of IRCv3 have chosen a sort of closed working model, as the channel is restricted. The only notes of any consequence are being shared by Ilmari Lauhakangas from Finland. Apparently, Ilmari is also a LibreOffice hacker. It is possible that there is or has been a lot of drama before, and that's why things are the way they are. Either way, it doesn't tell me when this will be fixed, if ever. For people who are on mobiles and whatnot, without Element, it would be 10x harder.
Update: Saw this discussion on GitHub. I don't see a way out.
It seems I will be unable to be part of DebConf Kochi 2023. Best of luck to all the participants, and please share as much as possible of what happens during the event.
Package | bullseye/v11 | bookworm/v12 |
---|---|---|
ansible | 2.10.7 | 2.14.3 |
apache | 2.4.56 | 2.4.57 |
apt | 2.2.4 | 2.6.1 |
bash | 5.1 | 5.2.15 |
ceph | 14.2.21 | 16.2.11 |
docker | 20.10.5 | 20.10.24 |
dovecot | 2.3.13 | 2.3.19 |
dpkg | 1.20.12 | 1.21.22 |
emacs | 27.1 | 28.2 |
gcc | 10.2.1 | 12.2.0 |
git | 2.30.2 | 2.39.2 |
golang | 1.15 | 1.19 |
libc | 2.31 | 2.36 |
linux kernel | 5.10 | 6.1 |
llvm | 11.0 | 14.0 |
lxc | 4.0.6 | 5.0.2 |
mariadb | 10.5 | 10.11 |
nginx | 1.18.0 | 1.22.1 |
nodejs | 12.22 | 18.13 |
openjdk | 11.0.18 + 17.0.6 | 17.0.6 |
openssh | 8.4p1 | 9.2p1 |
openssl | 1.1.1n | 3.0.8-1 |
perl | 5.32.1 | 5.36.0 |
php | 7.4+76 | 8.2+93 |
podman | 3.0.1 | 4.3.1 |
postfix | 3.5.18 | 3.7.5 |
postgres | 13 | 15 |
puppet | 5.5.22 | 7.23.0 |
python2 | 2.7.18 | (gone!) |
python3 | 3.9.2 | 3.11.2 |
qemu/kvm | 5.2 | 7.2 |
ruby | 2.7+2 | 3.1 |
rust | 1.48.0 | 1.63.0 |
samba | 4.13.13 | 4.17.8 |
systemd | 247.3 | 252.6 |
unattended-upgrades | 2.8 | 2.9.1 |
util-linux | 2.36.1 | 2.38.1 |
vagrant | 2.2.14 | 2.3.4 |
vim | 8.2.2434 | 9.0.1378 |
zsh | 5.8 | 5.9 |
- --fsync: fsync every written file
- --old-dirs: works like dirs when talking to old rsync
- --old-args: disable the modern arg-protection idiom
- --secluded-args, -s: use the protocol to safely send the args (replaces the protect-args option)
- --trust-sender: trust the remote sender's file list

My CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issue tickets at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in RInside version 0.2.18 (2023-02-01)
- The random number initialization was updated as in R.
- The main REPL is now running via 'run_Rmainloop()'.
- Small routine update to package and continuous integration.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Our attention was recently caught by a nice slide deck on the methods and tools for reproducible research in the R programming language. Among those, the talk mentions Guix, stating that it is for "professional, sensitive applications that require ultimate reproducibility", which is "probably a bit overkill for Reproducible Research". While we were flattered to see Guix suggested as a good tool for reproducibility, the very notion that there's a kind of reproducibility that is "ultimate" and, essentially, impractical, is something that left us wondering: What kind of reproducibility do scientists need, if not the "ultimate" kind? Is reproducibility practical at all, or is it more of a horizon? The post goes on to outline the concept of reproducibility, situating examples within the context of the GNU Guix operating system.
218
, 219
and 220
to Debian, as well as made the following changes:
- acarsdec (embeds CPU info with march=native)
- casacore (embeds CPU info with march=native)
- kubernetes (uses random name of temporary directory)
- setuptools/python-brotlicffi (toolchain, filesys/readdir)
- sysstat (FTBFS in single CPU mode)
- sundials (FTBFS in single CPU mode)
- nim (FTBFS in single CPU mode)
- doxygen/libzypp (toolchain readdir)
- python-pyquil (build failure)
- openssl-1_0_0 (build failure)
- jsonrpc-glib (FTBFS in single CPU mode)
- slurm (Link-Time Optimisation and .tar issues)
- wasi-libc (sort the output from find)

- .buildinfo files per each Debian suite/arch. [ ][ ]
- pkg-r package set definition. [ ][ ][ ]
- armhf-architecture mst0X node. [ ]
- cbxi4pro0 and wbq0 nodes [ ]

and, finally, node maintenance was also performed by Mattia Rizzolo [ ] and Holger Levsen [ ][ ][ ].
udev
when there's a
connection change detected by the kernel, which is kind of a gross
hack. It's clunky, but actually works, and I thought for a while about
switching to something else, but it's really the easiest way to go,
and it requires the least interaction from the user.
go mod
. I'm still not sure I got that part right:
LSP is yelling at me because it can't find the imports, and I'm
generally just "YOLO everything" every time I get anywhere close
to it. That's not the way to do Go, in general, and not how I like to
do it either.
But I guess that, given time, I'll figure it out and make it work for
me. It certainly works now. I think.
Linux kobo 2.6.35.3-850-gbc67621+ #2049
PREEMPT Mon Jan 9 13:33:11 CST 2017 armv7l GNU/Linux
. That was built
in 2017, but the kernel was actually released in 2010, a whole 5
years before the Glo HD was released in 2015, which is kind of
outrageous. And yes, that is with the latest firmware release.
My bet is they just don't upgrade the kernel on those things, as the
Glo was probably bought around 2017...
In any case, the problem is we are cross-compiling here. And Golang is
pretty good about cross-compiling, but because we have C in there,
we're actually cross-compiling with "CGO" which is really just Golang
with a GCC backend. And that's much, much harder to figure out because
you need to pass down flags into GCC and so on. It was a nightmare.
That's until I found this outrageous "little" project called
modernc.org/sqlite. What that thing does (with a hefty dose of
dependencies that would make any Debian developer recoil in horror) is
to transpile the SQLite C source code to Golang. You read that
right: it rewrites SQLite in Go. On the fly. It's nuts.
But it works. And you end up with a "pure go" program, and that thing
compiles much faster and runs fine on older kernels.
I still wasn't sure I wanted to just stick with that forever, so I
kept the old sqlite3 code around, behind a compile-time tag. At the
top of the nickel_modernc.go
file, there's this magic string:
// +build !sqlite3
And at the top of nickel_sqlite3.go
file, there's this magic string:
// +build sqlite3
So now, by default, the modernc
file gets included, but if I pass
--tags sqlite3
to the Go compiler (to go install
or whatever), it
will actually switch to the other implementation. Pretty neat stuff.
Ship a lot of fixes that have accumulated in the 3 years since the last release.

Features:

- add timestamp and git version to build artifacts
- cleanup and improve debugging output
- switch to pure go sqlite implementation, which helps
- update all module dependencies
- port to wallabago v6
- support Plato library changes from 0.8.5+
- support reading koreader progress/read status
- Allow containerized builds, use gomod and avoid GOPATH hell
- overhaul Dockerfile
- switch to go mod

Documentation changes:

- remove instability warning: this works well enough
- README: replace branch name master by main in links
- tweak mention of libreoffice to clarify concern
- replace "kobo" references by "nickel" where appropriate
- make a section about related projects
- mention NickelMenu
- quick review of the koreader implementation

Bugfixes:

- handle errors in http request creation
- Use OutputDir configuration instead of hardcoded wallabako paths
- do not noisily fail if there's no entry for book in plato
- regression: properly detect read status again after koreader (or plato?) support was added
This is amazing. I can't believe someone did something that awesome. I want to cover you with gold and Tesla cars and fresh water.

You're weird. Please stop.

But if you want to use Wallabako, head over to the README file, which has installation instructions. It basically uses a hack in Kobo e-readers that will happily overwrite their root filesystem as soon as you drop this file named KoboRoot.tgz in the
.kobo
directory of your e-reader.
Note that there is no uninstall procedure and it messes with the
reader's udev
configuration (to trigger runs on wifi
connect). You'll also need to create a JSON configuration file
and configure a client in Wallabag.
And if you're looking for Wallabag hosting, Wallabag.it offers a
14-day free trial. You can also, obviously, host it yourself. Which is
not the case for Pocket, even years after Mozilla bought the
company. All this wouldn't actually be necessary if Pocket was
open-source because Nickel actually ships with a Pocket client.
Shame on you, Mozilla. But you still make an awesome browser, so keep
doing that.
schroot
setup, but today I finished
a qemu-based configuration.
schroot
--chroot-mode=unshare
), or whatever: I didn't feel those offer the
level of isolation that is provided by qemu.
The main downside of this approach is that it is (obviously) slower
than native builds. But on modern hardware, that cost should be
minimal.
sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
Then to make this used by default, add this to ~/.sbuildrc
:
# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';
Note that the above will use the default autopkgtest (1GB, one core)
and qemu (128MB, one core) configuration, which might be a little low
on resources. You probably want to be explicit about this, with
something like this:
# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
This configuration will:

- use the VM image in /srv/sbuild/qemu for unstable
- tell sbuild to use that image to create a temporary VM to build the packages
- tell sbuild to run autopkgtest (which should really be default)
- tell autopkgtest to use qemu for builds and for tests

Note that the images created by sbuild-qemu-create have an unlocked root account with an empty password.
sbuild-qemu-boot
tip!):
sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
That program is shipped only with bookworm and later, an equivalent
command is:
qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
The key argument here is -snapshot.

To modify the image itself (rather than a throwaway snapshot), run instead:

sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
Equivalent command:
sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
To build for another distribution (UNRELEASED, bookworm-backports, bookworm-security, etc.):
sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
Note that you'd also need to pass --autopkgtest-opts
if you want
autopkgtest
to run in the correct VM as well:
sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
You might also need parameters like --ram-size if you customized it above.

It seems autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there.
Maybe because the authors consider the above to be simple enough (see
also bug #911977 for a discussion of this problem).
When autopkgtest starts a VM, it uses this funky qemu commandline:
qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
... which is a typical qemu commandline, I'm sorry to say. That
gives us a VM with those settings (paths are relative to a temporary
directory, /tmp/autopkgtest-qemu.w1mlh54b/
in the above example):
- the shared/ directory is, well, shared with the VM
- port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
- the ttyS1 and ttyS2 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
- the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)

nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS2
The nc
socket interface is ... not great, but it works well
enough. And you can probably fire up an SSHd to get a better shell if
you feel like it.
In sbuild + schroot, there's this notion that we don't really need
to cleanup after ourselves inside the schroot, as the schroot will
just be deleted anyways. This behavior seems to be handled by the
internal "Session Purged" parameter.
At least in lib/Sbuild/Build.pm, we can see this:
my $is_cloned_session = (defined ($session->get('Session Purged')) &&
$session->get('Session Purged') == 1) ? 1 : 0;
[...]
if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
The schroot
builder defines that parameter as:
$self->set('Session Purged', $info->{'Session Purged'});
... which is ... a little confusing to me. $info is:
my $info = $self->get('Chroots')->get_info($schroot_session);
... so I presume that depends on whether the schroot was correctly
cleaned up? I stopped digging there...
ChrootUnshare.pm
is way more explicit:
$self->set('Session Purged', 1);
I wonder if we should do something like this with the autopkgtest
backend. I guess people might technically use it with something other
than qemu, but qemu is the typical use case of the autopkgtest
backend, in my experience. Or at least certainly with things that
cleanup after themselves. Right?
For some reason, before I added this line to my configuration:
$purge_build_deps = 'never';
... the "Cleanup" step would just completely hang. It was quite
bizarre.
pbuilder
or sbuild
. Docker provides more
isolation than a simple chroot
: in whalebuilder
, packages are
built without network access and inside a virtualized
environment. Keep in mind there are limitations to Docker's security
and that pbuilder
and sbuild
do build under a different user
which will limit the security issues with building untrusted
packages.
On the upside, some of these things are being fixed: whalebuilder
is now
an official Debian package (whalebuilder) and has added
the feature of passing custom arguments to dpkg-buildpackage.
None of those solutions (except the autopkgtest
/qemu
backend) are
implemented as a sbuild plugin, which would greatly reduce their
complexity.
I was previously using Qemu directly to run virtual machines, and
had to create VMs by hand with various tools. This didn't work so well
so I switched to using Vagrant as a de-facto standard to build
development environment machines, but I'm returning to Qemu because it
uses a similar backend as KVM and can be used to host longer-running
virtual machines through libvirt.
The great thing now is that autopkgtest
has good support for qemu
and sbuild
has bridged the gap and can use it as a build
backend. I originally had found those bugs in that setup, but all of
them are now fixed:
pbuilder
and switched in 2017 to sbuild
.
AskUbuntu.com has a good comparative between pbuilder and sbuild
that shows they are pretty similar. The big advantage of sbuild is
that it is the tool in use on the buildds and it's written in Perl
instead of shell.
My concerns about switching were POLA (I'm used to pbuilder), the fact
that pbuilder runs as a separate user (works with sbuild as well now,
if the _apt
user is present), and setting up COW semantics in sbuild
(can't just plug cowbuilder there, need to configure overlayfs or
aufs, which was non-trivial in Debian jessie).
Ubuntu folks, again, have more documentation there. Debian
also has extensive documentation, especially about how to
configure overlays.
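As a concrete sketch of what those overlay instructions amount to (the chroot name and directory below are hypothetical, not taken from that documentation), COW builds in sbuild come down to a single union-type line in the schroot configuration:

```
# /etc/schroot/chroot.d/unstable-amd64-sbuild (hypothetical example)
[unstable-amd64-sbuild]
type=directory
directory=/srv/chroot/unstable-amd64
groups=sbuild
root-groups=sbuild
profile=sbuild
# Give each build session a throwaway writable overlay on top of the
# pristine chroot, similar to what cowbuilder does for pbuilder.
union-type=overlay
```

With that in place, every build gets a clean copy of the chroot and its changes are discarded when the session ends.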
I was ultimately convinced by stapelberg's post on the topic, which shows how much simpler sbuild really is...
sbuild-qemu package.
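For the record, the workflow that package enables looks roughly like this (a sketch based on the sbuild-qemu tools; the image path and .dsc name are illustrative, so check the package's documentation for the exact options):

```console
$ sudo sbuild-qemu-create -o /srv/qemu/unstable.img unstable http://deb.debian.org/debian
$ sbuild-qemu --image /srv/qemu/unstable.img hello_2.10-2.dsc
```

The first command builds a reusable VM image once; the second runs a normal sbuild build inside a fresh instance of that VM.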
Publisher: Amazon
Copyright: 1899
Printing: May 2012
ASIN: B0082ZBXSI
Format: Kindle
Pages: 136
This is the story of the different ways we looked for treasure, and I think when you have read it you will see that we were not lazy about the looking. There are some things I must tell before I begin to tell about the treasure-seeking, because I have read books myself, and I know how beastly it is when a story begins, "Alas!" said Hildegarde with a deep sigh, "we must look our last on this ancestral home" and then some one else says something and you don't know for pages and pages where the home is, or who Hildegarde is, or anything about it.

The first-person narrator of The Story of the Treasure Seekers is one of the six kids.
It is one of us that tells this story but I shall not tell you which: only at the very end perhaps I will.

The narrator then goes on to elaborately praise one of the kids, occasionally accidentally uses "I" instead of their name, and then remembers and tries to hide who is telling the story again. It's beautifully done and had me snickering throughout the book. It's not much of a mystery (you will figure out who is telling the story very quickly), but Nesbit captures the writing style of a kid astonishingly well without making the story poorly written. Descriptions of events have a headlong style that captures a child's sense of adventure and heedless immortality mixed with quiet observations that remind the reader that kids don't miss as much as people think they do.

I think the most skillful part of this book is the way Nesbit captures a kid's disregard of literary convention. The narrator in a book written by an adult tends to fit into a standard choice of story-telling style and follow it consistently. Even first-person narrators who break some of those rules feel like intentionally constructed characters. The Story of the Treasure Seekers is instead half "kid telling a story" and half "kid trying to emulate the way stories are told in books" and tends to veer wildly between the two when the narrator gets excited, as if they're vaguely aware of the conventions they're supposed to be following but are murky on the specifics. It feels exactly like the sort of book a smart and well-read kid would write (with extensive help from an editor).

The other thing that Nesbit handles exceptionally well is the dynamic between the six kids. This is a collection of fairly short stories, so there isn't a lot of room for characterization. The kids are mostly sketched out with one or two memorable quirks.
But Nesbit puts a lot of effort into the dynamics that arise between the children in a tight-knit family, properly making the group of kids as a whole and in various combinations a sort of character in their own right. Never for a moment does either the reader or the kids forget that they have siblings. Most adventures involve some process of sorting out who is going to come along and who is going to do other things, and there's a constant but unobtrusive background rhythm of bickering, making up, supporting each other, being frustrated by each other, and getting exasperated at each other's quirks. It's one of the better-written sibling dynamics that I've read.

I somehow managed to miss Nesbit entirely as a kid, probably because she didn't write long series and child me was strongly biased towards books that were part of long series. (One book was at most a pleasant few hours; there needed to be a whole series attached to get any reasonable amount of reading out of the world.) This was nonetheless a fun bit of nostalgia because it was so much like the books I did read: kids finding adventures and making things up, getting into various trouble but getting out of it by being honest and kind, and only occasional and spotty adult supervision. Reading as an adult, I can see the touches of melancholy of loss that Nesbit embeds into this quest for riches, but part of the appeal of the stories is that the kids determinedly refuse to talk about it except as a problem to be solved.

Nesbit was a rather famous progressive, but this is still a book of its time, which means there's one instance of the n-word and the kids have grown up playing the very racist version of cowboys and indians. The narrator also does a lot of stereotyping of boys and girls, although Nesbit undermines that a bit by making Alice a tomboy. I found all of this easier to ignore because the story is narrated by one of the kids who doesn't know any better, but your mileage may vary.
I am always entertained by how anyone worth writing about in a British children's novel of this era has servants. You know the Bastables have fallen upon hard times because they only have one servant. The kids don't have much respect for Eliza, which I found a bit off-putting, and I wondered what this world looks like from her perspective. She clearly did a lot of the work of raising these motherless kids, but the kids view her as the hired help or an obstacle to be avoided, and there's not a lot of gratitude present.

As the stories unfold, it becomes more and more clear that there's a quiet conspiracy of surrounding adults to watch out for these kids, which the kids never notice. This says good things about society, but it does undermine the adventures a little, and by the end of the book the sameness of the stories was wearing a bit thin. The high point of the book is probably chapter eight, in which the kids make their own newspaper, the entirety of which is reproduced in the book and is a note-perfect recreation of what an enterprising group of kids would come up with. In the last two stories, Nesbit tacks on an ending that was probably obligatory, but which I thought undermined some of the emotional subtext of the rest of the book. I'm not sure how else one could have put an ending on this book, but the ending she chose emphasized the degree to which the adventures really were just play, and the kids are rewarded in these stories for their ethics and their circumstances rather than for anything they concretely do. It's a bit unsatisfying.

This is mostly a nostalgia read, but I'm glad I read it. If this book was not part of your childhood, it's worth reading if only for how well Nesbit captures a child's narrative voice.

Rating: 7 out of 10
stty (change and print terminal line settings) as a program.
Thanks to some heroes, basenc, pr, chcon and runcon have been implemented. For the last two programs, Koutheir Attouchi wrote a new crate to manage SELinux properly. This crate has also been used for some other utilities like cp, ls or id.
Leveraging the GNU testsuite to test this implementation
Because the GNU testsuite is excellent, we now have a proper CI using it to run the tests. It takes pretty long on the GitHub Actions CI (almost two hours), but it is an amazing improvement to the way we work. It was joint work by a bunch of folks (James Robson, Roy Ivy III, etc.). To achieve this, we also made it easier to run the GNU testsuite locally against the Rust implementation, and to ignore some tests or adjust some error messages (see build-gnu.sh and run-gnu-test.sh).
Following a suggestion from Brian G, a colleague at Mozilla (he did the same for a major Firefox change), we are now collecting the history of pass/fail/error results in a separate repository and generating a daily graph showing the evolution of regressions. As of today, with GNU Coreutils 9.0, we have:
Total | 611 tests
Pass | 214
Skip | 84
Fail | 298
Error | 15
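The summary above is just a tally of per-test statuses. As a toy illustration (the results.log format here is invented for the example, not the actual output of the uutils scripts), such a table can be computed with a few lines of shell:

```shell
#!/bin/sh
# Fabricate a small result log: one "STATUS: test-name" line per GNU test.
cat > results.log <<'EOF'
PASS: tests/chmod/c-option
PASS: tests/chmod/silent
SKIP: tests/df/total-verify
FAIL: tests/du/long-from-unreadable
ERROR: tests/cp/link-heap
EOF

# Tally each status into a summary table like the one above.
for status in PASS SKIP FAIL ERROR; do
    printf '%s | %d\n' "$status" "$(grep -c "^$status:" results.log)"
done
```

Tracking these counts per day is what makes the regression graph possible: each CI run appends one such summary to the history repository.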
Warning: Congrats! The gnu test tests/chmod/c-option is now passing!
Warning: Congrats! The gnu test tests/chmod/silent is now passing!
Warning: Congrats! The gnu test tests/chmod/umask-x is now passing!
Error: GNU test failed: tests/du/long-from-unreadable. tests/du/long-from-unreadable is passing on 'master'. Maybe you have to rebase?
[...]
Warning: Changes from master: PASS +4 / FAIL +0 / ERROR -4 / SKIP +0
This is also beneficial to GNU: by implementing some options, Michael Debertol noticed some incorrect behaviors (with sort and cat) and an uninitialized variable (with chmod).
Documentation
Every day, we generate both the user documentation and the internal coreutils documentation.
User documentation: https://uutils.github.io/coreutils-docs/user/ (for example, ls or cp)
The internal documentation can be seen at: https://uutils.github.io/coreutils-docs/dev/uucore/
Next