/lib, etc.), but there are other significant differences too, such as Guix being scriptable using Guile/Scheme, as well as Guix's dedication and focus on free software.
sed, etc., by Gash and Gash-Utils. The final goal of Mes is to help create a full-source bootstrap for any interested UNIX-like operating system.
guile-module support and support running Gash and Gash-Utils. In working to create a full-source bootstrap, I have disregarded the kernel and Guix build system for now, but otherwise, all packages should be built from source, and obviously, no binary blobs should go in. We still need a Guile binary to execute some scripts, and it will take at least another one to two years to remove that binary. I'm using the 80/20 approach, cutting corners initially to get something working and useful early. Another metric would be how many architectures we have. We are quite a way with ARM, tinycc now works, but there are still problems with GCC and glibc. RISC-V is coming, too, which could be another metric. Someone has looked into picking up NixOS this summer. How many distros do anything about reproducibility or bootstrappability? The bootstrappability community is so small that we don't need metrics, sadly. The number of bytes of binary seed is a nice metric, but running the whole thing on a full-fledged Linux system is tough to put into a metric. Also, it is worth noting that I'm developing on a modern Intel machine (i.e. a platform with a management engine); that's another key component that doesn't have metrics.
hex0, 357-byte binary, we can now build the entire Guix system. This past year we have not made significant visible progress, however, as our funding was unfortunately not there. The Stage0 project has advanced in RISC-V. A month ago, though, I secured NLnet funding for another year, and thanks to NLnet, Ekaitz Zarraga and Timothy Sample will work on GNU Mes and the Guix bootstrap as well. Separately, the bootstrappable community has grown a lot from the two people it was six years ago: there are currently over 100 people in the
#bootstrappable IRC channel, for example. The enlarged community is possibly an even more important win going forward.
dh-clojure tool to help make packaging Clojure libraries easier. At the moment, most of the packaging is done manually, by invoking build tools by hand. Having a tool to automate many of the steps required to build Clojure packages would go a long way in making them more uniform. His work (although still very much a WIP) can be found here: https://salsa.debian.org/rlb/dh-clojure/ ehashman (Elana):
sjacket-clojure to version 0.1.1.1 and uploaded it to experimental.
puppetdb to the 7.x release.
encore-clojure and uploaded them to NEW.
/usr/bin/clojure by upstream's, a task he had already started during GSoC 2021. Sadly, none of us were familiar with Debian's mechanism for alternatives. If you (yes you, dear reader) are familiar with it, I'm sure he would warmly welcome feedback on his development branch. pollo As for me, I:
core-async-clojure that was breaking other libraries.
nrepl-clojure to the latest upstream version and revamped the way they were packaged.
g++ compilers was found once 11.1.0 was tagged, so this upstream release is now 11.1.1. Also fixed is an OpenMP setup issue where Justin Silverman noticed that we did not propagate the
-fopenmp setting correctly. The full set of changes (since the last CRAN release 0.11.0.0.0) follows.
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.
Changes in RcppArmadillo version 0.11.1.1.0 (2022-05-15)
- Upgraded to Armadillo release 11.1.1 (Angry Kitchen Appliance)
inv_sympd() to disallow inverses of poorly conditioned matrices
- more efficient handling of rank-deficient matrices via
- better detection of rank deficient matrices by
- faster handling of symmetric and diagonal matrices by
configure script again propagates the 'found' case, thanks to Justin Silverman for the heads-up and suggested fix (Dirk and Justin in #376 and #377 fixing #375).
Changes in RcppArmadillo version 0.11.0.1.0 (2022-04-14)
- Upgraded to Armadillo release 11.0.1 (Creme Brulee)
- fix miscompilation of
inv_sympd() functions when using
Needs to be able to create two copies always. Can get stuck in irreversible read-only mode if only one copy can be made. Even as of now, RAID-1 and RAID-10 have this note:
The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so the reads will spread over the mirrors evenly or based on device congestion. Granted, that's not a stability concern anymore, just performance. A reviewer of a draft of this article actually claimed that BTRFS only reads from one of the drives, which hopefully is inaccurate, but goes to show how confusing all this is. There are other warnings in the Debian wiki that are quite scary. Even the legendary Arch wiki has a warning on top of their BTRFS page, still. Even if those issues are now fixed, it can be hard to tell when they were fixed. There is a changelog by feature but it explicitly warns that it doesn't know "which kernel version it is considered mature enough for production use", so it's also useless for this. It would have been much better if BTRFS had been released into the world only when those bugs were completely fixed. Or if, at least, features were announced when they were stable, not just "we merged to mainline, good luck". Even now, we get mixed messages in the official BTRFS documentation, which says "The Btrfs code base is stable" (main page) while at the same time clearly stating unstable parts in the status page (currently RAID56). There are much harsher BTRFS critics than me out there so I will stop here, but let's just say that I feel a little uncomfortable trusting server data with full RAID arrays to BTRFS. But surely, for a workstation, things should just work smoothly... Right? Well, let's see the snags I hit.
(This might not entirely be accurate: I rebuilt this from the Debian side of things.) This is pretty straightforward, except for the swap partition: normally, I just treat swap like any other logical volume and create it in a logical volume. This is now just speculation, but I bet it was set up this way because "swap" support was only added in BTRFS 5.0. I fully expect BTRFS experts to yell at me now because this is an old setup and BTRFS is so much better now, but that's exactly the point here. That setup is not that old (2018? old? really?), and migrating to a new partition scheme isn't exactly practical right now. But let's move on to more practical considerations.
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                8:0    0 931,5G  0 disk
├─sda1             8:1    0   200M  0 part  /boot/efi
├─sda2             8:2    0     1G  0 part  /boot
├─sda3             8:3    0   7,8G  0 part
│ └─fedora_swap  253:5    0   7.8G  0 crypt [SWAP]
└─sda4             8:4    0 922,5G  0 part
  └─fedora_crypt 253:4    0 922,5G  0 crypt /
vg_tbbuild05 (multiple PVs can be added to a single VG, which is why there is that abstraction)
I stripped the other
NAME                     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                  259:0    0   1.7T  0 disk
├─nvme0n1p1              259:1    0     8M  0 part
├─nvme0n1p2              259:2    0   512M  0 part
│ └─md0                    9:0    0   511M  0 raid1 /boot
├─nvme0n1p3              259:3    0   1.7T  0 part
│ └─md1                    9:1    0   1.7T  0 raid1
│   └─crypt_dev_md1      253:0    0   1.7T  0 crypt
│     ├─vg_tbbuild05-root 253:1   0    30G  0 lvm   /
│     ├─vg_tbbuild05-swap 253:2   0 125.7G  0 lvm   [SWAP]
│     └─vg_tbbuild05-srv  253:3   0   1.5T  0 lvm   /srv
└─nvme0n1p4              259:4    0     1M  0 part
nvme1n1 disk because it's basically the same. Now, if we look at my BTRFS-enabled workstation, which doesn't even have RAID, we have the following:
/dev/sda4 being where BTRFS lives
fedora_crypt, which is, confusingly, kind of like a volume group. It's where everything lives. I think.
/, etc. Those are actually the things that get mounted. You'd think you'd mount a filesystem, but no, you mount a subvolume. That is backwards.
Notice how we don't see all the BTRFS volumes here? Maybe it's because I'm mounting this from the Debian side, but
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                8:0    0 931,5G  0 disk
├─sda1             8:1    0   200M  0 part  /boot/efi
├─sda2             8:2    0     1G  0 part  /boot
├─sda3             8:3    0   7,8G  0 part  [SWAP]
└─sda4             8:4    0 922,5G  0 part
  └─fedora_crypt 253:4    0 922,5G  0 crypt /
lsblk definitely gets confused here. I frankly don't quite understand what's going on, even after repeatedly looking around the rather dismal documentation. But that's what I gather from the following commands:
I only got to that point through trial and error. Notice how I use an existing mountpoint to list the related subvolumes. If I try to use the filesystem path, the one that's listed in
root@curie:/home/anarcat# btrfs filesystem show
Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
	Total devices 1 FS bytes used 883.29GiB
	devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt
root@curie:/home/anarcat# btrfs subvolume list /srv
ID 257 gen 108092 top level 5 path home
ID 258 gen 108094 top level 5 path root
ID 263 gen 108020 top level 258 path root/var/lib/machines
filesystem show, I fail:
Maybe I just need to use the label? Nope:
root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt
ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
ERROR: can't access '/dev/mapper/fedora_crypt'
This is really confusing. I don't even know if I understand this right, and I've been staring at this all afternoon. Hopefully, the lazyweb will correct me eventually. (As an aside, why are they called "subvolumes"? If something is a "sub" of "something else", that "something else" must exist, right? But no, BTRFS doesn't have "volumes", it only has "subvolumes". Go figure. Presumably the filesystem still holds "files" though, at least empirically it doesn't seem like it lost anything so far. In any case, at least I can refer to this section in the future, the next time I fumble around the
root@curie:/home/anarcat# btrfs subvolume list fedora
ERROR: cannot access 'fedora': No such file or directory
ERROR: can't access 'fedora'
btrfs command line, as I surely will. I will possibly even update this section as I get better at it, or based on my reader's judicious feedback.
/etc/fstab, on the Debian side of things:
This thankfully ignores all the subvolume nonsense because it relies on the UUID.
UUID=5abb9def-c725-44ef-a45e-d72657803f37 /srv btrfs defaults 0 2
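Incidentally, nothing stops fstab from pointing at a specific subvolume; a hypothetical line (not part of my actual setup, sketched here only to show the option) would add a subvol option to the same UUID:

```
# illustrative sketch only: mount the "home" subvolume seen earlier
UUID=5abb9def-c725-44ef-a45e-d72657803f37 /mnt/home btrfs defaults,subvol=home 0 2
```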
mount tells me that's actually the "root" (?
Let's see if I can mount the other volumes I have on there. Remember that
root@curie:/home/anarcat# mount | grep /srv
/dev/mapper/fedora_crypt on /srv type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
subvolume list showed I had
var/lib/machines. Let's try
mount -o subvol=root /dev/mapper/fedora_crypt /mnt
root is not the same as
/, it's a different subvolume! It seems to be the Fedora root (
/, really) filesystem. No idea what is happening here. I also have a
home subvolume, let's mount it too, for good measure:
mount -o subvol=home /dev/mapper/fedora_crypt /mnt/home
lsblk doesn't notice those two new mountpoints, and that's normal: it only lists block devices, and subvolumes (rather inconveniently, I'd say) do not show up as devices:
This is really, really confusing. Maybe I did something wrong in the setup. Maybe it's because I'm mounting it from outside Fedora. Either way, it just doesn't feel right.
root@curie:/home/anarcat# lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                8:0    0 931,5G  0 disk
├─sda1             8:1    0   200M  0 part
├─sda2             8:2    0     1G  0 part
├─sda3             8:3    0   7,8G  0 part
└─sda4             8:4    0 922,5G  0 part
  └─fedora_crypt 253:4    0 922,5G  0 crypt /srv
(Notice, in passing, that it looks like the same filesystem is mounted in different places. In that sense, you'd expect
root@curie:/home/anarcat# df -h /srv /mnt /mnt/home
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/fedora_crypt  923G  886G   31G  97% /srv
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt/home
/mnt/home?!) to be exactly the same, but no: they are entirely different directory structures, which I will not call "filesystems" here because everyone's head will explode in sparks of confusion.) Yes, disk space is shared (that's the
Avail columns, makes sense). But nope, no cookie for you: they all have the same
Used columns, so you need to actually walk the entire filesystem to figure out what each disk takes. (For future reference, that's basically:
And yes, that was painfully slow.) ZFS actually has some oddities in that regard, but at least it tells me how much disk each volume (and snapshot) takes:
root@curie:/home/anarcat# time du -schx /mnt/home /mnt /srv
124M	/mnt/home
7.5G	/mnt
875G	/srv
883G	total

real	2m49.080s
user	0m3.664s
sys	0m19.013s
That's 56360 times faster, by the way. But yes, that's not fair: those in the know will know there's a different command to do what
root@tubman:~# time df -t zfs -h
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/debian  3.5T  1.4G  3.5T   1% /
rpool/var/tmp      3.5T  384K  3.5T   1% /var/tmp
rpool/var/spool    3.5T  256K  3.5T   1% /var/spool
rpool/var/log      3.5T  2.0G  3.5T   1% /var/log
rpool/home/root    3.5T  2.2G  3.5T   1% /root
rpool/home         3.5T  256K  3.5T   1% /home
rpool/srv          3.5T   80G  3.5T   3% /srv
rpool/var/cache    3.5T  114M  3.5T   1% /var/cache
bpool/BOOT/debian  571M   90M  481M  16% /boot

real	0m0.003s
user	0m0.002s
sys	0m0.000s
df does with BTRFS filesystems, the
btrfs filesystem usage command:
Almost as fast as ZFS's df! Good job. But wait. That doesn't actually tell me usage per subvolume. Notice it's
root@curie:/home/anarcat# time btrfs filesystem usage /srv
Overall:
    Device size:                 922.47GiB
    Device allocated:            916.47GiB
    Device unallocated:            6.00GiB
    Device missing:                  0.00B
    Used:                        884.97GiB
    Free (estimated):             30.84GiB      (min: 27.84GiB)
    Free (statfs, df):            30.84GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
   /dev/mapper/fedora_crypt      906.45GiB

Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
   /dev/mapper/fedora_crypt       10.00GiB

System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/mapper/fedora_crypt       16.00MiB

Unallocated:
   /dev/mapper/fedora_crypt        6.00GiB

real	0m0,004s
user	0m0,000s
sys	0m0,004s
filesystem usage, not
subvolume usage, which unhelpfully refuses to exist. That command only shows that one filesystem's internal statistics, which are pretty opaque. You can also appreciate that it's wasting 6GB of "unallocated" disk space there: I probably did something Very Wrong and should be punished by Hacker News. I also wonder why it has 1.68GB of "metadata" used... At this point, I just really want to throw that thing out of the window and restart from scratch. I don't really feel like learning the BTRFS internals, as they seem oblique and completely bizarre to me. It feels a little like the state of PHP now: it's actually pretty solid, but built upon so many layers of cruft that I still feel it corrupts my brain every time I have to deal with it (needle or haystack first? anyone?)...
 * DRM synchronization objects (syncobj, see struct &drm_syncobj) provide a
 * container for a synchronization primitive which can be used by userspace
 * to explicitly synchronize GPU commands, can be shared between userspace
 * processes, and can be shared between different DRM drivers.
 * Their primary use-case is to implement Vulkan fences and semaphores.
[...]
 * At it's core, a syncobj is simply a wrapper around a pointer to a struct
 * &dma_fence which may be NULL.
A struct that represents a (potentially future) event:
- Has a boolean signaled state
- Has a bunch of useful utility helpers/concepts, such as refcount, callback wait mechanisms, etc.
Provides two guarantees:
- One-shot: once signaled, it will be signaled forever
- Finite-time: once exposed, is guaranteed to signal in a reasonable amount of time
Author: Chris Wilson <firstname.lastname@example.org>
Date:   Fri Mar 22 09:23:22 2019 +0000

    drm/i915: Introduce the i915_user_extension_method

    An idea for extending uABI inspired by Vulkan's extension chains.
    Instead of expanding the data struct for each ioctl every time we need
    to add a new feature, define an extension chain instead. As we add
    optional interfaces to control the ioctl, we define a new extension
    struct that can be linked into the ioctl data only when required by
    the user. The key advantage being able to ignore large control structs
    for optional interfaces/extensions, while being able to process them
    in a consistent manner.

    In comparison to other extensible ioctls, the key difference is the
    use of a linked chain of extension structs vs an array of tagged
    pointers. For example,

    struct drm_amdgpu_cs_chunk {
            __u32           chunk_id;
            __u32           length_dw;
            __u64           chunk_data;
    };

[...]
i915_user_extension, we opted to extend the V3D interface through a generic interface. After applying some suggestions from Iago Toral (Igalia) and Daniel Vetter, we reached the following struct:
struct drm_v3d_extension {
	__u64 next;
	__u32 id;
#define DRM_V3D_EXT_ID_MULTI_SYNC 0x01
	__u32 flags; /* mbz */
};
multi_sync extension struct that subclasses the generic extension struct. It has arrays of in and out syncobjs, the respective number of elements in each of them, and a
wait_stagevalue used in CL submissions to determine which job needs to wait for syncobjs before running.
struct drm_v3d_multi_sync {
	struct drm_v3d_extension base;
	/* Array of wait and signal semaphores */
	__u64 in_syncs;
	__u64 out_syncs;
	/* Number of entries */
	__u32 in_sync_count;
	__u32 out_sync_count;
	/* set the stage (v3d_queue) to sync */
	__u32 wait_stage;
	__u32 pad; /* mbz */
};
drm_syncobj_find_fence() + drm_sched_job_add_dependency() to add all
in_syncs (wait semaphores) as job dependencies, i.e. syncobjs to be checked by the scheduler before running the job. On CL submissions, we have the bin and render jobs, so V3D follows the value of
wait_stage to determine which job depends on those
in_syncs to start its execution. When V3D defines the last job in a submission, it replaces
done_fence from this last job. It uses
drm_syncobj_find() + drm_syncobj_replace_fence() to do that. Therefore, when a job completes its execution and signals
out_syncs are signaled too.
drm_sched_job_arm() had recently been introduced to job initialization. Finally, we prepared the semaphore interface to implement timeline syncobjs in the future.
No HDMI support via the USB-C DisplayPort. While I don't expect to go to conferences or even classes in the next several months, I hope this can be fixed before I do. It's a potentially important issue for me.
/usr/bin/wf-recorder -g '0,32 960x540' -t --muxer=v4l2 --codec=rawvideo --pixelformat=yuv420p --file=/dev/video10
/dev/video10). You will note I'm grabbing a 960×540 rectangle, which is the top of my screen (1920×1080) minus the Waybar. I think I'll increase it to 960×720, as the projector to which I connect the Raspberry has a 4:3 output. After this is sent to
/dev/video10, I tell
ffmpegto send it via RTP to the fixed address of the Raspberry:
/usr/bin/ffmpeg -i /dev/video10 -an -f rtp -sdp_file /tmp/video.sdp rtp://10.0.0.100:7000/
/tmp/video.sdp is created in the laptop itself; this file describes the stream's metadata so it can be used from the client side. I cheated and copied it over to the Raspberry, doing an ugly hardcode along the way:
user@raspi:~ $ cat video.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.100
t=0 0
a=tool:libavformat 58.76.100
m=video 7000 RTP/AVP 96
b=AS:200
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1
user user, and dropped the following in my user's
setterm -blank 0 -powersave off -powerdown 0
xset s off
xset -dpms
xset s noblank
mplayer -msglevel all=1 -fs /home/usuario/video.sdp
schroot setup, but today I finished a qemu-based configuration.
schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu. The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.
Then to make this used by default, add this to
sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like this:
# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';
This configuration will:
# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];
sbuild to use that image to create a temporary VM to build the packages
autopkgtest(which should really be default)
qemu for builds and for tests
sbuild-qemu-create have an unlocked root account with an empty password.
That program is shipped only with bookworm and later, an equivalent command is:
The key argument here is
qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
Note that you'd also need to pass
sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
--autopkgtest-opts if you want
autopkgtest to run in the correct VM as well:
You might also need parameters like
sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
--ram-size if you customized it above.
autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says
ok and sits there. Maybe because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).
autopkgtest starts a VM, it uses this funky
... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory,
qemu-system-x86_64 -m 4096 -smp 2 -nographic \
    -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0,id=rng-device0 \
    -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait \
    -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait \
    -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait \
    -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest \
    -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 \
    -enable-kvm -cpu kvm64,+vmx,+lahf_lm
/tmp/autopkgtest-qemu.w1mlh54b/in the above example):
shared/ directory is, well, shared with the VM
10022 is forwarded to the VM's port
22, presumably for SSH, but no SSH server is started by default
ttyS2 UNIX sockets are mapped to the first two serial ports (use
nc -U to talk with those)
monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS2
nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.
schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyways. This behavior seems to be handled by the internal "Session Purged" parameter. At least in lib/Sbuild/Build.pm, we can see this:
my $is_cloned_session = (defined ($session->get('Session Purged')) &&
                         $session->get('Session Purged') == 1) ? 1 : 0;

[...]

if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
schroot builder defines that parameter as:
... which is ... a little confusing to me. $info is:
$self->set('Session Purged', $info->{'Session Purged'});
... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...
my $info = $self->get('Chroots')->get_info($schroot_session);
ChrootUnshare.pm is way more explicit:
I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something else than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that cleanup after themselves. Right? For some reason, before I added this line to my configuration:
$self->set('Session Purged', 1);
... the "Cleanup" step would just completely hang. It was quite bizarre.
$purge_build_deps = 'never';
sbuild. Docker provides more isolation than a simple
whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security and that
sbuild do build under a different user, which will limit the security issues with building untrusted packages. On the upside, some of those things are being fixed:
whalebuilder is now an official Debian package (whalebuilder) and has added the feature of passing custom arguments to dpkg-buildpackage. None of those solutions (except the
qemu backend) are implemented as a sbuild plugin, which would greatly reduce their complexity. I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt. The great thing now is that
autopkgtest has good support for
sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:
pbuilder and switched in 2017 to
sbuild. AskUbuntu.com has a good comparative between pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds and it's written in Perl instead of shell. My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (works with sbuild as well now, if the
_apt user is present), and setting up COW semantics in sbuild (can't just plug cowbuilder there, need to configure overlayfs or aufs, which was non-trivial in Debian jessie). Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays. I was ultimately convinced by stapelberg's post on the topic, which shows how much simpler sbuild really is...
Series: The Malloreon #4
12.8  Bratislava-Petržalka          18:15  REX 7756
      Wien Hbf               19:15  19:53  NJ 50490
      London St Pancras      16:03  16:34  TfL
      London Paddington      16:49  17:04  GWR 59231
      Exeter St Davids       19:19  19:25  SWR 52706
... and/or this:
Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
Here, it clearly says
Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
RADAR (in all caps too, which means it's really important).
NO-IR is also important. I'm not sure what it means, but it could be that you're not allowed to transmit in that band because of other local regulations. There might be a way to work around those by changing the "region" in the Luci GUI, but I didn't mess with that, because I figured that other devices will have that already configured. So using a forbidden channel might make it more difficult for clients to connect (although it's possible this is enforced only on the AP side). In any case, 5GHz is promising, but in reality, you only get from channel 36 (5.170GHz) to 48 (5.250GHz), inclusively. Fast counters will notice that is exactly 80MHz, which means that if an AP is configured for that hungry, all-powerful 80MHz, it will effectively take up all 5GHz channels at once. This, in other words, is as bad as 2.4GHz, where you also have only two 40MHz channels. (Really, what did you expect: this is an unregulated frequency controlled by commercial interests...) So the first thing I did was to switch to 40MHz. This gives me two distinct channels in 5GHz at no noticeable bandwidth cost. (In fact, I couldn't find hard data on what the bandwidth ends up being on those frequencies, but I could still get 400Mbps, which is fine for my use case.)
Checking the "Allow b rates" affects what the AP will transmit. In particular it will send most overhead packets including beacons, probe responses, and authentication / authorization as the slow, noisy, 1 Mb DSSS signal. That is bad for you and your neighbors. Do not check that box. The default really should be unchecked. This, in particular, "will make the AP unusable to distant clients, which again is a good thing for public wifi in general". So I just unchecked that box and I feel happier now. I didn't make tests to see the effect separately, however, so this is mostly just a guess.
rredis suggestion by adding an
Additional_repositories entry, as Bryan decided to retire the
rredis package. You can still install it via
install.packages("rredis") by setting the additional repo, for example
repos=c("https://ghrr.github.io/drat", getOption("repos")) as documented in the package and at our ghrr drat repo. The detailed changes list follows.
Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Changes in version 0.2.1 (2022-04-09)
rredis package can be installed via the repo listed in
pubsub.R test file makes
rredis optional and conditional; all demos now note that the optional
rredis package is installable via the
- The fallback-compilation of
hiredis has been forced to override compilation flags because CRAN knows better than upstream.
- The GLOBEX pub/sub example has small updates.
README file in the bomsh repository.
[ ] implemented what we call package multi-versioning for C/C++ software that lacks function multi-versioning and run-time dispatch [ ]. It is another way to ensure that users do not have to trade reproducibility for performance. (full PDF)
It is one thing to talk about reproducible builds and how they strengthen software supply chain security, but it's quite another to effectively configure a reproducible build. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we'll be talking about some guiding principles when designing reproducible builds. [ ] The article was discussed on Hacker News.
SOURCE_DATE_EPOCH environment variable, both by expanding parts of the existing text [ ][ ] as well as clarifying meaning by removing text in other places [ ]. In addition, Chris Lamb added a Twitter Card to our website's metadata too [ ][ ][ ]. On our mailing list this month:
So now we have 364 source packages for which we have a patch and for which we can show that this patch does not change the build output. Do you agree that with those two properties, the advantages of the 3.0 (quilt) format are sufficient such that the change shall be implemented at least for those 364? [ ]
python-securesystemslib package to version
209 to Debian unstable, as well as made the following changes to the code itself:
--append-build-command option [ ], which was subsequently uploaded to Debian unstable by Holger Levsen.
dsa-check-running-kernel script with a packaged version. [ ]
sources.lst file for our mail server, as it's still running Debian buster. [ ]
debsecan package everywhere; it got installed accidentally via the
Recommends relation. [ ]