Search Results: "cas"

6 February 2025

Dirk Eddelbuettel: RcppArmadillo 14.2.3-1 on CRAN: Small Upstream Fix

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1215 other packages on CRAN, downloaded 38.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 612 times according to Google Scholar. Conrad released a minor version 14.2.3 yesterday. As it has been two months since the last minor release, we prepared a new version for CRAN too which arrived there early this morning. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.3-1 (2025-02-05)
  • Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)
    • Minor fix for declaration of xSYCON and xHECON functions in LAPACK
    • Fix for rare corner-case in reshape()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Dominique Dumont: Drawbacks of using Cookiecutter with Cruft

Hi. Cookiecutter is a tool for building coding project templates. It's often used to provide a scaffolding to build lots of similar projects. I've seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects. Since these templates were bound to evolve, the teams providing these templates relied on cruft to update the code provided by the template in their users' code. In other words, they wanted their users to apply a diff of the template modification to their code. At the beginning, all was fine. But problems began to appear during the lifetime of these projects. What went wrong? In both cases, we had the same scenario: user teams giving up on updates, which is a major problem because these updates may bring security or compliance fixes. Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there's no common ancestor, so 3-way merges are not possible. From an organisation point of view, the main problem is the ambiguous ownership of the functionalities brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen. There are possible solutions to get out of this tar pit. Of course your users won't be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft, since the painful work will happen once. Further updates will be done by incrementing dependency versions (which can be automated with renovate). If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command. My name is Dominique Dumont, I'm a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best.
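For readers unfamiliar with the workflow discussed above, a minimal sketch of the cruft commands involved (the template URL is a placeholder):

$ cruft create https://github.com/acme/project-template   # generate a project from the template
$ cruft check                                             # is the project up to date with the template?
$ cruft update                                            # apply the diff of template changes, possibly with conflicts

The conflicts described in this post appear during that last step, when the template diff no longer applies cleanly to the user's modified code.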

5 February 2025

Reproducible Builds: Reproducible Builds in January 2025

Welcome to the first report in 2025 from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. Two new academic papers
  3. Distribution work
  4. On our mailing list
  5. Upstream patches
  6. diffoscope
  7. Website updates
  8. Reproducibility testing framework

reproduce.debian.net The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, we are pleased to announce that in addition to the existing amd64.reproduce.debian.net and i386.reproduce.debian.net architecture-specific pages, we now build for three more architectures (for a total of five): arm64, armhf and riscv64.

Two new academic papers Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystems, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
The entire PDF is available online to view.
In addition, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris's in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale?. Answering strongly in the affirmative, the article's abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.
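The "full diffoscopes" mentioned in the abstract are the output of our diffoscope tool; a minimal sketch of producing one for two builds of the same package (the file names are placeholders):

$ diffoscope --html report.html first-build/foo.deb second-build/foo.deb

The resulting report recursively unpacks both artifacts and shows exactly where they differ.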

Distribution work There has been the usual work in various distributions this month, such as:
  • 10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month, adding to our knowledge about identified issues. A number of issue types were updated as well.
  • The FreeBSD Foundation announced that a planned project to deliver zero-trust builds began in January 2025. Supported by the Sovereign Tech Agency, this project is centered on the various build processes; the primary goal of this work is to enable the entire release process to run without requiring root access, and for build artifacts to build reproducibly, that is, so that a third party can build bit-for-bit identical artifacts. The full announcement can be found online, which includes an estimated schedule and other details.

On our mailing list On our mailing list this month:
  • Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies; a sketch of typical SOURCE_DATE_EPOCH usage follows this list.
  • Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya's issue is that even the smallest amount of post-processing of the filesystem results in the modification of the Last mount and Last write timestamps.
  • James Addison also investigated an interesting issue surrounding our disorderfs filesystem. In particular:
    FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests (for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them) to fail.
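As background for the first item, SOURCE_DATE_EPOCH is the specification's way of pinning embedded timestamps to a fixed value; a minimal sketch of using it during a build (the build command is a placeholder):

$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)   # timestamp of the last commit
$ make                                                  # tools honouring the spec embed this date, not the current time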

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
  • Security fixes:
    • Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report.
    • Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report.
    • Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat.
  • Bug fixes:
    • Correctly identify changes to only the line-endings of files; don't mark them as Ordering differences only.
    • When passing files on the command line, don't call specialize() before we've checked whether the files are identical or not.
    • Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory.
    • Don't cause a traceback if cbfstool extraction failed.
    • Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant.
  • Testsuite improvements:
    • Don't mangle newlines when opening test fixtures; we want them untouched.
    • Move to assert_diff in test_text.py.
  • Misc improvements:
    • Drop unused subprocess imports.
    • Drop an unused function in iso9660.py.
    • Inline a call and check of Config().force_details; no need for an additional variable in this particular method.
    • Remove an unnecessary return value from the Difference.check_for_ordering_differences method.
    • Remove unused logging facility from a few comparators.
    • Update copyright years.
In addition, fridtjof added support for the ASAR .tar-like archive format, and lastly, Vagrant Cascadian updated diffoscope in GNU Guix to versions 285 and 286.
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:
  • Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don't imply we are normalizing files that we are not.
  • Bump Standards-Version to 4.7.0.

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add support for rebuilding the armhf architecture.
    • Add support for rebuilding the arm64 architecture.
    • Add support for rebuilding the riscv64 architecture.
    • Move the i386 builder to the osuosl5 node.
    • Don't run our rebuilders on a public port.
    • Add database backups on all builders and add links.
    • Rework and dramatically improve the statistics collection and generation.
    • Add contact info to the main page, thumbnails, as well as the new, missing architectures.
    • Move the amd64 worker to the osuosl4 node.
    • Run the underlying debrebuild script under nice.
    • Try to use TMPDIR when calling debrebuild.
  • buildinfos.debian.net-related:
    • Stop creating buildinfo-pool_${suite}_${arch}.list files.
    • Temporarily disable automatic updates of pool links.
  • FreeBSD-related:
    • Fix the sudoers to actually permit builds.
    • Disable debug output for FreeBSD rebuilding jobs.
    • Upgrade to FreeBSD 14.2 and document that bmake was installed on the underlying FreeBSD virtual machine image.
  • Misc:
    • Update the real year to 2025.
    • Don't try to install a Debian bookworm kernel from backports on the infom08 node, which is running Debian trixie.
    • Don't warn about system updates for systems running Debian testing.
    • Fix a typo in the ZOMBIES definition.
In addition:
  • Ed Maste modified the FreeBSD build system to clean the object directory before commencing a build.
  • Gioele Barabucci updated the rebuilder stats to first add a category for network errors, as well as to categorise failures without a diffoscope log.
  • Jessica Clarke also made some FreeBSD-related changes, including:
    • Ensuring we clean up the object directory for the second build as well.
    • Updating the sudoers for the relevant rm -rf command.
    • Updating the cleanup_tmpdirs method to match other removals.
  • Jochen Sprickerhof:
  • Roland Clobus:
    • Update the reproducible_debstrap job to call Debian's debootstrap with the full path and to use eatmydata as well.
    • Make some changes to deduce the CPU load in the debian_live_build job.
Lastly, both Holger Levsen and Vagrant Cascadian performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

4 February 2025

Valhalla's Things: Conference Talk Timeout Ring, Part One

Posted on February 4, 2025
Tags: madeof:atoms, madeof:bits
low quality video of a ring of RGB LEDs in front of a computer: the LEDs light up one at a time in colours that go from yellow to red. A while ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered temporary leads on it, connected it to a CircuitPython-supported board and played around with it for a while. Then we had a couple of friends come over to remote FOSDEM together, and I had talked with one of them about WS2812 / NeoPixels, so I brought them to the living room, in case there was a chance to show them in sort-of-use. Then I was dealing with playing the various streams as we moved from one room to the next, which led to me being called "video team", which led to me wearing a video team shirt (from an old DebConf, not FOSDEM, but still video team), which led to somebody asking me whether I also had the sheet with the countdown to the end of the talk, and the answer was sort-of-yes (I should have the ones we used to use for our Linux Day), but not handy. But I had a thing with twelve things in a clock-like circle. A bit of fiddling on the CircuitPython REPL resulted, if I remember correctly, in something like:
import board
import neopixel
import time
num_pixels = 12
pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1
def end(min):
    # Clear the ring, then light one LED at a time, going from yellow
    # to red, so that a full sweep of the 12 LEDs takes `min` minutes
    # (each step lasts min * 60 / 12 = min * 5 seconds).
    pixels.fill((0, 0, 0))
    for i in range(12):
        pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)
        pixels[i-1] = (0, 0, 0)  # turn the previous LED off
        time.sleep(min * 5)  # min * 60 / 12
Now, I wasn't very consistent in running end, especially since I wasn't sure whether I wanted to run it at the beginning of the talk with the full duration or just in the last 5 - 10 minutes depending on the length of the slot, but I've had at least one person agree that the general idea has potential, so I'm taking these notes to be able to work on it in the future. One thing that needs to be fixed is the fact that with the ring just attached with temporary wires and left on the table it isn't clear which LED is number 0, so it will need a bit of a case or something, but that's something that can be dealt with before the next FOSDEM. And I should probably add some input interface, so that it is self-contained and not tethered to a computer and run from the REPL. (And then I may also have a vague idea for putting that ring into some wearable thing: good thing that I actually bought two :D )

2 February 2025

Dave Hibberd: SOTA Trip Reports: Feb 02, 2025 - Bennachie

To quote @MM0EFI and the GM0ESS gang, today was a particularly Amateur showing! Having spent all weekend locked in the curling rink ruining my knees and inflicting mild liver damage in the Aberdeen City Open competition, I needed some outside time away from people to stretch the legs and loosen my knees. With my teammates/guests shipped off early on account of our quality performance, and the days fair drawin out now, I found myself with a free afternoon to have a quick run up something nearby before a 1640 sunset! Up the back of Bennachie is a quick steady ascent, and in 13 years of living up here I've never summited the big hill! Now is as good a time as any. In SOTA terms, this hill is GM/ES-061. In geographical terms, it's around 20 miles inland from Aberdeen city. I've been experimenting with these AliExpress whips since the end of last year, and the forecast wind was low enough to take one into the hills. I cut and terminated 8x 2.5m radials for an effective ground plane last week and wanted to try that against the flat ribbon that it came with. The ascent was pleasant enough; I got to the summit in good time, and out came my Quansheng radio to get the GM/ES society on 2m. First my Nagoya whip: I called CQ and heard nothing, and with generally poor reports in WhatsApp I opted to get the slim-g up my AliExpress fibreglass mast. In an amateur showing last week, I broke the tip of the mast on Cat Law helping 2M0HSK do his first activation due to the wind, and had forgotten this until I summited this week. Squeezing my antenna on was tough, and after many failed attempts to get it up (the mast kept collapsing as I was rushing and not getting the friction hold on each section correctly) and still not hearing anything at all, I changed location and tried again. In my new position, I received 2M0RVZ 4/4 at best, but he was hearing me 5/9. Similarly, GM5ALX and GM4JXP were patiently receiving me loud and clear but I couldn't hear them at all. I fiddled with settings and decided the receive path of the Quansheng must be fried or sad somehow, but I don't yet have a full set of diagnostics run. I'll take my Anytone on the next hill and compare them against each other, I think. I gave up and moved to HF, getting my whip and new radials into the ground. Quick to deploy, which is what I was after, and a convenient thing of beauty when it's up. I've made a single guy with a SOTAbeams top insulator to brace against wind if need be, but that didn't need to be used today. I hit tune, and the G90 spent ages clicking away. In fact, tuning to 14.074, I could only see the famed FT8 signals at S2. What could be wrong here? Was it my new radials? The whip has behaved before. Minutes turned into tens of minutes playing with everything, and eventually I worked out what was up: my coax only passed signal when I held the PL259 connector at the antenna juuuust right. Once I did that, I could take the tuner out of the system and work 20m spectacularly well. Until now, I'd been tuning the coax only. Another "Quality Hibby Build Job". That's what's wrong! I managed to struggle my way through a touch of QRM and my wonky cable woes to make enough contacts with some very patient chasers and a summit-to-summit before my frustration at the situation won out, and down the hill I went after a quick pack up period.
I managed to beat the sunset - I think if the system had worked fine, I'd have stayed on the hill for sunset. I think it's time for a new mast and a coax retermination!

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit frustrating; this weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding

Personal history Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful to read eBooks, and it has been in heavy use for that ever since. Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android, and it had the very promising feature of an HDMI input. So hopefully I could attach it to my laptop and it would "just work". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen's resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful to take notes, and it has been in sporadic use for that. Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly. I also played around with pure terminal approaches, e.g. SSH'ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience.) My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these workloads without the laptop's fan spinning up. In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or run in termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably well.

Access With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved reverse-proxy-behind-proper-hostname-with-SSL setups, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use. That left me with the option of using code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure.
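For comparison, the tunnel-over-SSH option I decided against would have looked roughly like this (a sketch; the host name is a placeholder), with code-server bound to 127.0.0.1 instead of a public address:

$ ssh -N -L 8080:localhost:8080 beefy.example.com

The client would then browse to http://localhost:8080, with no self-signed certificate involved, at the cost of having to set up the tunnel for each session.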

Service To keep code-server running I created a systemd service that s managed by my user s systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
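With the unit file in place, it can be enabled like any other user unit; a minimal sketch (the last command needs root, hence asking the administrator):

$ systemctl --user daemon-reload
$ systemctl --user enable --now code-server.service
$ sudo loginctl enable-linger joachim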

Git credentials The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on GitHub always have full access. It wouldn't be horrible, but I still wondered if I can do better. I thought of creating fine-grained personal access tokens that only allow me to push code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with GitHub's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to again for one workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character. This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open-VSX. For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that's happening; maybe it's the Esc key, which I am of course using a lot due to me using vim bindings. A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which would allow me to see more Android settings, and in particular allow me to disable the Onyx keyboard. So this is fixed. I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn't use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions. I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap. There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and more safe. I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

Anuradha Weeraman: DeepSeek-R1, at the cusp of an open revolution

DeepSeek R1, the new entrant to the Large Language Model wars, has created quite a splash over the last few weeks. Its entrance into a space dominated by the Big Corps, while pursuing asymmetric and novel strategies, has been a refreshing eye-opener. GPT AI improvement was starting to show signs of slowing down, and has been observed to be reaching a point of diminishing returns as it runs out of data and compute required to train and fine-tune increasingly large models. This has turned the focus towards building "reasoning" models that are post-trained through reinforcement learning, techniques such as inference-time and test-time scaling, and search algorithms to make the models appear to think and reason better. OpenAI's o1-series models were the first to achieve this successfully with their inference-time scaling and Chain-of-Thought reasoning.

Intelligence as an emergent property of Reinforcement Learning (RL)
Reinforcement Learning (RL) has been successfully used in the past by Google's DeepMind team to build highly intelligent and specialized systems where intelligence is observed as an emergent property through a rewards-based training approach that yielded achievements like AlphaGo (see my post on it here - AlphaGo: a journey to machine intuition). DeepMind went on to build a series of Alpha* projects that achieved many notable feats using RL:
  • AlphaGo, which defeated the world champion Lee Sedol in the game of Go.
  • AlphaZero, a generalized system that learned to play games such as Chess, Shogi and Go without human input.
  • AlphaStar, which achieved high performance in the complex real-time strategy game StarCraft II.
  • AlphaFold, a tool for predicting protein structures which significantly advanced computational biology.
  • AlphaCode, a model designed to generate computer programs, performing competitively in coding challenges.
  • AlphaDev, a system developed to discover novel algorithms, notably optimizing sorting algorithms beyond human-derived methods.
All of these systems achieved mastery in their own areas through self-training/self-play and by optimizing and maximizing the cumulative reward over time by interacting with their environment, where intelligence was observed as an emergent property of the system.
The RL feedback loop
RL mimics the process through which a baby would learn to walk, through trial, error and first principles.

R1 model training pipeline
At a technical level, DeepSeek-R1 leverages a combination of Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT) for its training pipeline:
DeepSeek-R1 Model Training Pipeline
Using RL and DeepSeek-v3, an interim reasoning model was built, called DeepSeek-R1-Zero, purely based on RL without relying on SFT, which demonstrated superior reasoning capabilities that matched the performance of OpenAI's o1 in certain benchmarks such as AIME 2024. The model was however affected by poor readability and language-mixing, and is only an interim reasoning model built on RL principles and self-evolution. DeepSeek-R1-Zero was then used to generate SFT data, which was combined with supervised data from DeepSeek-v3 to re-train the DeepSeek-v3-Base model. The new DeepSeek-v3-Base model then underwent additional RL with prompts and scenarios to come up with the DeepSeek-R1 model. The R1 model was then used to distill a number of smaller open source models such as Llama-8b, Qwen-7b and Qwen-14b, which outperformed bigger models by a large margin, effectively making the smaller models more accessible and usable.

Key contributions of DeepSeek-R1
  1. RL without the need for SFT for emergent reasoning capabilities
R1 was the first open research project to validate the efficacy of RL directly on the base model without relying on SFT as a first step, which resulted in the model developing advanced reasoning capabilities purely through self-reflection and self-verification. Although it did degrade in its language capabilities during the process, its Chain-of-Thought (CoT) capabilities for solving complex problems were later used for further RL on the DeepSeek-v3-Base model, which became R1. This is a significant contribution back to the research community. The below analysis of DeepSeek-R1-Zero and OpenAI o1-0912 shows that it is viable to attain robust reasoning capabilities purely through RL alone, which can be further augmented with other techniques to deliver even better reasoning performance.
Source: https://github.com/deepseek-ai/DeepSeek-R1
It's quite interesting that the application of RL gives rise to seemingly human capabilities of "reflection", and arriving at "aha" moments, causing it to pause, ponder and focus on a specific aspect of the problem, resulting in emergent capabilities to problem-solve as humans do.
  2. Model distillation
DeepSeek-R1 also demonstrated that larger models can be distilled into smaller models, which makes advanced capabilities accessible to resource-constrained environments, such as your laptop. While it's not possible to run a 671b model on a stock laptop, you can still run a distilled 14b model that is distilled from the larger model, which still performs better than most publicly available models out there. This enables intelligence to be brought closer to the edge, to allow faster inference at the point of experience (such as on a smartphone, or on a Raspberry Pi), which paves the way for more use cases and possibilities for innovation.
Source: https://github.com/deepseek-ai/DeepSeek-R1
Distilled models are very different to R1, which is a massive model with a completely different model architecture than the distilled variants, and so they are not directly comparable in terms of capability, but are instead built to be smaller and more efficient for more constrained environments. This technique of being able to distill a larger model's capabilities down to a smaller model for portability, accessibility, speed, and cost will bring about a lot of possibilities for applying artificial intelligence in places where it would have otherwise not been possible. This is another key contribution of this technology from DeepSeek, which I believe has even further potential for democratization and accessibility of AI.
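As a concrete illustration, running one of the distilled variants locally is a one-liner with a tool like Ollama (a sketch; the exact model tag for the 14b distill is an assumption):

ollama pull deepseek-r1:14b
ollama run deepseek-r1:14b "Why is the sky blue?"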

Why is this moment so significant?
DeepSeek-R1 was a pivotal contribution in many ways.
  1. The contributions to the state-of-the-art and the open research help move the field forward where everybody benefits, not just a few highly funded AI labs building the next billion dollar model.
  2. Open-sourcing and making the model freely available follows an asymmetric strategy to the prevailing closed nature of much of the model-sphere of the larger players. DeepSeek should be commended for making their contributions free and open.
  3. It reminds us that it's not just a one-horse race, and it incentivizes competition, which has already resulted in OpenAI o3-mini, a cost-effective reasoning model which now shows the Chain-of-Thought reasoning. Competition is a good thing.
  4. We stand at the cusp of an explosion of small models that are hyper-specialized, and optimized for a specific use case, that can be trained and deployed cheaply for solving problems at the edge. It raises a lot of exciting possibilities and is why DeepSeek-R1 is one of the most pivotal moments in tech history.
Truly exciting times. What will you build?

1 February 2025

Guido G nther: Free Software Activities January 2025

Another short status update of what happened on my side last month. Mostly focused on quality of life improvements in phosh and cleaning up and improving phoc this time around (including catching up with wlroots git), but some improvements for other things like phosh-osk-stub happened on the sidelines too. The report covers phosh, phoc, phosh-osk-stub, xdg-desktop-portal-phosh, phosh-recipes, libcmatrix, phrog, Debian, git-buildpackage, livi, feedbackd, Wayland protocols, wlroots, bug reports and reviews. The reviews are not code by me but reviews of other people's code; the list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions! If you want to support my work see donations. Comments? Join the Fediverse thread.

29 January 2025

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try using Ollama and Open WebUI on my work laptop, which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation I initially looked into the approach suggested in this article, but after reviewing it I decided to go for a docker-only approach, as it leaves my system clean and updates are easier.

Step 0: Install docker
I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package
As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.
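For reference, the gist of those instructions is roughly the following (a sketch; the repository setup step from the NVIDIA page is omitted here):

$ sudo apt update
$ sudo apt install nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker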

Step 2: Run the Open WebUI container bundled with Ollama
I could install ollama directly on linux or run it on docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I've executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI on http://localhost:3000/ and the Ollama API on http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as it is needed to export it to the host).
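To double-check that the GPU is actually visible from inside the container, something like the following should print the GPU details (assuming the NVIDIA runtime injects nvidia-smi into the container, as it normally does with --gpus=all):

$ docker exec open-webui nvidia-smi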

Step 3: Install the DeepSeek-R1 model from Open WebUI
After creating an admin user on the Open WebUI, I went to the "select a model" button and asked to download the deepseek-r1 model (at the time of writing this it matches version deepseek-r1:7b, which uses 4.7GB of space). After the download was completed I was able to start chatting with the model on my local machine.
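The same download can also be triggered from the command line against the exported Ollama API (a sketch; the model tag matches the one mentioned above):

$ curl http://localhost:11434/api/pull -d '{"model": "deepseek-r1:7b"}'
$ curl http://localhost:11434/api/tags   # list the locally available models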

Step 4: Ask a question
The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT, but the History, Package Management and Comparison with Other Distributions sections are hilarious; still, they provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, ...
As everybody is testing the model I will not talk more about the chat and the results; I just thought that this answer was really funny.
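For scripted tests, the same question can be asked directly against the Ollama API instead of the web UI; a minimal sketch:

$ curl http://localhost:11434/api/generate -d '{
    "model": "deepseek-r1:7b",
    "prompt": "Can you tell me what is Debian?",
    "stream": false
  }'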

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI
As done before, to download the models I went to the "select a model" button and asked to download the deepseek-coder and deepseek-coder-v2 models (the default version of the first one is said to be really quick and small, while version two is supposed to be better, but slower and bigger, so I decided to install both for testing).

Step 6: Integrate Ollama with Neovim
For some months I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but I wanted to try it, and it comes in handy when you need to perform repetitive tasks when programming. It seems that there are multiple neovim plugins that support ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    },
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama' },
    inline = { adapter = 'ollama' },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to test it more to see if it is really useful; I'll try to do that on future projects.

Conclusion
At a personal level I don't like nor trust AI systems, but as long as they are treated as tools and not as a magical thing you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

Russ Allbery: Review: The Sky Road

Review: The Sky Road, by Ken MacLeod
Series: Fall Revolution #4
Publisher: Tor
Copyright: 1999
Printing: August 2001
ISBN: 0-8125-7759-0
Format: Mass market
Pages: 406
The Sky Road is the fourth book in the Fall Revolution series, but it represents an alternate future that diverges after (or during?) the events of The Star Fraction. You probably want to read that book first, but I'm not sure reading The Stone Canal or The Cassini Division adds anything to this book other than frustration. Much more on that in a moment. Clovis colha Gree is an aspiring doctoral student in history with a summer job as a welder. He works on the platform for the project, which the reader either slowly discovers from the book or quickly discovers from the cover is a rocket to get to orbit. As the story opens, he meets (or, as he describes it, is targeted by) a woman named Merrial, a tinker who works on the guidance system. The early chapters provide only a few hints about Clovis's world: a statue of the Deliverer on a horse that forms the backdrop of their meeting, the casual carrying of weapons, hints that tinkers are socially unacceptable, and some division between the white logic and the black logic in programming. Also, because this is a Ken MacLeod novel, everyone is obsessed with smoking and tobacco the way that the protagonists of erotica are obsessed with sex. Clovis's story is one thread of this novel. The other, told in the alternating chapters, is the story of Myra Godwin-Davidova, chair of the governing Council of People's Commissars of the International Scientific and Technical Workers' Republic, a micronation embedded in post-Soviet Kazakhstan. Series readers will remember Myra's former lover, David Reid, as the villain of The Stone Canal and the head of the corporation Mutual Protection, which is using slave labor (sort of) to support a resurgent space movement and its attempt to take control of a balkanized Earth. The ISTWR is in decline and a minor power by all standards except one: They still have nuclear weapons. So, first, we need to talk about the series divergence. I know from reading about this book on-line that The Sky Road is an alternate future that does not follow the events of The Stone Canal and The Cassini Division. I do not know this from the text of the book, which is completely silent about even being part of a series. More annoyingly, while the divergence in the Earth's future compared to The Cassini Division is obvious, I don't know what the Jonbar hinge is. Everything I can find on-line about this book is maddeningly coy. Wikipedia claims the divergence happens at the end of The Star Fraction. Other reviews and the Wikipedia talk page claim it happens in the middle of The Stone Canal. I do have a guess, but it's an unsatisfying one and I'm not sure how to test its correctness. I suppose I shouldn't care and instead take each of the books on their own terms, but this is the type of thing that my brain obsesses over, and I find it intensely irritating that MacLeod didn't explain it in the books themselves. It's the sort of authorial trick that makes me feel dumb, and books that gratuitously make me feel dumb are less enjoyable to read. The second annoyance I have with this book is also only partly its fault. This series, and this book in particular, is frequently mentioned as good political science fiction that explores different ways of structuring human society. This was true of some of the earlier books in a surprisingly superficial way. Here, I would call it hogwash.
This book, or at least the Myra portion of it, is full of people doing politics in a tactical sense, but like the previous books of this series, that politics is mostly embedded in personal grudges and prior romantic relationships. Everyone involved is essentially an authoritarian whose ability to act as they wish is only contested by other authoritarians and is largely unconstrained by such things as persuasion, discussions, elections, or even theory. Myra and most of the people she meets are profoundly cynical and almost contemptuous of any true discussion of political systems. This is the trappings and mechanisms of politics without the intellectual debate or attempt at consensus, turning it into a zero-sum game won by whoever can threaten the others more effectively. Given the glowing reviews I've seen in relatively political SF circles, presumably I am missing something that other people see in MacLeod's approach. Perhaps this level of pettiness and cynicism is an accurate depiction of what it's like inside left-wing political movements. (What an appalling condemnation of left-wing political movements, if so.) But many of the on-line reviews lead me to instead conclude that people's understanding of "political fiction" is stunted and superficial. For example, there is almost nothing Marxist about this book (it contains essentially no economic or class analysis whatsoever), but MacLeod uses a lot of Marxist terminology and sets half the book in an explicitly communist state, and this seems to be enough for large portions of the on-line commentariat to conclude that it's full of dangerous, radical ideas. I find this sadly hilarious given that MacLeod's societies tend, if anything, towards a low-grade libertarianism that would be at home in a Robert Heinlein novel. Apparently political labels are all that's needed to make political fiction; substance is optional. So much for the politics. What's left in Clovis's sections is a classic science fiction adventure in which the protagonist has a radically different perspective from the reader and the fun lies in figuring out the world-building through the skewed perspective of the characters. This was somewhat enjoyable, but would have been more fun if Clovis had any discernible personality. Sadly he instead seems to be an empty receptacle for the prejudices and perspective of his society, which involve a lot of quasi-religious taboos and an essentially magical view of the world. Merrial is a more interesting character, although as always in this series the romance made absolutely no sense to me and seemed to be conjured by authorial fiat and weirdly instant sexual attraction. Myra's portion of the story was the part I cared more about and was more invested in, aided by the fact that she's attempting to do something more interesting than launch a crewed space vehicle for no obvious reason. She at least faces some true moral challenges with no obviously correct response. It's all a bit depressing, though, and I found Myra's unwillingness to ground her decisions in a more comprehensive moral framework disappointing. If you're going to make a protagonist the ruler of a communist state, even an ironic one, I'd like to hear some real political philosophy, some theory of sociology and economics that she used to justify her decisions. The bits that rise above personal animosity and vibes were, I think, said better in The Cassini Division. This series was disappointing, and I can't say I'm glad to have read it.
There is some small pleasure in finishing a set of award-winning genre books so that I can have a meaningful conversation about them, but the awards failed to find me better books to read than I would have found on my own. These aren't bad books, but the amount of enjoyment I got out of them didn't feel worth the frustration. Not recommended, I'm afraid. Rating: 6 out of 10

27 January 2025

Sergio Talens-Oliag: Running a Debian Sid on Ubuntu

Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don't believe it is really necessary or justified (I don't need support, I don't provide support to others and I usually test my shell scripts on multiple systems if needed anyway). Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old Dell XPS 13) was stolen from him a year ago. I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS), but I don't have a Debian Sid machine anymore, and while I could reinstall my work machine, I've decided I'm going to try to use a system container to run Debian Sid on it. As I want to use a container instead of a VM I've narrowed my options to lxc or systemd-nspawn (I have docker and podman installed, but I don't believe they are good options for running system containers). As I will want to take snapshots of the container filesystem I've decided to try incus instead of systemd-nspawn (I already have experience with the latter, and while it works well it has fewer features than incus).

Installing incus
As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I've decided to install the ones from the zabbly incus stable repository. To do it I've executed the following as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
Initially I only plan to use the command line tools, so I've installed the incus and the incus-extra packages, but once things work I'll probably install the incus-ui-canonical package too, at least for testing it:
apt update
apt install incus incus-extra

Adding my personal user to the incus-admin group
To be able to run incus commands as my personal user I've added it to the incus-admin group:
sudo adduser "$(id -un)" incus-admin
And I've logged out of my desktop session and back in again to make the changes effective.

Initializing the incus environment
To configure the incus environment I've executed the incus admin init command and accepted the defaults for all the questions, as they are good enough for my current use case.
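For reference, a minimal sketch of the initialization (the --minimal flag is an assumption based on the incus documentation; it should apply a default configuration without asking the interactive questions, which is roughly equivalent to accepting all the defaults):
# interactive initialization, accepting the defaults
incus admin init
# assumed non-interactive alternative that applies a default setup
incus admin init --minimal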

Creating a Debian container
To create a Debian container I've used the default debian/trixie image:
incus launch images:debian/trixie debian
This command downloads the image and creates a container named debian using the default profile. The exec command can be used to run a root login shell inside the container:
incus exec debian -- su -l
Instead of exec we can use the shell alias:
incus shell debian
which does the same as the previous command. Inside that shell we can try to update the machine to sid by changing the /etc/apt/sources.list file and using apt:
root@debian:~# echo "deb http://deb.debian.org/debian sid main contrib non-free" \
  >/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade
As my machine has docker installed, the apt update command fails because the network does not work; to fix it I've executed the commands of the following section and re-ran the apt update and apt dist-upgrade commands.

Making the incusbr0 bridge work with Docker
To avoid problems with docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as follows:
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack \
  --ctstate RELATED,ESTABLISHED -j ACCEPT
That makes things work now, but to make them persistent across reboots the rules need to be re-added each time the machine boots. As suggested by the incus documentation I've installed the iptables-persistent package (my command also purges the ufw package, as I was not using it) and saved the current rules when installing:
sudo apt install iptables-persistent --purge
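If the rules change later they can be saved again without reinstalling; a minimal sketch using the netfilter-persistent helper shipped by the iptables-persistent package:
# re-save the current rules so they are restored on the next boot
sudo netfilter-persistent save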

Integrating the DNS resolution of the container with the host
To make DNS resolution for the incus containers work from the host I've followed the incus documentation. To set up things manually I've run the following:
br="incusbr0";
br_ipv4="$(incus network get "$br" ipv4.address)";
br_domain="$(incus network get "$br" dns.domain)";
dns_address="$ br_ipv4%/* ";
dns_domain="$ br_domain:=incus ";
resolvectl dns "$br" "$ dns_address ";
resolvectl domain "$br" "~$ dns_domain ";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;
And to make the changes persistent across reboots I've created the following service file:
sh -c "cat <<EOF   sudo tee /etc/systemd/system/incus-dns-$ br .service
[Unit]
Description=Incus per-link DNS configuration for ${br}
BindsTo=sys-subsystem-net-devices-${br}.device
After=sys-subsystem-net-devices-${br}.device
[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns ${br} ${dns_address}
ExecStart=/usr/bin/resolvectl domain ${br} ~${dns_domain}
ExecStart=/usr/bin/resolvectl dnssec ${br} off
ExecStart=/usr/bin/resolvectl dnsovertls ${br} off
ExecStopPost=/usr/bin/resolvectl revert ${br}
RemainAfterExit=yes
[Install]
WantedBy=sys-subsystem-net-devices-${br}.device
EOF"
And enabled it:
sudo systemctl daemon-reload
sudo systemctl enable --now incus-dns-${br}.service
If all goes well the DNS resolution works from the host:
$ host debian.incus
debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea

Using my host user and home dir inside the container
To use my host user and home directory inside the container I need to add the user and group to the container. First I've added my user group with the same GID used on the host:
incus exec debian -- addgroup --gid "$(id --group)" --allow-bad-names \
  "$(id --group --name)"
Once I have the group I've added the user with the same UID and GID as on the host, without defining a password for it:
incus exec debian -- adduser --uid "$(id --user)" --gid "$(id --group)" \
  --comment "$(getent passwd "$(id --user -name)"   cut -d ':' -f 5)" \
  --no-create-home --disabled-password --allow-bad-names \
  "$(id --user --name)"
Once the user is created we can mount the home directory in the container (we add the shift option to make the container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOME path=$HOME shift=true
We already have a shell alias to log in with the root account; now we can add another one to log into the container using the newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id --user --name)"
To log into the container as our user now we just need to run:
incus ush debian
To be able to use sudo inside the container we could add our user to the sudo group:
incus exec debian -- adduser "$(id --user --name)" "sudo"
But that requires a password and we don't have one, so instead we are going to add a file to the /etc/sudoers.d directory to allow our user to run sudo without a password:
incus exec debian -- \
  sh -c "echo '$(id --user --name) ALL = NOPASSWD: ALL' /etc/sudoers.d/user"

Accessing the container using ssh
To use the container as a real machine and log into it as I do on remote machines I've installed the openssh-server package and authorized my laptop's public key in my own authorized_keys file (as we are mounting the home directory from the host, that allows us to log in without a password from the local machine). Also, to be able to run X11 applications from the container I've adjusted the $HOME/.ssh/config file to always forward X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package. After that I can log into the container running the command ssh debian.incus and start using it after installing other interesting tools like neovim, rsync, tmux, etc.
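As a sketch, the fragment added to the $HOME/.ssh/config file looks like this (the host name is the one resolved through the incus DNS integration set up earlier):
# $HOME/.ssh/config
Host debian.incus
    # always forward X11 so graphical applications run from the container
    ForwardX11 yes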

Taking snapshots of the container
As this is a system container we can take snapshots of it using the incus snapshot command; that can be specially useful before doing a dist-upgrade so we can roll back if something goes wrong. To create a snapshot we use the create subcommand:
incus snapshot create debian
The snapshot subcommands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.
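For example, to list the snapshots of the debian container, restore one, or delete one (the snapshot name is illustrative; incus assigns names like snap0, snap1, ... when none is given):
incus snapshot list debian
incus snapshot restore debian snap0
incus snapshot delete debian snap0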

Conclusion
Since last week I have had a terminal running a tmux session on the Debian Sid container with multiple zsh windows open (I've changed the prompt to be able to notice easily where I am) and it is working as expected. My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system without having to reinstall my work machine. I'll probably write more about it in the future, but for now, I'm happy with the results.

Russ Allbery: Review: The House That Fought

Review: The House That Fought, by Jenny Schwartz
Series: Uncertain Sanctuary #3
Publisher: Jenny Schwartz
Copyright: December 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 199
The House That Fought is the third and final book of the self-published space fantasy trilogy starting with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata. At the end of the last book, one of Kira's random and vibe-based trust decisions finally went awry. She has been betrayed! She's essentially omnipotent, the betrayal does not hurt her in any way, and, if anything, it helps the plot resolution, but she has to spend some time feeling bad about it first. Eventually, though, the band of House residents return to the problem of Earth's missing magic. By Earth here, I mean our world, which technically isn't called Earth in the confusing world-building of this series. Earth within this universe is an archetypal world that is the origin world for humans, the two types of dinosaurs, and Neanderthals. There are numerous worlds that have split off from it, including Human, the one world where humans are dominant, which is what we think of as Earth and what Kira calls Earth half the time. And by worlds, I mean entire universes (I think?), because traveling between "worlds" is dimensional travel, not space travel. But there is also space travel? The world building started out confusing and has degenerated over the course of the series. Given that the plot, such as it is, revolves around a world-building problem, this is not a good sign. Worse, though, is that the writing has become unedited, repetitive drivel. I liked the first book and enjoyed a few moments of the second book, but this conclusion is just bad. This is the sort of book that the maxim "show, don't tell" was intended to head off. The dull, thudding description of the justification for every character emotion leaves no room for subtlety or reader curiosity.
Evander was elf and I was human. We weren't the same. I had magic. He had the magic I'd unconsciously locked into his augmentations. We were different and in love. Speaking of our differences could be a trigger. I peeked at him, worried. My customary confidence had taken a hit. "We're different," he answered my unspoken question. "And we work anyway. We'll work to make us work."
There is page after page after page of this sort of thing: facile emotional processing full of cliches and therapy-speak, built on the most superficial of relationships. There's apparently a romance now, which happened with very little build-up, no real discussion or communication between the characters, and only the most trite and obvious relationship work. There is a plot underneath all this, but it's hard to make it suspenseful given that Kira is essentially omnipotent. Schwartz tries to turn the story into a puzzle that requires Kira to figure out what's going on before she can act, but this is undermined by the confusing world-building. The loose ends the plot has accumulated over the previous two books are mostly dropped, sometimes in a startlingly casual way. I thought Kira would care who killed her parents, for example; apparently, I was wrong. The previous books caught my attention with a more subtle treatment of politics than I expect from this sort of light space fantasy. The characters had, I thought, a healthy suspicion of powerful people and a willingness to look for manipulation or ulterior motives. Unfortunately, we discover here that this is not due to an appreciation of the complexity of power and motive in governments. Instead, it's a reflexive bias against authority and structured society that sounds like an Internet libertarian complaining about taxes. Powerful people should be distrusted because all governments are corrupt and bad and steal your money in order to waste it. Oh, except for the cops and the military; they're generally good people you should trust. In retrospect, I should have expected this turn given the degree to which Schwartz stressed the independence of sorcerers. I thought that was going somewhere more interesting than sorcerers as self-appointed vigilantes who are above the law and can and should do anything they damn well please. Sadly, it was not. Adding to the lynch mob feeling, the ending of this book is a deeply distasteful bit of magical medieval punishment that I thought was vile, and which is, of course, justified by bad things happening to children. No societal problems were solved, but Kira got her petty revenge and got to be gleeful and smug about it. This is apparently what passes for a happy ending. I don't even know what to say about the bizarre insertion of Christianity, which makes little sense given the rest of the world-building. It's primarily a way for Kira to avoid understanding or thinking about an important part of the plot. As sadly seems to often be the case in books like this, Kira's faith doesn't appear to prompt any moral analysis or thoughtful ethical concern about her unlimited power, just certainty that she's right and everyone else is wrong. This was dire. It is one of those self-published books that I feel a little bad about writing this negative of a review about, because I think most of the problem was that the author's skill was not up to the story that she wanted to tell. This happens a lot in self-published fiction, particularly since Kindle Unlimited has started rewarding quantity over quality. But given how badly the writing quality degraded over the course of the series, and how offensive the ending was, I do want to warn other people off of the series. There is so much better fiction out there. Avoid this one, and probably the rest of the series unless you're willing to stop after the first book. Rating: 2 out of 10

24 January 2025

Reproducible Builds (diffoscope): diffoscope 286 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 286. This version includes the following changes:
[ Chris Lamb ]
* Bug fixes:
  - When passing files on the command line, don't call specialize(..) before
    we've checked that the files are identical. In the worst case, this was
    resulting in spinning up binwalk and extracting two entire filesystem
    images merely to confirm that they were indeed filesystem images...
    before simply concluding that they were identical anyway.
  - Do not exit with a traceback if paths are inaccessible, either directly,
    via symbolic links or within a directory. (Closes: #1065498)
  - Correctly identify changes to only the line-endings of files; don't mark
    them as "Ordering differences only".
  - Use the "surrogateescape" mechanism of str. decode,encode  to avoid a
    UnicodeDecodeError and crash when decoding zipinfo output that is not
    valid UTF-8. (Closes: #1093484)
* Testsuite changes:
  - Don't mangle newlines when opening test fixtures; we want them untouched.
  - Move to assert_diff in test_text.py.
* Misc:
  - Remove unnecessary return value from check_for_ordering_differences in
    the Difference class.
  - Drop an unused function in iso9660.py
  - Inline a call/check of Config().force_details; no need for an additional
    variable.
You can find out more by visiting the project homepage.

21 January 2025

Ravi Dwivedi: The Arduous Luxembourg Visa Process

In 2024, I was sponsored by The Document Foundation (TDF) to attend the LibreOffice annual conference in Luxembourg from the 10th to the 12th of October. Being an Indian passport holder, I needed a visa to visit Luxembourg. However, due to my Kenya trip coming up in September, I ran into a dilemma: whether to apply before or after the Kenya trip. To obtain a visa, I needed to submit my application with VFS Global (and not with the Luxembourg embassy directly). Therefore, I checked the VFS website for information on processing time, which says:
As a rule, the processing time of an admissible Schengen visa application should not exceed 15 calendar days (from the date the application is received at the Embassy).
It also mentions:
If the application is received less than 15 calendar days before the intended travel date, the Embassy can deem your application inadmissible. If so, your visa application will not be processed by the Embassy and the application will be sent back to VFS along with the passport.
If I applied for the Luxembourg visa before my trip, I would run the risk of not getting my passport back in time, and therefore missing my Kenya flight. On the other hand, if I waited until after returning from Kenya, I would run afoul of the aforementioned 15 calendar days needed by the embassy to process my application. I had previously applied for a Schengen visa for Austria, which was completed in 7 working days. My friends who had been to France told me they got their visa decision within a week. So, I compared Luxembourg's application numbers with those of other Schengen countries. In 2023, Luxembourg received 3,090 applications from India, while Austria received 39,558, Italy received 52,332 and France received 176,237. Since Luxembourg receives far fewer applications, I expected the process to be quick. Therefore, I submitted my visa application with VFS Global in Delhi on the 5th of August, giving the embassy a month with 18 working days before my Kenya trip. However, I didn't mention my Kenya trip in the Luxembourg visa application. For reference, here is a list of documents I submitted: I submitted "flight reservations" instead of "flight tickets" because, in case of visa rejection, I would have lost a significant amount of money if I had booked confirmed flight tickets; the embassy also recommends the same. After the submission of documents, my fingerprints were taken. The expenses for the visa application were as follows:
Service Description Amount (INR)
Visa Fee 8,114
VFS Global Fee 1,763
Courier 800
Total 10,677
Going by the emails sent by VFS, my application reached the Luxembourg embassy the next day. Fast-forward to the 27th of August, the 14th day of my visa application. I had already booked my flight ticket to Nairobi for the 4th of September, but my passport was still with the Luxembourg embassy, and I hadn't heard back. In addition, I had also obtained Kenya's eTA and got vaccinated for Yellow Fever, a requirement to travel to Kenya. In order to check on my application status, I gave the embassy a phone call, but missed their calling window, which was easy to miss since it was only 1 hour - 12:00 to 1:00 PM. So, I dropped them an email explaining my situation. At this point, I was already wondering whether to cancel the Kenya trip or the Luxembourg one, if I had to choose. After not getting a response to my email, I called them again the next day. The embassy told me they would look into it and asked me to send my flight tickets over email. One week to go before my flight now. I followed up with the embassy on the 30th by a phone call, and the person who picked up told me that my request had already been forwarded to the concerned department and was under process. They asked me to follow up on Monday, 2nd September. During the visa process, I was in touch with three other Indian attendees.1 In the meantime, I got to know that all of them had applied for a Luxembourg visa by the end of August. Back to our story: over the next two days, the embassy was closed for the weekend. I began weighing my options. On one hand, I could cancel the Kenya trip and hope that Luxembourg would go through. Even then, Luxembourg wasn't guaranteed, as the visa could get rejected, so I might have ended up missing both trips. On the other hand, I could cancel the Luxembourg visa application and at least be sure of going to Kenya. However, I thought that would make Luxembourg very unlikely because it didn't leave 15 calendar days for the embassy to process my visa after returning from Kenya. I also badly wanted to attend the LibreOffice conference because I couldn't make it two years ago. Therefore, I chose not to cancel my Luxembourg visa application. I checked with my travel agent and learned that I could cancel my Nairobi flight before September 4th for a cancelation fee of approximately 7,000 INR. On the 2nd of September, I was a bit frustrated because I hadn't heard anything from the embassy regarding my request. Therefore, I called the embassy again. They assured me that they would arrange a call for me from the concerned department that day, which I did receive later that evening. During the call, they offered to return my passport via VFS the next day and asked me to resubmit it after returning from Kenya. I immediately accepted the offer and was overjoyed, as it would enable me to take my flight to Nairobi without canceling my Luxembourg visa application. However, I didn't have the offer in writing, so it wasn't clear to me how I would collect my passport from VFS. I received it in writing the next day, while I was on my way to VFS, in the form of an email from the embassy which read:
Dear Mr. Dwivedi, We acknowledge the receipt of your email. As you requested, we are returning your passport exceptionally through VFS, you can collect it directly from VFS Delhi Center between 14:00-17:00 hrs, 03 Sep 2024. Kindly bring the printout of this email along with your VFS deposit receipt and Original ID proof. Once you are back from your trip, you can redeposit the passport with VFS Luxembourg for our processing. With best regards,
Consular Section GRAND DUCHY OF LUXEMBOURG
Embassy in New Delhi
I took a printout of the email and submitted it to VFS to get my passport. This seemed like a miracle - just when I had lost all hope of making it to my Kenya flight and was mentally preparing myself to miss it, I got my passport back exceptionally, and now I had to mentally prepare again for Kenya. I had never heard of an embassy returning a passport before completing the visa process. The next day, I took my flight to Nairobi as planned. In case you are interested, I have written two blog posts on my Kenya trip - one on the OpenStreetMap conference in Nairobi and the other on my travel experience in Kenya. After returning from Kenya, I resubmitted my passport on the 17th of September. Fast-forward to the 25th of September; I hadn't heard anything from the embassy about my application. So, I checked with TDF to see whether the embassy had reached out to them. They told me they had confirmed my participation and my hotel booking to the visa authorities on the 19th of September (6 days earlier). I was wondering what was taking so long after the verification. On the 1st of October, I received a phone call from the Luxembourg embassy, which turned out to be a surprise interview. They asked me about my work, my income, how I came to know about the conference, whether I had been to Europe before, etc. The call lasted around 10 minutes. At this point, my travel date - the 8th of October - was just two working days away, as the 2nd of October was off due to Gandhi Jayanti and the 5th and 6th of October were weekends, leaving only the 3rd and the 4th. I am not sure why the embassy saved this for the last moment, even though I had submitted my application 2 months earlier. I also got to know that one of the other Indian attendees missed the call due to being in their college lab, where he was not allowed to take phone calls. Therefore, I recommend that the embassy agree on a time slot for the interview call beforehand. Visa decisions for all the above-mentioned Indian attendees were sent by the embassy on the 4th of October, and I received mine on the 5th. For my travel date of the 8th of October, this was literally the last moment the embassy could send my visa. The parcel contained my passport and a letter. The visa was attached to a page in the passport. I was happy that my visa had been approved. However, the timing made my task challenging. The enclosed letter stated:
Subject: Your Visa Application for Luxembourg
Dear Applicant, We would like to inform you that a Schengen visa has been granted for the 8-day duration from 08/10/2024 to 30/10/2024 for conference purposes in Luxembourg. You are requested to report back to the Embassy of Luxembourg in New Delhi through an email (email address redacted) after your return with the following documents:
  • Immigration Stamps (Entry and Exit of Schengen Area)
  • Restaurant Bills
  • Shopping/Hotel/Accommodation bills
Failure to report to the Embassy after your return will be taken into consideration for any further visa applications.
I understand the embassy wanting to ensure my entry and exit from the Schengen area during the visa validity period, but found the demand for sending shopping bills excessive. Further, not everyone was as lucky as I was: it took a couple of days for one of the Indian attendees to receive their visa, delaying their plans. Another attendee had to send their father to the VFS center to collect their visa in time, rather than wait for the courier to arrive at their home. Foreign travel is complicated, especially for the citizens of countries whose passports and currencies are weak. Embassies issuing visas a day before the travel date doesn't help. For starters, a last-minute visa does not give enough time for obtaining a forex card, as banks ask for the visa. Further, getting foreign currency (Euros in our case) in cash with a good exchange rate becomes difficult. As an example, for the Kenya trip I had to get US Dollars at the airport due to the plan being finalized at the last moment, worsening the exchange rate. Back to the current case, the flight prices went up significantly compared to September, almost doubling. The choice of airlines also narrowed, as most of the flights got booked by the time I received my visa. With all that said, I think it was still better than an arbitrary rejection. Credits: Contrapunctus, Badri, Fletcher, Benson, and Anirudh for helping with the draft of this post.

  1. Thanks to Sophie, our point of contact for the conference, for putting me in touch with them.

19 January 2025

François Marier: Blocking comment spammers on an Ikiwiki blog

Despite comments on my ikiwiki blog being fully moderated, spammers have been increasingly posting link spam comments on my blog. While I used to use the blogspam plugin, the underlying service was likely retired circa 2017 and its public repositories are all archived. It turns out that there is a relatively simple way to drastically reduce the amount of spam submitted to the moderation queue: ban the datacentre IP addresses that spammers are using.

Looking up AS numbers
It all starts by looking at the IP address of a submitted comment. From there, we can look it up using whois:
$ whois -r 2a0b:7140:1:1:5054:ff:fe66:85c5
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See https://docs.db.ripe.net/terms-conditions.html
% Note: this output has been filtered.
%       To receive output for a database update, use the "-B" flag.
% Information related to '2a0b:7140:1::/48'
% Abuse contact for '2a0b:7140:1::/48' is 'abuse@servinga.com'
inet6num:       2a0b:7140:1::/48
netname:        EE-SERVINGA-2022083002
descr:          servinga.com - Estonia
geoloc:         59.4424455 24.7442221
country:        EE
org:            ORG-SG262-RIPE
mnt-domains:    HANNASKE-MNT
admin-c:        CL8090-RIPE
tech-c:         CL8090-RIPE
status:         ASSIGNED
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:12:49Z
last-modified:  2024-12-04T12:07:26Z
source:         RIPE
% Information related to '2a0b:7140:1::/48AS207408'
route6:         2a0b:7140:1::/48
descr:          servinga.com - Estonia
origin:         AS207408
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:18:11Z
last-modified:  2024-12-11T23:09:19Z
source:         RIPE
% This query was served by the RIPE Database Query Service version 1.114 (SHETLAND)
The important bit here is this line:
origin:         AS207408
which refers to Autonomous System 207408, owned by a hosting company in Germany called Servinga.

Looking up IP blocks
Autonomous Systems are essentially organizations to which IPv4 and IPv6 blocks have been allocated. These allocations can be looked up easily on the command line either using a third-party service:
$ curl -sL https://ip.guide/as207408 | jq .routes.v4 >> servinga
$ curl -sL https://ip.guide/as207408 | jq .routes.v6 >> servinga
or a local database downloaded from IPtoASN. This is what I ended up with in the case of Servinga:
[
  "45.11.183.0/24",
  "80.77.25.0/24",
  "194.76.227.0/24"
]
[
  "2a0b:7140:1::/48"
]

Preventing comment submission
While I do want to eliminate this source of spam, I don't want to block these datacentre IP addresses outright since legitimate users could be using these servers as VPN endpoints or crawlers. I therefore added the following to my Apache config to restrict the CGI endpoint (used only for write operations such as commenting):
<Location /blog.cgi>
        Include /etc/apache2/spammers.include
        Options +ExecCGI
        AddHandler cgi-script .cgi
</Location>
and then put the following in /etc/apache2/spammers.include:
<RequireAll>
    Require all granted
    # https://ipinfo.io/AS207408
    Require not ip 45.11.183.0/24
    Require not ip 80.77.25.0/24
    Require not ip 194.76.227.0/24
    Require not ip 2a0b:7140:1::/48
</RequireAll>
Finally, I can restart the website and commit my changes:
$ apache2ctl configtest && systemctl restart apache2.service
$ git commit -a -m "Ban all IP blocks from Servinga"

Future improvements
I will likely automate this process in the future, but at the moment my blog can go for a week without a single spam message (down from dozens every day). It's possible that I've already cut off the worst offenders. I have published the list I am currently using.
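As a sketch of what that automation could look like (a hypothetical, untested script reusing the ip.guide service and the file paths from above):
#!/bin/sh
# Regenerate the Apache include from the routes announced by each AS.
set -e
OUT=/etc/apache2/spammers.include
{
    echo "<RequireAll>"
    echo "    Require all granted"
    for asn in 207408; do
        echo "    # https://ipinfo.io/AS$asn"
        curl -sL "https://ip.guide/as$asn" \
            | jq -r '.routes.v4[], .routes.v6[]' \
            | sed 's/^/    Require not ip /'
    done
    echo "</RequireAll>"
} > "$OUT"
apache2ctl configtest && systemctl restart apache2.service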

17 January 2025

Russell Coker: Systemd Hardening and Sending Mail

A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons run systemd-analyze security, and to get details of one particular daemon run a command like "systemd-analyze security mon.service". I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren't a serious thing for Debian so this won't result in release critical bug reports, but it is still something we can aim for. For a simple daemon (e.g. BIND, dhcpd, and syslogd) this isn't difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls) but you can do some quick tests. The functionality of such programs has a limited scope and once you get it basically working it's done. For some daemons it's harder. Network-Manager is one of the well known slightly more difficult cases as it could do things like starting a VPN connection. The larger scope and the use of plugins makes it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike restrictions by SE Linux and AppArmor which permit a child process to run in a different security context. The messages when a daemon fails due to systemd restrictions are usually unclear, which makes things harder to set up and makes it more important to get it right. My mon package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases. I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access needed for delivering mail (writing to /home etc). This also means that every other child of mon will get such access, including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control. The problem with this Exim design is that it applies to many daemons. Every daemon that sends email, or that potentially could send email in some configuration, needs extra access to be granted. Can Exim be configured to have its sendmail -T type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?
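To illustrate the kind of restrictions involved, here is a minimal sketch of a hardening drop-in for a daemon like mon (the directives are from systemd.exec(5), but the exact set a given daemon tolerates is an assumption that has to be tested; ProtectHome is precisely the setting the Exim delivery problem described above runs into):
# /etc/systemd/system/mon.service.d/hardening.conf
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
# read-only /home breaks local mail delivery via Exim's single-process model
ProtectHome=read-only
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
After a systemctl daemon-reload and a restart, systemd-analyze security mon.service shows the resulting score.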

14 January 2025

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2024 edition

Another year of data from Société de Transport de Montréal, Montreal's transit agency! A few highlights this year:
  1. The closure of the Saint-Michel station had a drastic impact on D'Iberville, the station closest to it.
  2. The opening of the Royalmount shopping center nearly doubled the traffic of the De La Savane station.
  3. The Montreal subway continues to grow, but has not yet recovered from the pandemic. Berri-UQAM station (the largest one) is still below 1 million entries per quarter compared to its pre-pandemic record.
By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

13 January 2025

Kentaro Hayashi: Stick to boot from 6.11 linux-image

Since Dec 2024, there has been a compatibility issue with linux-image 6.12 and nvidia-driver 535.216.03. #1089513 - nvidia-driver: crash in drm_open_helper on Linux 6.12.3 - Debian Bug report logs It seems that upstream has already fixed this issue in a newer release, but it is not yet available on Debian sid. If you keep linux-image-amd64 or linux-headers-amd64 installed, the machine will boot from 6.12 by default. It will boot, but it still has the resolution issue, so it can't be your daily driver. Thus the workaround is to stick to booting from the 6.11 linux-image. As older images are usually listed in the "Advanced options for ..." submenu, you need to explicitly choose the 6.11 image during boot. In such a case, changing the default boot image is useful. (Another option is to just purge all 6.12 linux-image, linux-image-amd64 and linux-headers-amd64 packages, but that's out of scope for this article.) Look at /boot/grub/grub.cfg and collect each menuentry_id_option: you need the id of the "Advanced options for ..." submenu [1] and the id of the 6.11 menu entry inside it [2]. Then, edit the GRUB_DEFAULT entry in /etc/default/grub. GRUB_DEFAULT should be the combination of [1] and [2], concatenated with ">", e.g. (NOTE: the actual values might vary on your environment):
GRUB_DEFAULT="gnulinux-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547>gnulinux-6.11.10-amd64-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547"
After that, update the grub configuration with sudo update-grub; /boot/grub/grub.cfg will then be updated accordingly.
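As a sketch, the ids can be collected with grep (assuming the standard grub.cfg layout on Debian):
# print every menuentry/submenu id present in the current grub config
sudo grep -o "menuentry_id_option '[^']*'" /boot/grub/grub.cfg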

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors
In December, 19 contributors have been paid to work on Debian LTS, their reports are available:
  • Abhijith PA did 14.0h (out of 14.0h assigned).
  • Adrian Bunk did 47.75h (out of 53.0h assigned and 47.0h from previous period), thus carrying over 52.25h to the next month.
  • Andrej Shadura did 6.0h (out of 17.0h assigned and -7.0h from previous period after hours given back), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 15.0h (out of 0.0h assigned and 18.0h from previous period), thus carrying over 3.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 17.0h assigned and 9.0h from previous period), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 32.25h (out of 40.5h assigned and 19.5h from previous period), thus carrying over 27.75h to the next month.
  • Guilhem Moulin did 22.5h (out of 9.75h assigned and 12.75h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 3.5h assigned and 6.5h from previous period), thus carrying over 8.0h to the next month.
  • Lee Garrett did 8.5h (out of 14.75h assigned and 45.25h from previous period), thus carrying over 51.5h to the next month.
  • Lucas Kanashiro did 32.0h (out of 10.0h assigned and 54.0h from previous period), thus carrying over 32.0h to the next month.
  • Markus Koschany did 40.0h (out of 20.0h assigned and 20.0h from previous period).
  • Roberto C. Sánchez did 13.5h (out of 6.75h assigned and 17.25h from previous period), thus carrying over 10.5h to the next month.
  • Santiago Ruano Rincón did 18.75h (out of 24.75h assigned and 0.25h from previous period), thus carrying over 6.25h to the next month.
  • Sean Whitton did 6.0h (out of 2.0h assigned and 4.0h from previous period).
  • Sylvain Beucler did 10.5h (out of 21.5h assigned and 38.5h from previous period), thus carrying over 49.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation
In December, we have released 29 DLAs. The LTS Team has published updates to several notable packages. Contributor Guilhem Moulin published an update of php7.4, a widely-used open source general purpose scripting language, which addressed denial of service, authorization bypass, and information disclosure vulnerabilities. Contributor Lucas Kanashiro published an update of clamav, an antivirus toolkit for Unix and Linux, which addressed denial of service and authorization bypass vulnerabilities. Finally, contributor Tobias Frost published an update of intel-microcode, the microcode for Intel microprocessors, which will help to ensure that processor hardware is protected against several local privilege escalation and local denial of service vulnerabilities. Beyond our customary LTS package updates, the LTS Team has made contributions to Debian's stable bookworm release and its experimental section. Notably, contributor Lee Garrett published a stable update of dnsmasq. The LTS update was previously published in November, and in December Lee continued working to bring the same fixes (addressing the high profile KeyTrap and NSEC3 vulnerabilities) to the dnsmasq package in Debian bookworm. This package was accepted for inclusion in the Debian 12.9 point release scheduled for January 2025. Additionally, contributor Sean Whitton provided assistance, via upload sponsorships, to the Debian maintainers of xen. This assistance resulted in two uploads of xen into Debian's experimental section, which will contribute to the next Debian stable release having a version of xen with better long-term support from the upstream development team.

Thanks to our sponsors
Sponsors that joined recently are in bold.

12 January 2025

Dirk Eddelbuettel: Rcpp 1.0.14 on CRAN: Regular Semi-Annual Update

The Rcpp Core Team is once again thrilled, pleased, and chuffed (am I doing this right for LinkedIn?) to announce a new release (now at 1.0.14) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was only uploaded yesterday, and as always got flagged because of the grandfathered .Call(symbol) as well as for the url to the Rcpp book (which has remained unchanged for years) "failing". My email reply was promptly dealt with under European morning hours and by the time I got up the submission was in state "waiting" over a single reverse-dependency failure which is also spurious, appears on some systems and not others, and is also not new. Imagine that: nearly 3000 reverse dependencies and only one (spurious) change for the worse. Solid testing seems to help. My thanks as always to the CRAN team for responding promptly. This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. This time we also needed a one-off hotfix release 1.0.13-1: we had (accidentally) conditioned an upcoming R change on 4.5.0, but it already came with 4.4.2 so we needed to adjust our code. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo as well as the r-universe page and repo, and strongly encourage their use and testing. I run my systems with these versions which tend to work just as well, and are also fully tested against all reverse-dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2977 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 60.8% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 93.7 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1947 (JSS, 2011) and 354 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 676. This release is primarily incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R once again led to a few changes; Kevin once again did most of these PRs. Other contributed PRs include Gábor permitting builds on yet another BSD variant, Simon Guest correcting sourceCpp() to work on read-only files, Marco Colombo correcting a (surprisingly large) number of vignette typos, Iñaki rebuilding some documentation files that tickled (false) alerts, and I took care of a number of other maintenance items along the way. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.14 (2025-01-11)
  • Changes in Rcpp API:
    • Support for user-defined databases has been removed (Kevin in #1314 fixing #1313)
    • The SET_TYPEOF function and macro is no longer used (Kevin in #1315 fixing #1312)
    • An erroneous cast to int affecting large return objects has been removed (Dirk in #1335 fixing #1334)
    • Compilation on DragonFlyBSD is now supported (Gábor Csárdi in #1338)
    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Attributes:
    • The sourceCpp() function can now handle input files with read-only modes (Simon Guest in #1346 fixing #1345)
  • Changes in Rcpp Deployment:
    • One unit test for arm64 macOS has been adjusted; a macOS continuous integration runner was added (Dirk in #1324)
    • Authors@R is now used in DESCRIPTION as mandated by CRAN, the Rcpp.package.skeleton() function also creates it (Dirk in #1325 and #1327)
    • A single datetime format test has been adjusted to match a change in R-devel (Dirk in #1348 fixing #1347)
  • Changes in Rcpp Documentation:
    • The Rcpp Modules vignette was extended slightly following #1322 (Dirk)
    • Pdf vignettes have been regenerated under Ghostscript 10.03.1 to avoid a false positive by a Windows virus scanner (Iñaki in #1331)
    • A (large) number of (old) typos have been corrected in the vignettes (Marco Colombo in #1344)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
