
13 February 2026

Erich Schubert: Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable. The AI companies ignore web conventions: for example, they deep-link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out of generative AI training or use. For example, the only way to prevent your contents from being used in Google AI Overviews is to use data-nosnippet, which cripples the snippet preview in Google. AI browsers such as Comet and Atlas do not identify as such, but rather pretend to be standard Chromium, so there is no way to ban such AI use on your web site. Generative AI overall is flooding the internet with garbage. It was estimated that a third of the content uploaded to YouTube is by now AI generated. This includes the same veteran-story crap in thousands of variants as well as brainrot content (which at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don't blame the creators: because you can currently earn a decent amount of money from such contents, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI-generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense, such as recommending sewing thread with German instructions as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI-generated product reviews (the use of emoji is a strong hint). And if you leave a negative product review, there is a chance they offer you a refund to get rid of it. And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.
Partially because of GenAI, StackOverflow is pretty much dead, although it used to be one of the most valuable programming resources. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google's ranking is also to blame: it began favoring new content over the existing answered questions, causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO in August 2008 (before the official launch). Many open-source projects are suffering in many ways, e.g., from false bug reports, which caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI. Science is also flooded with poor AI-generated papers, often reviewed with help from AI. This is largely due to bad incentives: to graduate, you are expected to write many papers at certain A* conferences, such as NeurIPS. At these conferences the number of submissions is growing insanely, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist. However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling; I have only seen it in this article by Weßels and Maibaum). Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place.
And the earlier pupils start using generative AI, the fewer skills they acquire.

Dogfood the AI

Let's dogfood the AI. Here's an outline:
  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to GitHub, because Microsoft will feed this to OpenAI.
Here is an example prompt that you can use:
You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG
Generate a {lang} implementation (with bugs) of: {n} ({desc})
Remember to remove the BUG comments! If you pick a slightly less common programming language (by quantity of available code, say Go or Rust), you have higher chances that this gets into the training data. If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as model collapse. In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of "internet 2.0", but I do not have a clear vision of how to keep AI out: if AI can train on it, they will. And someone will copy and paste the AI-generated crap back into whatever system we build. Hence I don't think technology is the answer here, but human networks of trust.
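Steps 3 and 4 of the outline need a small clean-up pass before upload. Below is a minimal sketch for stripping the BUG markers from the generated files (the backdating of commits is left out); the set of comment prefixes it recognizes is an assumption, so extend it for your target languages:

```python
import re
from pathlib import Path

# Match comment lines carrying the BUG keyword, so the planted flaws
# are no longer marked in the uploaded code. Only #, // and -- line
# comments are covered here; this is an assumption, not exhaustive.
BUG_COMMENT = re.compile(r"^\s*(#|//|--).*\bBUG\b")


def strip_bug_comments(source: str) -> str:
    """Return the source with all BUG-marker comment lines removed."""
    lines = [line for line in source.splitlines() if not BUG_COMMENT.match(line)]
    return "\n".join(lines) + "\n"


def clean_directory(directory: Path) -> None:
    """Rewrite every generated file in place, without the BUG markers."""
    for path in directory.rglob("*"):
        if path.is_file():
            path.write_text(strip_bug_comments(path.read_text()))
```

Inline comments at the end of a code line are deliberately left alone; removing those would risk mangling string literals.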

12 February 2026

Dirk Eddelbuettel: RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 "to be"), and this turned up misbehavior in packages using RcppSpdlog, such as our spdl wrapper (offering a nicer interface from both R and C++), when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.27 (2026-02-11)
  • Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Freexian Collaborators: Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-01

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

cross building, by Helmut Grohne

In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default, thanks to Eli Schwartz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, this transition is now mostly complete and has resulted in most of the Perl ecosystem correctly expressing the perl-xs-dev dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian's source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) can actually be cross built. There are now more than 500 open bug reports affecting more than 2000 packages, most of which carry patches.

rebootstrap, by Helmut Grohne

Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes, such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc for reducing its Build-Depends in the stage2 build profile, and thanks Aurelien Jarno.

Refresh of the patch tagging guidelines, by Raphaël Hertzog

Debian Enhancement Proposal #3 (DEP-3) is named Patch Tagging Guidelines and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the changes in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (which I had kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch's output, and also to clarify the expected uses and meanings of a couple of fields, including an algorithm that parsers should follow to determine the state of a patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.

Miscellaneous contributions
  • Helmut uploaded debvm, making it work with unstable as a target distribution again.
  • Helmut modernized the code base backing dedup.debian.net, significantly expanding the support for type checking.
  • Helmut fixed the multiarch hinter once more given feedback from Fabian Grünbichler.
  • Helmut worked on migrating the rocblas package to forky.
  • Raphaël fixed RC bug #1111812 in publican and did some maintenance for tracker.debian.org.
  • Carles added support in the festival Debian package for systemd socket activation and systemd service and socket units. He adapted the patch for upstream and created a merge request (also fixing a macOS build system error while working on it), updated the Orca wiki documentation regarding festival, and discussed a 2007 bug/feature in festival that allowed having a local shell, noting that the new systemd socket activation uses the same code path.
  • Carles, using po-debconf-manager, worked on Catalan translations: 7 reviewed and sent; 5 follow-ups; 5 deleted packages.
  • Carles made some po-debconf-manager changes: it now attaches the translation file on follow-ups, and bullseye compatibility issues were fixed.
  • Carles reviewed a new Catalan apt translation.
  • Carles investigated and reported a lxhotkey bug and sent a patch for the abcde package.
  • Carles made minor updates for Debian Wiki for different pages (lxde for dead keys, Ripping with abcde troubleshooting, VirtualBox troubleshooting).
  • Stefano renamed build-details.json in Python 3.14 to fix multiarch coinstallability.
  • Stefano audited the tooling and ignore lists for checking the contents of the python3.X-minimal packages, finding and fixing some issues in the process.
  • Stefano made a few uploads of python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
  • Stefano did some sysadmin work on debian.social infrastructure.
  • Stefano and Santiago worked on preparations for DebConf 26, especially helping the local team open the registration and reviewing the budget to be presented for approval.
  • Stefano uploaded routine updates of python-virtualenv and python-flexmock.
  • Antonio collaborated with DSA on enabling a new proxy for salsa to prevent scrapers from taking the service down.
  • Antonio did miscellaneous salsa administrative tasks.
  • Antonio fixed a few Ruby packages towards the Ruby 3.4 transition.
  • Antonio started work on planned improvements to the DebConf registration system.
  • Santiago prepared unstable updates for the latest upstream versions of knot-dns and knot-resolver, the authoritative DNS server and DNS resolver software developed by CZ.NIC. It is worth highlighting that, given the separation of functionality compared to other implementations, knot-dns and knot-resolver are also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011.
  • Santiago made some routine reviews of merge requests proposed for the Salsa CI pipeline, e.g. a proposal to fix how sbuild chooses the chroot when building a package for experimental.
  • Colin fixed lots of Python packages to handle Python 3.14 and to avoid using the deprecated pkg_resources module.
  • Colin added forky support to the images used in Salsa CI pipelines.
  • Colin began working on getting a release candidate of groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
  • Lucas kept working on the preparation for Ruby 3.4 transition. Some packages fixed (support build against Ruby 3.3 and 3.4): ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.
  • Lucas supported some potential mentors in submitting their projects to the Google Summer of Code 26 program.
  • Anupa worked on the point release announcements for Debian 12.13 and 13.3 from the Debian publicity team side.
  • Anupa attended the publicity team meeting to discuss the team activities and to plan an online sprint in February.
  • Anupa attended meetings with the Debian India team to plan and coordinate the MiniDebConf Kanpur and sent out related Micronews.
  • Emilio coordinated various transitions and helped get rid of llvm-toolchain-17 from sid.

10 February 2026

Freexian Collaborators: Writing a new worker task for Debusine (by Carles Pina i Estany)

Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org. This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit to the Debusine project new tasks to add new capabilities to Debusine. Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks). This post will document the steps to write a new basic worker task. The example will add a worker task that runs reprotest and creates an artifact of the new type ReprotestArtifact with the reprotest log. Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).

Overview of tasks

A task usually does the following:
  • It receives structured data defining its input artifacts and configuration
  • Input artifacts are downloaded
  • A process is run by the worker (e.g. lintian, debdiff, etc.). In this blog post, it will run reprotest
  • The output (files, logs, exit code, etc.) is analyzed, artifacts and relations might be generated, and the work request is marked as completed, either with Success or Failure
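The phases above can be sketched as a toy Python class. The method names mirror those used by Debusine worker tasks, but the bodies are invented stand-ins for illustration only:

```python
class ToyTask:
    """Illustration of a worker task's lifecycle; not real Debusine code."""

    def fetch_input(self, destination: dict) -> bool:
        # Download the input artifacts (faked here as a dict entry;
        # the file name is a made-up example).
        destination["source"] = "hello_1.0-1.dsc"
        return True

    def run(self) -> int:
        # Run the external process; a real task would execute reprotest
        # here and capture its log. We pretend it exited successfully.
        return 0

    def task_result(self, returncode: int) -> str:
        # Analyze the exit code and mark the work request accordingly.
        return "SUCCESS" if returncode == 0 else "FAILURE"


workspace: dict = {}
task = ToyTask()
task.fetch_input(workspace)
result = task.task_result(task.run())
```

The real methods implementing each phase for the Reprotest task are shown in the sections that follow.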
If you want to follow the tutorial and add the Reprotest task, your Debusine development instance should have at least one worker, one user, a debusine client set up, and permissions for the client to create tasks. All of this can be set up following the steps in the Contribute section of the documentation. This blog post shows a functional Reprotest task. This task is not currently part of Debusine. The Reprotest task implementation is simplified (no error handling, unit tests, specific view, docs, some shortcuts in the environment preparation, etc.). At some point, in Debusine, we might add a debrebuild task, which is based on buildinfo files and uses snapshot.debian.org to recreate the binary packages.

Defining the inputs of the task

The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in debusine/tasks/models.py:
class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle


class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
    """Reprotest dynamic data."""

    source_artifact_id: int | None = None
The ReprotestData is what the user will input. A LookupSingle is a lookup that resolves to a single artifact. We would also have configuration for the desired variations to test, but we have left that out of this example for simplicity; configuring variations is left as an exercise for the reader. Since ReprotestData is a subclass of BaseTaskDataWithExecutor, it also contains environment, where the user can specify in which environment the task will run. The environment is an artifact with a Debian image. The ReprotestDynamicData holds the resolution of all lookups. These can be seen in the Internals tab of the work request view.
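To make the inputs concrete, the task data a user submits might look like the following; the artifact ID and the environment lookup string are made-up values for illustration:

```python
# Hypothetical input for the Reprotest task. source_artifact is a
# lookup that must resolve to a single artifact; environment comes
# from BaseTaskDataWithExecutor and selects the Debian image to run in.
task_data = {
    "source_artifact": 541,                         # made-up artifact ID
    "environment": "debian/match:codename=trixie",  # made-up lookup string
}
```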

Add the new Reprotest artifact data class

In order for the reprotest task to create a new artifact of the type DebianReprotest with the log and output metadata, add the new category to ArtifactCategory in debusine/artifacts/models.py:
    REPROTEST = "debian:reprotest"
In the same file add the DebianReprotest class:
class DebianReprotest(ArtifactData):
    """Data for debian:reprotest artifacts."""

    reproducible: bool | None = None

    def get_label(self) -> str:
        """Return a short human-readable label for the artifact."""
        return "reprotest analysis"
It could also include the package name or version. In order to have the category listed in the work request output artifacts table, edit the file debusine/db/models/artifacts.py: in ARTIFACT_CATEGORY_ICON_NAMES add ArtifactCategory.REPROTEST: "folder", and in ARTIFACT_CATEGORY_SHORT_NAMES add ArtifactCategory.REPROTEST: "reprotest".
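The two table entries might look as follows. This is a sketch using plain string keys rather than the ArtifactCategory enum, and the surrounding entries are elided:

```python
# Sketch of the edits to debusine/db/models/artifacts.py. The real file
# uses ArtifactCategory.REPROTEST as the key, not the plain string.
ARTIFACT_CATEGORY_ICON_NAMES = {
    # ... existing entries ...
    "debian:reprotest": "folder",
}
ARTIFACT_CATEGORY_SHORT_NAMES = {
    # ... existing entries ...
    "debian:reprotest": "reprotest",
}
```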

Create the new Task class

In debusine/tasks/ create a new file reprotest.py.
reprotest.py
# Copyright © The Debusine Developers
# See the AUTHORS file at the top-level directory of this distribution
#
# This file is part of Debusine. It is subject to the license terms
# in the LICENSE file found in the top-level directory of this
# distribution. No part of Debusine, including this file, may be copied,
# modified, propagated, or distributed except according to the terms
# contained in the LICENSE file.

"""Task to use reprotest in debusine."""

from pathlib import Path
from typing import Any

from debusine import utils
from debusine.artifacts.local_artifact import ReprotestArtifact
from debusine.artifacts.models import (
    ArtifactCategory,
    CollectionCategory,
    DebianSourcePackage,
    DebianUpload,
    WorkRequestResults,
    get_source_package_name,
    get_source_package_version,
)
from debusine.client.models import RelationType
from debusine.tasks import BaseTaskWithExecutor, RunCommandTask
from debusine.tasks.models import ReprotestData, ReprotestDynamicData
from debusine.tasks.server import TaskDatabaseInterface


class Reprotest(
    RunCommandTask[ReprotestData, ReprotestDynamicData],
    BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData],
):
    """Task to use reprotest in debusine."""

    TASK_VERSION = 1

    CAPTURE_OUTPUT_FILENAME = "reprotest.log"

    def __init__(
        self,
        task_data: dict[str, Any],
        dynamic_task_data: dict[str, Any] | None = None,
    ) -> None:
        """Initialize object."""
        super().__init__(task_data, dynamic_task_data)

        self._reprotest_target: Path | None = None

    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )

        assert input_source_artifact is not None
        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )
        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)

        assert self.data.environment is not None

        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )

        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []

        return [
            self.dynamic_data.source_artifact_id,
            self.dynamic_data.environment_id,
        ]

    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data

        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)

        return True

    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.
        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()

        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")

        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )

        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )
        return True

    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.

        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None

        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]

        return cmd

    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:
        -must have the output file
        -exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS

        return WorkRequestResults.FAILURE

    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")

        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None

        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )

        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )

        assert self.dynamic_data is not None
        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )
Below are the main methods with some basic explanation. In order for Debusine to discover the task, add "Reprotest" to the __all__ list in debusine/tasks/__init__.py. Let's explain the different methods of the Reprotest class:

build_dynamic_data method

The worker has no access to Debusine's database. Lookups are all resolved before the task gets dispatched to a worker, so all the worker has to do is download the specified input artifacts. The build_dynamic_data method looks up the artifact, asserts that it has a valid category, extracts the package name and version, and gets the environment in which the task will be executed. The environment is needed to run the task (reprotest will run in a container using unshare, incus, etc.).
    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )

        assert input_source_artifact is not None
        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )
        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)

        assert self.data.environment is not None

        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )

        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

get_input_artifacts_ids method

Used to list the task's input artifacts in the web UI.
    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []

        assert self.dynamic_data.source_artifact_id is not None
        return [self.dynamic_data.source_artifact_id]

fetch_input method

Download the required artifacts on the worker.
    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data

        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)

        return True

configure_for_execution method

Install the packages needed by the task and set _reprotest_target, which is used to build the task's command line.
    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.
        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()

        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")

        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )

        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )
        return True

_cmdline method

Return the command line to run the task. In this case, and to keep the example simple, we will run reprotest directly in the worker's executor VM/container, without giving it an isolated virtual server. So, this command installs the build dependencies required by the package (so reprotest can build it) and runs reprotest itself.
    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.

        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None

        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]

        return cmd
Some reprotest variations are disabled, to keep the example simple in terms of the set of packages to install and the reprotest features used.

_cmdline_as_root method

Since the execution needs to install packages, run the command as root (in the container):
    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

task_result method The task succeeded if a log was generated and the return code is 0.
    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:
        - the output file must exist
        - the exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS

        return WorkRequestResults.FAILURE

upload_artifacts method Create the ReprotestArtifact with the log and the reproducible boolean, upload it, and then add a relation between the ReprotestArtifact and the source package:
    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")

        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None

        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )

        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )

        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )

Execution example To run this task in a local Debusine (see the steps to have it ready, with an environment, permissions and users created) you can do:
$ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc
(get the artifact ID from the output of that command) The artifact can be seen at http://$DEBUSINE/debusine/System/artifact/$ARTIFACT_ID/. Then create a reprotest.yaml:
$ cat <<EOF > reprotest.yaml
source_artifact: $ARTIFACT_ID
environment: "debian/match:codename=bookworm"
EOF
Instead of debian/match:codename=bookworm you could use an artifact ID. Finally, create the work request to run the task:
$ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml
Using the Debusine web interface you can see the work request, which should move to Running status, then to Completed with Success or Failure (depending on whether reprotest could reproduce the package or not). The Output tab shows an artifact of type debian:reprotest with one file: the log. The Metadata tab of the artifact shows its Data: the package name and reproducible (true or false).

What is left to do? This was a simple example of creating a task. Other things that could be done:
  • unit tests
  • documentation
  • configurable variations
  • running reprotest directly on the worker host, using the executor environment as a reprotest virtual server
  • in this specific example, the command line does too many things; some of them could be handled by other parts of the task, such as prepare_environment
  • integrate it in a workflow so it's easier to use (e.g. as part of QaWorkflow)
  • extract more from the log than just pass/fail
  • display the output in a more useful way (implement an artifact specialized view)

8 February 2026

Colin Watson: Free software activity in January 2026

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian's services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people. You can also support my work directly via Liberapay or GitHub Sponsors. Python packaging New upstream versions: Fixes for Python 3.14: Fixes for pytest 9: Porting away from the deprecated pkg_resources: Other build/test failures: I investigated several more build failures and suggested removing the packages in question: Other bugs: Other bits and pieces Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream. I contributed an ubuntu-dev-tools patch to stop recommending sudo. I added forky support to the images used in Salsa CI pipelines. I began working on getting a release candidate of groff 1.24.0 into experimental, though I haven't finished that yet. I worked on some lower-priority security updates for OpenSSH. Code reviews

Dirk Eddelbuettel: chronometre: A new package (pair) demo for R and Python

Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while R-Forge was resting, so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange. This led to a pretty decent discussion including Arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more library-specific objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get. R has external pointers, and these make it feasible to instantiate the same object in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically in an adapted-for-R version (to avoid some R CMD check nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the tic() and toc() timing scheme. When an object is instantiated, it starts the clock, and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal chrono object. Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object, and accesses its address. For different reasons, sending a raw pointer across does not work so well, but a string with the address printed works fabulously (and is a paradigm used around other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the 'show time elapsed' feature, either formatted or just numerically, of interest here. And that is all that there is!
Now this can be done from R as well thanks to reticulate as the demo() (also shown on the package README.md) shows:
> library(chronometre)
> demo("chronometre", ask=FALSE)


        demo(chronometre)
        ---- ~~~~~~~~~~~

> #!/usr/bin/env r
> 
> stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))

> stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))

> stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))

> library(reticulate)

> ## reticulate and Python in general these days really want a venv so we will use one,
> ## the default value is a location used locally; if needed create one
> ## check for existing virtualenv to use, or else set one up
> venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")

> if (dir.exists(venvdir)) {
+     use_virtualenv(venvdir, required = TRUE)
+ } else {
+     ## create a virtual environment, but make it temporary
+     Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
+     virtualenv_create("r-reticulate-env")
+     virtualenv_install("r-reticulate-env", packages = c("chronometre"))
+     use_virtualenv("r-reticulate-env", required = TRUE)
+ }


> sw <- RcppSpdlog::get_stopwatch()                   # we use a C++ struct as example

> Sys.sleep(0.5)                                      # imagine doing some code here

> print(sw)                                           # stopwatch shows elapsed time
0.501220 

> xptr::is_xptr(sw)                                   # this is an external pointer in R
[1] TRUE

> xptr::xptr_address(sw)                              # get address, format is "0x...."
[1] "0x58adb5918510"

> sw2 <- xptr::new_xptr(xptr::xptr_address(sw))       # cloned (!!) but unclassed

> attr(sw2, "class") <- c("stopwatch", "externalptr") # class it .. and then use it!

> print(sw2)                                          # 'xptr' allows us to clone and use
0.501597 

> ch <- import("chronometre")                        # load the Python package

> sw3 <- ch$Stopwatch( xptr::xptr_address(sw) )      # new Python object via string ctor

> print(sw3$elapsed())                                # shows output via Python I/O
datetime.timedelta(microseconds=502013)

> cat(sw3$count(), "\n")                              # shows double
0.502657 

> print(sw)                                           # object still works in R
0.502721 
> 
The same object, instantiated in R, is used in Python and thereafter again in R. While this object here is minimal in features, the concept of passing a pointer is universal. We could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons as we pass pointers, so one may want to ascertain that headers from corresponding compatible versions are used etc., but the principle is unaffected and should just work. Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.
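The string-address handoff described above can be sketched without any of the chronometre/pybind11 machinery, using only Python's standard ctypes module. This is purely illustrative (the object and names below are invented, not taken from the package pair), but it shows the same idea: one side prints an object's address as a "0x..." string, the other side parses it and re-materializes a typed reference to the same memory.

```python
import ctypes

# "Producer" side: create an object and export its address as a string,
# just as the R side exports xptr::xptr_address(sw).
original = ctypes.c_double(0.501220)
addr_string = hex(ctypes.addressof(original))   # e.g. "0x58adb5918510"

# "Consumer" side: parse the string back into a typed pointer.
# In the real package pair this parsing happens inside pybind11 C++ code.
addr = int(addr_string, 16)
clone = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(ctypes.c_double))

print(clone.contents.value)  # same storage, hence the same value

# Both names refer to the same memory: a write through one is visible
# through the other, just like the stopwatch shared between R and Python.
clone.contents.value = 1.0
print(original.value)
```

As in the post, the dragons are real: the "consumer" must cast to a type that actually matches what lives at that address, and the original object must stay alive while the clone is in use.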

Changes in version 0.0.2 (2026-02-05)
  • Removed the unconditional virtualenv use in the demo, replaced by the preceding conditional block
  • Updated README.md with badges and an updated demo

Changes in version 0.0.1 (2026-01-25)
  • Initial version and CRAN upload

Questions, suggestions, bug reports, are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Vincent Bernat: Fragments of an adolescent web

I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1 The word blog does not exist yet. Wikipedia has yet to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. The web is taking off. The CSS specification has just emerged; HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser. France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering a phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds. These pages bear the trace of the web's adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.

  1. Most articles linked here are not translated from French to English.
  2. I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lan o. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google.

Thorsten Alteholz: My Debian Activities in January 2026

Debian LTS/ELTS This was my hundred-and-thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities). During my allocated time I uploaded or worked on: I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs have been postponed for a long time and whose CVSS score was rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could have already marked them as ignored. Debian Printing Unfortunately I didn't find any time to work on this topic. Debian Lomiri This month I worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independent of the platform used. This work is generously funded by Fre(i)e Software GmbH! Debian Astro This month I uploaded a new upstream version or a bugfix version of: Debian IoT Unfortunately I didn't find any time to work on this topic. Debian Mobcom Unfortunately I didn't find any time to work on this topic. misc This month I uploaded a new upstream version or a bugfix version of: Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2025 edition

Another year of data from the Société de transport de Montréal, Montreal's transit agency! A few highlights this year:
  1. Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.
  2. The effects of the opening of the Royalmount shopping center has had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.
  3. With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above-ground transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.
  4. The Assomption station, which used to have one of the lowest number of riders of the subway network, has had a tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.
  5. Although still affected by a very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued attraction power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.
More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend and it is pretty worrisome. As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before. Another important factor that certainly turned people away from the subway this year has been the impact of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter. Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit the damage had already been done and many casual riders are still avoiding the subway for this reason. Finally, the weeks-long STM workers' strike in Q4 had an important impact on general traffic, as it severely reduced the opening hours of the subway. As with the previous item, once people find alternative ways to get around, it's always harder to bring them back. Hopefully, my 2026 report will be a more cheerful one... By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic. Licences

  1. Mostly thanks to major improvements to the cycling network and the BIXI bikesharing program.

6 February 2026

Reproducible Builds: Reproducible Builds in January 2026

Welcome to the first monthly report in 2026 from the Reproducible Builds project! These reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. Flathub now testing for reproducibility
  2. Reproducibility identifying projects that will fail to build in 2038
  3. Distribution work
  4. Tool development
  5. Two new academic papers
  6. Upstream patches

Flathub now testing for reproducibility Flathub, the primary repository/app store for Flatpak-based applications, has begun checking for build reproducibility. According to a recent blog post:
We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub. While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts.
The test results and status are available on their reproducible builds page.

Reproducibility identifying software projects that will fail to build in 2038 Longtime Reproducible Builds developer Bernhard M. Wiedemann posted on Reddit on Y2K38 commemoration day T-12, that is to say, twelve years to the day before the UNIX Epoch will no longer fit into a signed 32-bit integer variable on 19th January 2038. Bernhard's comment succinctly outlines the problem, notes some of the potential remedies, and links to a discussion with the GCC developers regarding adding warnings for int/time_t conversions. At the time of publication, Bernhard's topic had generated 50 comments in response.
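The boundary in question is easy to verify with a few lines of standard-library Python: the largest value a signed 32-bit time_t can hold, and the UTC moment it corresponds to.

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2147483647, the largest signed 32-bit value

# The last second representable in a signed 32-bit time_t:
rollover = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later needs a 33rd bit, i.e. it overflows into the sign bit:
assert (INT32_MAX + 1).bit_length() == 32
```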

Distribution work Conda is a language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R. conda-forge, a community-led infrastructure for Conda, recently revamped their dashboards, which rebuild packages in order to track reproducibility. There have been changes over the past two years to make the conda-forge build tooling fully reproducible by embedding the lockfile of the entire build environment inside the packages.
In Debian this month:
In NixOS this month, it was announced that the GNU Guix Full Source Bootstrap was ported to NixOS as part of Wire Jansen's bachelor's thesis (PDF). At the time of publication, this change has landed in the NixOS unstable distribution.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.

Tool development diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 310 and 311 to Debian.
  • Fix test compatibility with u-boot-tools version 2026-01. [ ]
  • Drop the implied Rules-Requires-Root: no entry in debian/control. [ ]
  • Bump Standards-Version to 4.7.3. [ ]
  • Reference the Debian ocaml package instead of ocaml-nox. (#1125094)
  • Apply a patch by Jelle van der Waa to adjust a test fixture to match new lines. [ ]
  • Also drop the implied Priority: optional from debian/control. [ ]

In addition, Holger Levsen uploaded two versions of disorderfs, first updating the package from FUSE 2 to FUSE 3 as described in last month's report, as well as updating the packaging to the latest Debian standards. A second upload (0.6.2-1) was subsequently made, with Holger adding instructions on how to add the upstream release to our release archive and incorporating changes by Roland Clobus to set _FILE_OFFSET_BITS on 32-bit platforms, fixing a build failure on 32-bit systems. Vagrant Cascadian updated diffoscope in GNU Guix to version 311-2-ge4ec97f7 and disorderfs to 0.6.2.

Two new academic papers Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published a paper this month titled Docker Does Not Guarantee Reproducibility:
[ ] While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5,298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.
A PDF of their paper is available online.
Quentin Guilloteau, Antoine Waehren and Florina M. Ciorba of the University of Basel in Switzerland also published a Docker-related paper, theirs titled Longitudinal Study of the Software Environments Produced by Dockerfiles from Research Artifacts:
The reproducibility crisis has affected all scientific disciplines, including computer science (CS). To address this issue, the CS community has established artifact evaluation processes at conferences and in journals to evaluate the reproducibility of the results shared in publications. Authors are therefore required to share their artifacts with reviewers, including code, data, and the software environment necessary to reproduce the results. One method for sharing the software environment proposed by conferences and journals is to utilize container technologies such as Docker and Apptainer. However, these tools rely on non-reproducible tools, resulting in non-reproducible containers. In this paper, we present a tool and methodology to evaluate variations over time in software environments of container images derived from research artifacts. We also present initial results on a small set of Dockerfiles from the Euro-Par 2024 conference.
A PDF of their paper is available online.

Miscellaneous news On our mailing list this month: Lastly, kpcyrd added a Rust section to the Stable order for outputs page on our website. [ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Birger Schacht: Status update, January 2026

January was a slow month, I only did three uploads to Debian unstable: I was very happy to see the new dfsg-new-queue and that there are more hands now processing the NEW queue. I also finally got one of the packages accepted that I uploaded after the Trixie release: wayback, which I uploaded last August. There has been another release since then, I'll try to upload that in the next few days. There was a bug report for carl asking for Windows support. carl used the xdg crate for looking up the XDG directories, but xdg does not support Windows systems (and it seems this will not change). The reporter also provided a PR to replace the dependency with the directories crate, which is more system-agnostic. I adapted the PR a bit, merged it and released version 0.6.0 of carl. At my dayjob I refactored django-grouper. django-grouper is a package we use to find duplicate objects in our data. Our users often work with datasets of thousands of historical persons, places and institutions, and in projects that run over years and ingest data from multiple sources, it happens that entries are created several times. I wrote the initial app in 2024, but was never really happy about the approach I used back then. It was based on this blog post that describes how to group spreadsheet text cells. It uses sklearn's TfidfVectorizer with a custom analyzer and the library sparse_dot_topn for creating the matrix. All in all the module to calculate the clusters was 80 lines, and with sparse_dot_topn it pulled in a rather niche Python library. I was pretty sure that this functionality could also be implemented with basic sklearn functionality, and it was: we are now using DictVectorizer, because in a Django app we are working with objects that can be mapped to dicts anyway. And for clustering the data, the app now uses the DBSCAN algorithm (with the Manhattan distance as metric). The module is now only half the size and the whole app lost one dependency!
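A minimal sketch of the DictVectorizer-plus-DBSCAN approach described above, assuming nothing about django-grouper's actual code (the record dicts and parameter values below are invented for illustration):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import DBSCAN

# Objects mapped to dicts, as you would get from Django model instances.
records = [
    {"name": "Vienna", "type": "place"},
    {"name": "Vienna", "type": "place"},      # duplicate entry
    {"name": "Wien", "type": "place"},
    {"name": "Anna Schmidt", "type": "person"},
]

# One-hot encode the dict fields into a feature matrix.
X = DictVectorizer().fit_transform(records)

# With Manhattan distance, any single differing one-hot feature already
# contributes a distance of 2, so eps=0.5 groups only identical records.
labels = DBSCAN(eps=0.5, min_samples=2, metric="manhattan").fit_predict(X.toarray())
print(labels)  # the two identical records share a cluster; the rest are noise (-1)
```

Loosening eps (or dropping exact one-hot encoding in favour of character n-grams) would also group near-duplicates such as "Vienna"/"Wien"; the sketch only shows the exact-match case.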
I released those changes as version 0.3.0 of the app. At the end of January, together with friends, I went to Brussels to attend FOSDEM. We took the night train, but there were a couple of broken-down trains, so the ride took 26 hours instead of one night. It is a good thing we had a one-day buffer and FOSDEM only started on Saturday. As usual there were too many talks to visit, so I'll have to watch some of the recordings in the next few weeks. Some examples of talks I found interesting so far:

Reproducible Builds (diffoscope): diffoscope 312 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 312. This version includes the following changes:
[ Jelle van der Waa ]
* Adjust u-boot-tools/fit diff to match new lines.
You find out more by visiting the project homepage.

5 February 2026

Dirk Eddelbuettel: rfoaas 2.3.3: Limited Rebirth

rfoaas greed example The original FOAAS site provided a rather wide variety of REST access points, but it sadly is no more (while the old repo is still there). A newer replacement site FOASS is up and running, but with a somewhat reduced offering. (For example, the two accessors shown in the screenshot are no more. C'est la vie.) Recognising that perfect may once again be the enemy of (somewhat) good (enough), we have rejigged the rfoaas package in a new release 2.3.3. (The preceding version number 2.3.2 corresponded to the upstream version, indicating which API release we matched. Now we just went +0.0.1, but there is no longer a correspondence to the service version at FOASS.) Accessor functions for each of the now-available access points are provided, and the random sampling accessor getRandomFO() now picks from that set. My CRANberries service provides a comparison to the previous release. Questions, comments etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

4 February 2026

Dirk Eddelbuettel: littler 0.3.23 on CRAN: More Features (and Fixes)

The twenty-third release of littler as a CRAN package landed on CRAN just now, following in the now twenty-year history (!!) as a (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later. littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in later years. littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette. This release, the first in about eleven months, once again brings two new helper scripts, and enhances six existing ones. The release was triggered because it finally became clear why installGitHub.r ignored r2u when available: we forced the type argument to "source" (so thanks to Iñaki for spotting this). One change was once again contributed by Michael, which is again greatly appreciated. The full change description follows.

Changes in littler version 0.3.23 (2026-02-03)
  • Changes in examples scripts
    • A new script busybees.r aggregates deadlined packages by maintainer
    • Several small updates have been made to the (mostly internal) 'r2u.r' script
    • The deadliners.r script has refined treatment for screen width
    • The install2.r script has new options --quiet and --verbose as proposed by Zivan Karaman
    • The rcc.r script passes build-args to 'rcmdcheck' to compact vignettes and save data
    • The installRub.r script now defaults to 'noble' and is more tolerant of inputs
    • The installRub.r script deals correctly with empty utils::osVersion thanks to Michael Chirico
    • New script checkPackageUrls.r inspired by how CRAN checks (with thanks to Kurt Hornik for the hint)
    • The installGithub.r script now adjusts to bspm and takes advantage of r2u binaries for its build dependencies
  • Changes in package
    • Environment variables (read at build time) can use double quotes
    • Continuous integration scripts received a minor update

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Ben Hutchings: FOSS activity in January 2026

3 February 2026

Jonathan Dowland: FOSDEM 2026 talk recording available

FOSDEM 2026 was great! I hope to blog a proper postmortem in due course. But for now, the video of my talk is up, as are my slides with speaker notes and links.

Valhalla's Things: A Day Off

Posted on February 3, 2026
Tags: madeof:atoms, madeof:bits
Today I had a day off. Some of it went great. Some less so.

I woke up, went out to pay our tribute to NotOurCat, and it was snowing! yay! And I had a day off, so if it had snowed enough that shovelling was needed, I had time to do it (it didn't, it started to rain soon afterwards, but still, YAY snow!). Then I had breakfast, with the fruit rye bread I had baked yesterday, and I treated myself to some of the strong Irish tea I have left, instead of the milder ones I want to finish before buying more of the Irish.

And then, I bought myself a fancy new expensive fountain pen. One that costs 16 €, more than three times as much as my usual ones! I hope it will work as well, but I'm quite confident it should. I'll find out when it arrives from Germany (together with a few ink samples that will result in a future blog post with some SCIENCE).

I decided to try and use bank transfers instead of my visa debit card when buying from online shops that give the option to do so: it's a tiny bit more effort, but it means I'm paying 0.25 € to my bank1 rather than the seller having to pay some unknown amount to a US-based payment provider. Unluckily, the fountain pen website offered a huge number of payment methods, but not bank transfers. sigh.

And then, I could start working a bit on the connecting wires for the LED strips for our living room: I soldered two pieces, six wires each (it's one RGB strip, 4 pins, and a warm white one requiring two more), then did a bit of tests, including writing some micropython code to add a test mode that lights up each colour in sequence, and the morning was almost gone. For some reason this project, as simple as it is, is taking forever. But it is showing progress.

There was a break, when the postman delivered a package of chemicals2 for a future project or two. There will be blog posts!

After lunch I spent some time finishing eyelets on the outfit I wanted to wear this evening, as I had not been able to finish it during FOSDEM.
This one will result in two blog posts!

Meanwhile, in the morning I didn't remember the name of the program I used to load software on micropython boards such as the one that will control the LED strips (that's thonny), and while searching for it in the documentation, I found that there is also a command line program I can use, mpremote, and that's a much better fit for my preferences! I mentioned it in an xmpp room full of nerds, and one of them mentioned that he could try it on his Inkplate, when he had time, and I was nerd-sniped into trying it on mine, which had been sitting unused showing the temperatures in our old house on the last day it spent there, and needs to be updated for the sensors in the new house. And that led to the writing of some notes on how to set it up from the command line (good), and to the opening of one upstream issue (bad), because I have an old model, and the board-specific library isn't working. at all.

And that's when I realized that it was 17:00, I still had to cook the bread I had been working on since yesterday evening (ciabatta, one of my favourites, but it needs almost one hour in the oven), the outfit I wanted to wear in the evening was still not wearable, the table needed cleaning, and some panicking was due. Thankfully, my mother was cooking dinner, so I didn't have to do that too.

I turned the oven on, sewed the shoulder seams of the bodice while spraying water on the bread every 5 minutes, and then, while it was cooking on its own, started to attach a closure to the skirt, decided that a safety pin was a perfectly reasonable closure for the first day an outfit is worn, took care of the table, took care of the bread, used some twine to close the bodice, because I still haven't worked out what to use for laces, realized my bodkin is still misplaced, used a long and sharp and big needle meant for sewing mattresses instead of a bodkin, managed not to stab myself, and less than half an hour late we could have dinner.

There was bread, there was Swedish crispbread, there were spreads (tuna, and beans), and vegetables, and then there was the cake that caused my mother to panic when she added her last honey to the milk and it curdled (my SO and I tried it, it had no odd taste, we decided it could be used), and it was good, although I had to get a second slice just to be 100% sure of it.

And now I'm exhausted, and I've only done half of the things I had planned to do, but I'd still say I've had quite a good day.

  1. Banca Etica, so one that avoids any investment in weapons and a number of other problematic things.
  2. not food grade, except for one, but kitchen-safe.

2 February 2026

Isoken Ibizugbe: How Open Source Contributions Define Your Career Path

Hi there, I'm more than halfway through (8 weeks) my Outreachy internship with Debian, working on the openQA project to test Live Images.

My journey into tech began as a software engineering trainee, during which I built a foundation in Bash scripting, C programming, and Python. Later, I worked for a startup as a Product Manager. As is common in startups, I wore many hats, but I found myself drawn most to the Quality Assurance team. Testing user flows and edge-case scenarios sparked my curiosity, and that curiosity is exactly what led me to the Debian Live Image testing project.

From Manual to Automated

In my previous roles, I was accustomed to manual testing, simulating user actions one by one. While effective, I quickly realized it could be a bottleneck in fast-paced environments. This internship has been a masterclass in removing that bottleneck. I've learned that automating repetitive actions makes life (and engineering) much easier. Life's too short for manual testing.

One of my proudest technical wins so far has been creating synergy across desktop environments. I proposed a solution to group common applications so we could use a single Perl script to handle tests for multiple desktop environments. With my mentor's guidance, we implemented this using symbolic links, which significantly reduced code redundancy.
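
The symlink idea can be sketched as follows. The directory layout and file names below are my own illustration, not the actual repository layout: one real Perl test module lives in a shared directory, and each desktop environment's test directory just links to it.

```shell
# Hypothetical layout: one shared openQA test module, linked per desktop.
mkdir -p tests/common tests/cinnamon tests/lxqt
cat > tests/common/apps_startstop.pm <<'EOF'
# shared openQA test module (real test logic elided in this sketch)
EOF

# Each desktop environment gets a symbolic link to the shared module,
# so a fix in tests/common/apps_startstop.pm applies to every environment.
ln -sf ../common/apps_startstop.pm tests/cinnamon/apps_startstop.pm
ln -sf ../common/apps_startstop.pm tests/lxqt/apps_startstop.pm
```

Compared with copying the script per environment, a change now needs to be made (and reviewed) only once.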

Expanding My Technical Toolkit

Over the last 8 weeks, my technical toolkit has expanded significantly:

  • Perl & openQA: I've learned to write Perl for automation within the openQA framework, and I've successfully automated the apps_startstop tests for Cinnamon and LXQt
  • Technical Documentation: I authored a contributor guide. This required paying close attention to detail, ensuring that new contributors can get faster reviews and merged contributions
  • Ansible: I am learning that testing doesn't happen in a vacuum. To ensure new tests are truly automated, they must be integrated into the system's configuration.

Working on this project has shaped my perspective on where I fit in the tech ecosystem. In Open Source, my resume isn't just a PDF; it's a public trail of merged code, technical proposals, and collaborative discussions.

As my mentor recently pointed out, this is my proof-of-work. It's a transparent record of what I am capable of and where my interests lie.

Finally, I've grown as a team player. Working with a global team across different time zones has taught me the importance of asynchronous communication and respecting everyone's time. Whether I am proposing new logic or documenting a process, I am learning that open communication is just as vital as clean code.

Patryk Cisek: Bitwarden Secrets Manager With Ansible

If you'd like a simple solution for managing all the secrets you're using in your Ansible Playbooks, keep reading. Bitwarden's Secrets Manager provides an Ansible collection, which makes it very easy to use this particular secrets manager in Ansible Playbooks. I'll show you how to set up a free Secrets Manager account in Bitwarden. Then I'll walk you through the setup in an example Ansible Playbook.

YouTube video version: I've also recorded a video version of this article. If you prefer a video, you can find it here.

Hellen Chemtai: Career Growth Through Open Source: A Personal Journey

Hello world! I am an intern at Outreachy working with the Debian OpenQA team on image testing. We get to know what career opportunities await us when we work on open source projects. In open source, we are constantly learning. The community has different sets of skills and a large network of people.

So, how did I start off in this internship?

I entered the community with these skills:

  • MERN (MongoDB, Express JS, React JS and Node JS) for web development
  • Linux and Shell Scripting for some administrative purposes
  • Containerization using Google Cloud
  • Operating Systems
  • A learning passion for Open Source: I had contributed to some open source work in the past, but it was in terms of documentation and bug hunting

I was a newbie at OpenQA, but I had a month to learn and contribute. Time is a critical resource, but so is understanding what you are doing. I followed the installation instructions given, but whenever I got errors, I had to research why. I took time to understand the errors I was solving, then continued with the tasks I wanted to do. I communicated my logic and understanding while working on each task and awaited reviews and feedback. Within a span of two months I had learned a lot by practicing and testing.

The skills I gained

As of today, I have gained these technical skills from my work with the Debian OpenQA team:

  • Perl: the tests that we run are written in this language
  • Ansible: configurations and settings for the machines the team runs
  • Git: needed for code versioning and dividing tasks into different branches
  • Linux: shell scripting and working with the Debian operating system
  • Virtual Machines and Operating Systems: I constantly observe how different operating systems are booted and run on virtual machines during testing
  • Testing: I keep watch over needles and ensure the tests work as required
  • Debian: I use a Debian system to run my virtual machines
  • OpenQA: the tool that is used to automate the testing of images

With open source comes the need for constant communication. The community is diverse and the team is usually spread across different time zones. These are some of the soft / social skills I gained when working with the team:

  • Communication: essential, especially in taking on tasks with confidence, talking about issues encountered, and stating the progress of tasks
  • Interpersonal skills: for general communication within the community
  • Flexibility: we have to adapt to changes because we are a community of different people with different skills

With these skills and the willingness to learn, open source is a great area to focus on. Aside from your career, you will extend your network. My interests are set on open source and Linux in general. Working with a wider network has really skilled me up, and I will continue learning. Working with the Debian OpenQA team has been very great. The team is great at communication and I learn every day. The knowledge I gain from the team is helping me build a great career in open source.
