Search Results: "agi"

27 October 2024

Enrico Zini: Typing decorators for class members with optional arguments

This looks straightforward but is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step-by-step explanation of what is going on in code that I'm about to push to my team.
Let's take this relatively straightforward Python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:
from unittest import mock
default = 42
def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works nicely as expected:
$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None
It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

After adding a simple @functools.wraps, mock unexpectedly stops working:
$ python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'
This is the new code, with explanations and a fix:
# Introduce functools
import functools
from unittest import mock
default = 42
def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    # Fix:
    # del wrapped.__wrapped__
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.signature, which follows the __wrapped__
    # attribute set by functools.wraps, and that points to the wrong
    # signature: the idea that value is optional is now lost
    fiddle.print()
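To see the mechanism in isolation, here is a minimal standalone sketch (the names are mine, not from the post) of how inspect.signature follows the __wrapped__ attribute that functools.wraps sets, and why deleting it restores the wrapper's own signature:
import functools
import inspect

def make_optional(f):
    @functools.wraps(f)
    def wrapped(value=None):
        # fall back to a default when the argument is omitted
        return f(42 if value is None else value)
    return wrapped

@make_optional
def show(value):
    print(value)

# inspect.signature follows show.__wrapped__ (set by functools.wraps),
# so it reports the original signature, where value is required:
print(inspect.signature(show))  # prints: (value)

# deleting __wrapped__ makes introspection see the wrapper itself:
del show.__wrapped__
print(inspect.signature(show))  # prints: (value=None)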
Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:
    # Give up on making value non-optional, to simplify things :(
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
Typing with ParamSpec
# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock
default = 42
P = ParamSpec("P")
def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:
test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
Typing with Callable

We can use explicit Callable argument lists:
# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock
default = 42
A = TypeVar("A")
# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:
test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]
typing's documentation says:
Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:
Let's do that!

Typing with Protocol, take 1
# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
New mypy complaints:
test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
What happens with class methods is that the function object has a __get__ method that generates a bound version of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.

Typing with Protocol, take 2

So... we add the function descriptor methods to our Protocol! A lot of this is taken from this discussion.
# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040
class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""
    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""
class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""
    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)
    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...
    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...
    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""
    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # the Printer protocol now defines __get__, so mypy is able to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works! It's typed! And mypy is happy!
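For the record, running this final version (assuming it is saved as test5.py, continuing the numbering of the earlier test files) prints the same output as the original untyped version:
$ python3 test5.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None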

24 October 2024

Emmanuel Kasper: back to blogging and running a feed reader as a containerized systemd service

After reading about Jonathan McDowell's feed reader install and the back to blogging initiative, I decided to install a feed reader to follow all those nice blog posts. With a feed reader you can compose your own feed of news based on blog posts, websites, and mastodon toots, and then you are independent from the ad-oriented ranking algorithms of social networks. Since Jonathan used FreshRSS as a feed reader, I started with the same software. At a quick glance at its GitHub page, it sounded like a good project:
  • active contributions
  • different channels for the stable and latest versions of the software
  • container images pointing to the stable release
  • support for multiple databases for storage, including PostgreSQL
  • correct documentation mentioning security caveats
I prefer to do the container image installation using podman since:
  • upgrades of FreshRSS are easy to do and can be done separately from operating system upgrades
  • I do not mess up my base operating system with PHP (subjective), and in case of a compromised FreshRSS, the FreshRSS/Apache install would still be restrained to its own Linux namespaces, separated from the rest of the system.
Podman is image-compatible with Docker, as they both implement the OCI runtime specification, and they have a nearly identical command line interface. This installation will be done on a Debian server, but should also work on any Linux distribution.

Initial setup
  • start a container image based on the start command provided by the FreshRSS project. The podman command line is nearly identical to the docker command line, except that podman expects the fully qualified domain name associated with the container image. I chose to run the freshrss container on the localhost interface only, and I use a defined version tag, because using the latest tag makes it complicated to track which exact version I have installed.
# podman pull docker.io/freshrss/freshrss:1.20.1
# podman run --detach --restart unless-stopped --log-opt max-size=10m \
  --publish 127.0.0.1:8081:80 \
  --env TZ=Europe/Paris \
  --env 'CRON_MIN=1,31' \
  --volume freshrss_data:/var/www/FreshRSS/data \
  --volume freshrss_extensions:/var/www/FreshRSS/extensions \
  --name freshrss \
  docker.io/freshrss/freshrss:1.20.1
  • verify where the podman volumes have been created. This is where the user data of freshrss will be stored.
# podman volume ls
# podman volume inspect freshrss_data
  • now that FreshRSS is installed, you can start its configuration wizard at localhost:8081. You should keep the default SQLite choice
  • finally, after running the wizard, you can log in again and add some feeds
  • verify that your config has been stored outside the container, inside the volume (so that it will not be erased in case of upgrades)
# ls -l /var/lib/containers/storage/volumes/freshrss_data/_data/users/
  • verify the state of the SQLite database
# echo '.tables' | sqlite3 /var/lib/containers/storage/volumes/freshrss_data/_data/users/<your freshrss user>/db.sqlite
category  entry     entrytag  entrytmp  feed      tag
Going with FreshRSS in Production

Podman has this very nice feature that it can generate a systemd unit from a running container, and use systemd to start a container on boot. This is in contrast to docker, where the docker daemon does the stop/start of containers on boot. I prefer the systemd approach, as it treats containers the same way as other system services. Once the freshrss container is running, we can generate a systemd unit for it with:
# podman generate systemd --new --name freshrss | tee /etc/systemd/system/container-freshrss.service
Let's stop the container we started previously, and use systemd to manage it:
# podman stop freshrss
# systemctl enable --now container-freshrss.service
We can verify that we have a listening socket on the localhost interface, on source port 8081:
# systemctl status container-freshrss.service
  ...
# ss --listening --numeric --process '( sport = 8081 )'
Netid         State           Recv-Q          Send-Q                   Local Address:Port                   Peer Address:Port         Process         
tcp           LISTEN          0               4096                         127.0.0.1:8081                        0.0.0.0:*             users:(("conmon",pid=4464,fd=5))
Nota Bene: conmon(8) is the process managing the network namespace in which FreshRSS is running, hence it is displayed as the process owning the listening socket.

Exposing FreshRSS to the external world

We now have a running service, but we need to make it reachable from the internet. The simplest, classical way is to create a subdomain and a VirtualHost configured as a reverse proxy to access the service at 127.0.0.1:8081. Fortunately the FreshRSS authors have documented this setup in https://github.com/FreshRSS/FreshRSS/tree/edge/Docker#alternative-reverse-proxy-using-apache and those steps are no different from a standard application behind a web reverse proxy.
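Here is a minimal sketch of such a VirtualHost (the subdomain is hypothetical, TLS is left out, and Apache's mod_proxy and mod_proxy_http must be enabled; the FreshRSS documentation linked above has the canonical configuration):
<VirtualHost *:80>
    ServerName freshrss.example.org
    # forward everything to the container published on localhost:8081
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>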
Upgrading the freshrss container to a newer version

Documentation showing how to install a piece of software is worth little when it does not show how to upgrade that software. Installing is easy; upgrading is where the challenge is. Fortunately, thanks to the good stateless design of FreshRSS (everything is in the SQLite database, which is backed by a non-ephemeral volume in our setup), switching versions is a piece of cake.
# podman pull docker.io/freshrss/freshrss:1.20.2
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.1,docker.io/freshrss/freshrss:1.20.2,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service
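If you later need to roll back, the same steps work in reverse; a sketch reverting to 1.20.1:
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.2,docker.io/freshrss/freshrss:1.20.1,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service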
As the sketch above shows, a rollback just reverts the version numbers. Enjoy your own feed reader! I will add the following feeds of blogs I like; let us see if I follow them better with a feed reader!

22 October 2024

Dirk Eddelbuettel: drat 0.2.5 on CRAN: Small Updates

A new minor release of the drat package arrived on CRAN today, which is just over a year since the previous release. drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users, because repositories with marked releases are the better way to distribute code. Because for once it really is as your mother told you: Friends don't let friends install random git commit snapshots. Properly rolled-up releases it is. Just as CRAN shows us: a model that has demonstrated for over two-and-a-half decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works. Detailed information about drat is at its documentation site. That said, and these days, if you mainly care about GitHub code then r-universe is there too, also offering the binaries it makes and all that jazz. But sometimes you just want to, or need to, roll a local repository, and drat can help you there. This release contains a small PR (made by Arne Holmin just after the previous release) adding support for an OSflavour variable (helpful for macOS). We also corrected an issue with one test file being insufficiently careful about using git2r only when installed, and as usual did a round of maintenance for the package concerning both continuous integration and documentation. The NEWS file summarises the release as follows:

Changes in drat version 0.2.5 (2024-10-21)
  • Function insertPackage has a new optional argument OSflavour (Arne Holmin in #142)
  • A test file conditions correctly about git2r being present (Dirk)
  • Several smaller packaging updates and enhancements to continuous integration and documentation have been added (Dirk)

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 October 2024

Russ Allbery: California general election

As usual with these every-two-year posts, probably of direct interest only to California residents. Maybe the more obscure things we're voting on will be a minor curiosity to people elsewhere. I'm a bit late this year, although not as late as last year, so a lot of people may have already voted, but I've been doing this for a while and wanted to keep it up. This post will only be about the ballot propositions. I don't have anything useful to say about the candidates that isn't hyper-local. I doubt anyone who has read my posts will be surprised by which candidates I'm voting for.

As always with California ballot propositions, it's worth paying close attention to which propositions were put on the ballot by the legislature, usually because there's some state law requirement (often one that I disagree with) that they be voted on by the public, and which propositions were put on the ballot by voter petition. The latter are often poorly written and have hidden problems. As a general rule of thumb, I tend to default to voting against propositions added by petition. This year, one can conveniently distinguish by number: the single-digit propositions were added by the legislature, and the two-digit ones were added by petition.

Proposition 2: YES. Issue $10 billion in bonds for public school infrastructure improvements. I generally vote in favor of spending measures like this unless they have some obvious problem. The opposition argument is a deranged rant against immigrants and government debt and fails to point out actual problems. The opposition argument also claims this will result in higher property taxes and, seriously, if only that were true. That would make me even more strongly in favor of it.

Proposition 3: YES. Enshrines the right to marriage without regard to sex or race into the California state constitution. This is already the law given US Supreme Court decisions, but fixing California state law is a long-overdue and obvious cleanup step. One of the quixotic things I would do if I were ever in government, which I will never be, would be to try to clean up the laws to make them match reality, repealing all of the dead clauses that were overturned by court decisions or are never enforced. I am in favor of all measures in this direction even when I don't agree with the direction of the change; here, as a bonus, I also strongly agree with the change.

Proposition 4: YES. Issue $10 billion in bonds for infrastructure improvements to mitigate climate risk. This is basically the same argument as Proposition 2. The one drawback of this measure is that it's kind of a mixed grab bag of stuff and probably some of it should be supported out of the general budget rather than bonds, but I consider this a minor problem. We definitely need to ramp up climate risk mitigation efforts.

Proposition 5: YES. Reduces the required super-majority to pass local bond measures for affordable housing from 67% to 55%. The fact that this requires a supermajority at all is absurd, California desperately needs to build more housing of any kind however we can, and publicly funded housing is an excellent idea.

Proposition 6: YES. Eliminates "involuntary servitude" (in other words, "temporary" slavery) as a legally permissible punishment for crimes in the state of California. I'm one of the people who think the 13th Amendment to the US Constitution shouldn't have an exception for punishment for crimes, so obviously I'm in favor of this.
This is one very, very tiny step towards improving the absolutely atrocious prison conditions in the state.

Proposition 32: YES. Raises the minimum wage to $18 per hour from the current $16 per hour, over two years, and ties it to inflation. This is one of the rare petition-based propositions that I will vote in favor of because it's very straightforward, we clearly should be raising the minimum wage, and living in California is absurdly expensive because we refuse to build more housing (see Propositions 5 and 33). The opposition argument is the standard lie that a higher minimum wage will increase unemployment, which we know from numerous other natural experiments is simply not true.

Proposition 33: NO. Repeals Costa-Hawkins, which prohibits local municipalities from enacting rent control on properties built after 1995. This one is going to split the progressive vote rather badly, I suspect. California has a housing crisis caused by not enough housing supply. It is not due to vacant housing, as much as some people would like you to believe that; the numbers just don't add up. There are way more people living here and wanting to live here than there is housing, so we need to build more housing. Rent control serves a valuable social function of providing stability to people who already have housing, but it doesn't help, and can hurt, the project of meeting actual housing demand. Rent control alone creates a two-tier system where people who have housing are protected but people who don't have housing have an even harder time getting housing than they do today. It's therefore quite consistent with the general NIMBY playbook of trying to protect the people who already have housing by making life harder for the people who do not, while keeping the housing supply essentially static. I am in favor of rent control in conjunction with real measures to increase the housing supply. I am therefore opposed to this proposition, which allows rent control without any effort to increase housing supply. I am quite certain that, if this passes, some municipalities will use it to make constructing new high-density housing incredibly difficult by requiring it all be rent-controlled low-income housing, thus cutting off the supply of multi-tenant market-rate housing entirely. This is already a common political goal in the part of California where I live. Local neighborhood groups advocate for exactly this routinely in local political fights. Give me a mandate for new construction that breaks local zoning obstructionism, including new market-rate housing to maintain a healthy lifecycle of housing aging into affordable housing as wealthy people move into new market-rate housing, and I will gladly support rent control measures as part of that package. But rent control on its own just allocates winners and losers without addressing the underlying problem.

Proposition 34: NO. This is an excellent example of why I vote against petition propositions by default. This is a law designed to affect exactly one organization in the state of California: the AIDS Healthcare Foundation. The reason for this targeting is disputed; one side claims it's because of the AHF support for Proposition 33, and another side claims it's because AHF is a slumlord abusing California state funding. I have no idea which side of this is true. I also don't care, because I am fundamentally opposed to writing laws this way.
Laws should establish general, fair principles that are broadly applicable, not be written with bizarrely specific conditions (health care providers that operate multifamily housing) that will only be met by a single organization. This kind of nonsense creates bad legal codes and the legal equivalent of technical debt. Just don't do this.

Proposition 35: YES. I am, reluctantly, voting in favor of this even though it is a petition proposition because it looks like a useful simplification and cleanup of state health care funding, makes an expiring tax permanent, and is supported by a very wide range of organizations that I generally trust to know what they're talking about. No opposition argument was filed, which I think is telling.

Proposition 36: NO. I am resigned to voting down attempts to start new "war on drugs" nonsense for the rest of my life because the people who believe in this crap will never, ever, ever stop. This one has bonus shoplifting fear-mongering attached, something that touches on nasty local politics that have included large retail chains manipulating crime report statistics to give the impression that shoplifting is up dramatically. It's yet another round of the truly horrific California "three strikes" criminal penalty obsession, which completely misunderstands both the causes of crime and the (almost nonexistent) effectiveness of harsh punishment as deterrence.

20 October 2024

Bits from Debian: Ada Lovelace Day 2024 - Interview with some Women in Debian

Ada Lovelace Day was celebrated on October 8 in 2024, and on this occasion, to celebrate and raise awareness of the contributions of women to the STEM fields, we interviewed some of the women in Debian. Here we share their thoughts, comments, and concerns with the hope of inspiring more women to become part of the Sciences, and of course, to work inside of Debian. This article was simulcasted to the debian-women mailing list.

Beatrice Torracca

1. Who are you? I am Beatrice, I am Italian. Internet technology and everything computer-related is just a hobby for me, not my line of work or the subject of my academic studies. I have too many interests and too little time. I would like to do lots of things and at the same time I am too Oblomovian to do any.

2. How did you get introduced to Debian? As a user I started using newsgroups when I had my first dialup connection and there was always talk about this strange thing called Linux. Since moving from DR DOS to Windows was a shock for me, feeling like I lost the control of my machine, I tried Linux with Debian Potato and I never strayed away from Debian since then for my personal equipment.

3. How long have you been into Debian? Define "into". As a user... since Potato, too many years to count. As a contributor, a similar amount of time, since early 2000 I think. My first archived email about contributing to the translation of the description of Debian packages dates to 2001.

4. Are you using Debian in your daily life? If yes, how? Yes!! I use testing. I have it on my desktop PC at home and I have it on my laptop. The desktop is where I have a local IMAP server that fetches all the mails of my email accounts, and where I sync and back up all my data. On both I do day-to-day stuff (from email to online banking, from shopping to taxes), all forms of entertainment, a bit of work if I have to work from home (GNU R for statistics, LibreOffice... the usual suspects). At work I am required to have another OS, sadly, but I am working on setting up a Debian Live system to use there too. Plus if at work we start doing bioinformatics there might be a Linux machine in our future... I will of course suggest and hope for a Debian system.

5. Do you have any suggestions to improve women's participation in Debian? This is a tough one. I am not sure. Maybe, more visibility for the women already in the Debian Project, and make the newcomers feel seen, valued and welcomed. A respectful and safe environment is key too, of course, but I think Debian made huge progress in that aspect with the Code of Conduct. I am a big fan of promoting diversity and inclusion; there is always room for improvement.

Ileana Dumitrescu (ildumi)

1. Who are you? I am just a girl in the world who likes cats and packaging Free Software.

2. How did you get introduced to Debian? I was tinkering with a computer running Debian a few years ago, and I decided to learn more about Free Software. After a search or two, I found Debian Women.

3. How long have you been into Debian? I started looking into contributing to Debian in 2021. After contacting Debian Women, I received a lot of information and helpful advice on different ways I could contribute, and I decided package maintenance was the best fit for me. I eventually became a Debian Maintainer in 2023, and I continue to maintain a few packages in my spare time.

4. Are you using Debian in your daily life? If yes, how? Yes, it is my favourite GNU/Linux operating system! I use it for email, chatting, browsing, packaging, etc.
5. Do you have any suggestions to improve women's participation in Debian? The mailing list for Debian Women may attract more participation if it is utilized more. It is where I started, and I imagine participation would increase if it is more engaging.

Kathara Sasikumar (kathara)

1. Who are you? I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer from India. I try to become a creative person through sketching or playing guitar chords, but it doesn't work! xD

2. How did you get introduced to Debian? When I first started college, I was that overly enthusiastic student who signed up for every club and volunteered for anything that crossed my path, just like every other fresher. But then the pandemic hit, and like many, I hit a low point. COVID depression was real, and I was feeling pretty down. Around this time, the FOSS Club at my college suddenly became more active. My friends, knowing I had a love for free software, pushed me to join the club. They thought it might help me lift my spirits and get out of the slump I was in. At first, I joined only out of peer pressure, but once I got involved, the club really took off. FOSS Club became more and more active during the pandemic, and I found myself spending more and more time with it. A year later, we had the opportunity to host a MiniDebConf at our college, where I got to meet a lot of Debian developers and maintainers; attending their talks and talking with them gave me a wider perspective on Debian, and I loved the Debian philosophy. At that time, I had been distro hopping but never quite settled down. I occasionally used Debian but never stuck around. However, after the MiniDebConf, I found myself using Debian more consistently, and it truly connected with me. The community was incredibly warm and welcoming, which made all the difference.

3. How long have you been into Debian? Now, I've been using Debian as my daily driver for about a year.

4. Are you using Debian in your daily life? If yes, how? It has become my primary distro, and I use it every day for continuous learning and working on various software projects with free and open-source tools. Plus, I've recently become a Debian Maintainer (DM) and have taken on the responsibility of maintaining a few packages. I'm looking forward to contributing more to the Debian community.

Rhonda D'Vine (rhonda)

1. Who are you? My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old, working in IT.

2. How did you get introduced to Debian? I was already looking into Linux because of university; first it was SuSE. And people played around with gtk. But when they packaged GNOME and it just didn't even install, I looked for alternatives. A working colleague from back then gave me a CD of Debian. Though I couldn't install from it because Slink didn't recognize the pcmcia drive, I had to install it via floppy disks; but apart from that it was quite well done. And the early GNOME was working, so I never looked back.

3. How long have you been into Debian? Even before I was more involved, a colleague asked me whether I could help with translating the release documentation. That was my first contribution to Debian, for the slink release in early 1999. And I was using some other software before on my SuSE systems, and I wanted to continue to use them on Debian obviously. So that's how I got involved with packaging in Debian. But I continued to help with translation work; for a long period of time I was almost the only person active for the German part of the website.
4. Are you using Debian in your daily life? If yes, how? Being involved with Debian was a big part of the reason I got into my jobs for a long time now. I always worked with maintaining Debian (or Ubuntu) systems. Privately I run Debian on my laptop, occasionally switching to Windows in dual boot when (rarely) needed.

5. Do you have any suggestions to improve women's participation in Debian? There are factors that we can't influence, like that a lot of women are pushed into care work because patriarchal structures work that way, and don't have the time nor energy to invest a lot into other things. But we could learn to appreciate smaller contributions better, and not focus so much on the quantity of contributions. When we look at longer discussions on mailing lists, those that write more mails actually don't contribute more to the discussion; they often repeat themselves without adding more substance. Through working on our own discussion patterns this could create a more welcoming environment for a lot of people.

Sophie Brun (sophieb)

1. Who are you? I'm a 44-year-old French woman. I'm married and I have 2 sons.

2. How did you get introduced to Debian? In 2004 my boyfriend (now my husband) installed Debian on my personal computer to introduce me to Debian. I knew almost nothing about Open Source. During my engineering studies, a professor mentioned the existence of Linux, Red Hat in particular, but without giving any details. I learnt Debian by using and reading (in advance) The Debian Administrator's Handbook.

3. How long have you been into Debian? I've been a user since 2004. But I only started contributing to Debian in 2015: I had quit my job and I wanted to work on something more meaningful. That's why I joined my husband in Freexian, his company. Unlike most people I think, I started contributing to Debian for my work. I only became a DD in 2021 under gentle social pressure and when I felt confident enough.

4. Are you using Debian in your daily life? If yes, how? Of course I use Debian in my professional life for almost all the tasks: from administrative tasks to Debian packaging. I also use Debian in my personal life. I have very basic needs: Firefox, LibreOffice, GnuCash and Rhythmbox are the main applications I need.

Sruthi Chandran (srud)

1. Who are you? A feminist, a librarian turned Free Software advocate and a Debian Developer. Part of Debian Outreach team and DebConf Committee.

2. How did you get introduced to Debian? I got introduced to the free software world and Debian through my husband. I attended many Debian events with him. During one such event, out of curiosity, I participated in a Debian packaging workshop. Just after that I visited a Tibetan community in India and they mentioned that there was no proper Tibetan font in GNU/Linux. Tibetan font was my first package in Debian.

3. How long have you been into Debian? I have been contributing to Debian since 2016 and Debian Developer since 2019.

4. Are you using Debian in your daily life? If yes, how? I haven't used any other distro on my laptop since I got introduced to Debian.

5. Do you have any suggestions to improve women's participation in Debian? I was involved with actively mentoring newcomers to Debian since I started contributing myself. I specially work towards reducing the gender gap inside the Debian and Free Software community in general. In my experience, I believe that visibility of already existing women in the community will encourage more women to participate.
Also I think we should reintroduce mentoring through debian-women.

Tássia Camões Araújo (tassia)

1. Who are you? Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who tries to push myself out of my comfort zone and always find something new to learn. I also love to mentor people on their learning journey. But I don't consider myself a typical geek. My challenge has always been to not get distracted by the next project before I finish the one I have in my hands. That said, I love being part of a community of geeks and feel empowered by it. I love Debian for its technical excellence, and it's always reassuring to know that someone is taking care of the things I don't like or can't do. When I'm not around computers, one of my favorite things is to feel the wind on my cheeks, usually while skating or riding a bike; I also love music, and I'm always singing a melody in my head.

2. How did you get introduced to Debian? As a student, I was privileged to be introduced to FLOSS at the same time I was introduced to computer programming. My university could not afford to have labs in the usual proprietary software model, and what seemed like a limitation at the time turned out to be a great learning opportunity for me and my colleagues. I joined this student-led initiative to "liberate" our servers and build LTSP-based labs - where a single powerful computer could power a few dozen diskless thin clients. How revolutionary it was at the time! And what an achievement! From students to students, all using Debian. Most of that group became close friends; I've married one of them, and a few of them also found their way to Debian.

3. How long have you been into Debian? I first used Debian in 2001, but my first real connection with the community was attending DebConf 2004. Since then, going to DebConfs has become a habit. It is that moment in the year when I reconnect with the global community and my motivation to contribute is boosted. And you know, in 20 years I've seen people become parents, grandparents, children grow up; we've had our own child and had the pleasure of introducing him to the community; we've mourned the loss of friends and healed together. I'd say Debian is like family, but not the kind you get at random once you're born; Debian is my family by choice.

4. Are you using Debian in your daily life? If yes, how? These days I teach at Vanier College in Montréal. My favorite course to teach is UNIX, which I have the pleasure of teaching mostly using Debian. I try to inspire my students to discover Debian and other FLOSS projects, and we are happy to run a FLOSS club with participation from students, staff and alumni. I love to see these curious young minds put to the service of FLOSS. It is like recruiting soldiers for a good battle, and one that can change their lives, as it certainly did mine.

5. Do you have any suggestions to improve women's participation in Debian? I think the most effective way to inspire other women is to give visibility to active women in our community. Speaking at conferences, publishing content, being vocal about what we do so that other women can see us and see themselves in those positions in the future. It's not easy, and I don't like being in the spotlight. It took me a long time to get comfortable with public speaking, so I can understand the struggle of those who don't want to expose themselves. But I believe that this space of vulnerability can open the way to new connections.
It can inspire trust and ultimately motivate our next generation. It's with this in mind that I publish these lines. Another point we can't neglect is that in Debian we work on a volunteer basis, and this in itself puts us at a great disadvantage. In our societies, women usually take a heavier load than their partners in terms of caretaking and other invisible tasks, so it is hard to afford the free time needed to volunteer. This is one of the reasons why I bring my son to the conferences I attend, and so far I have received all the support I need to attend DebConfs with him. It is a way to share the caregiving burden with our community - it takes a village to raise a child. Besides allowing us to participate, it also serves to show other women (and men) that you can have a family life and still contribute to Debian. My feeling is that we are not doing super well in terms of diversity in Debian at the moment, but that should not discourage us at all. That's the way it is now, but that doesn't mean it will always be that way. I feel like we go through cycles. I remember times when we had many more active female contributors, and I'm confident that we can improve our ratio again in the future. In the meantime, I just try to keep going, do my part, attract those I can, reassure those who are too scared to come closer. Debian is a wonderful community, it is a family, and of course a family cannot do without us, the women. These interviews were conducted via email exchanges in October, 2024. Thanks to all the wonderful women who participated in this interview. We really appreciate your contributions in Debian and to Free/Libre software.

14 October 2024

Dirk Eddelbuettel: RcppDate 0.0.4: New Upstream Minor

RcppDate wraps the featureful date library written by Howard Hinnant for use with R. This header-only modern C++ library has been in pretty wide-spread use for a while now, and adds to C++11/C++14/C++17 what will be (with minor modifications) the date library in C++20. This release, the first in 3 1/2 years, syncs the code with the recent date 3.0.2 release from a few days ago. It also updates a few packaging details such as URLs, badges or continuous integration.

Changes in version 0.0.4 (2024-10-14)
  • Updated to upstream version 3.0.2 (and adjusting one pragma)
  • Several small updates to overall packaging and testing

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Scarlett Gately Moore: Kubuntu 24.10 Released, KDE Snaps at 24.08.2, and I lived to tell you about it!

Happy 28th Birthday KDE!
Sorry my blog updates have been MIA. Let me tell you a story...

As some of you know, 3 months ago I was in a no-fault car accident. Thankfully, the only injury was that I ended up with a broken arm. The ER sent me home in a sling and told me it was a clean break and would mend itself in no time. After a week of excruciating pain I went to my follow-up doctor appointment, and with my x-rays in hand, the doc tells me it was far from a clean break and needs surgery. So after a week of my shattered bone scraping my nerves and causing pain I have never felt before, I finally go in for surgery! They put in a metal plate with screws to hold the bone in place so it can properly heal. The nerve pain was gone, so I thought I was on the mend. Some time goes by and the swelling still has not subsided; the doctors are not as concerned about this as I am, so I carry on, until it becomes really inflamed and develops fever blisters. After no success in reaching the doctor's office, my husband borrows the neighbor's car and rushes me to the ER. Good thing too: I had an infection. So after a 5-day stay in the hospital, they sent us home loaded with antibiotics and trained my husband in wound packing. We did everything right, kept the place immaculate, followed orders with the wound care, took my antibiotics, yet when they ran out there was still no sign of relief, or healing. Went to the doctors and they gave me another month's supply of antibiotics. Two days after my final dose my arm becomes inflamed again, with extra spectacular levels of pain to go with it. I call the doctor's office; they said to come in on my appointment day (4 days away). I asked, "You aren't concerned with this inflammation?", to which they replied, "No." Ok, maybe I am overreacting and it's all in my head; I can power through 4 more days. The following morning my husband observed fever blisters and the wound site was clearly not right, so once again off we go to the ER. Well, thankfully we did: I was in sepsis and could have died. After deliberating with the doctor on the course of action for treatment, the doctor accepted our plea to remove the plate, rather than tighten the screws and have me drive 100 miles to the hospital every day for IV antibiotics (umm, I don't have a car!?). So after another 4-day stay I am released into the world, alive and well. I am happy to report the swelling is almost gone, the pain is minimal, and I am finally healing nicely. I am still in a sling and I have to be super careful, as my arm has not fully knitted. So with that I am bummed to say: no traveling for me, no Ubuntu Summit. I still need help with that car; if it weren't for our neighbor, this story would have ended much differently. https://gofund.me/00942f47

Despite my tragic few months for my right arm, my left arm has been quite busy. Thankfully I am a lefty! On to my work progress report.

Kubuntu:
With Plasma 6! A big thank you to the Debian KDE/QT team and Rik Mills, could not have done it without you!
KDE Snaps: All release-service snaps are done, save a few problematic ones that are still WIP. I have released 24.08.2, which you can find here: https://snapcraft.io/publisher/kde. I completed the Qt 6 and KDE Frameworks 6 content packs for core24.

Snapcraft: I have a PR in for kde-neon-6 extension core24 support.

That's all for now. Thanks for stopping by!

13 October 2024

Andy Simpkins: The state of the art

A long time ago... A long time ago a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics, typically for accounting and stock / order processing. Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern-day computing was born; we had entered the age of the Digital Computer. These computers were large and consumed huge amounts of power, but were precise, and gave repeatable, verifiable results.

Over time the huge mainframe digital computers have shrunk in size, increased in performance, and consume far less power, so much so that they often didn't need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced air flow, and occasionally just convection cooling. They shrank so far, and became cheap enough, that the Personal Computer came to be, replacing the mainframe and its time-shared resources with a machine per user. Desktop or even portable laptop computers were everywhere.

We networked them together, so now we could share information around the office. A few computers were given the specialist task of being available all the time so we could share documents, or host databases; these servers were basically PCs designed to operate 24/7, usually more powerful than their desktop counterparts (or at least with faster storage and networking). Next we joined these networks together and the internet was born. The dream of a paperless office might actually become realised: we can now send email (and documents) from one organisation (or individual) to another via email. We can make our specialist computer applications available outside just the office, and web servers / web apps come of age.

Fast forward a few years and all of a sudden we need huge data-halls filled with rack-scale machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same task that we were doing previously without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume magnitudes more power, and, best of all, the results we are given may or may not be correct. Progress: less precise answers, taking longer, consuming more power, without any verification, and often giving a different result if you repeat your question. AND we still need a personal computing device to access this wondrous thing. Remind me again why we are here?

(Timelines and huge swathes of history simply ignored to make an attempted comic point; this is intended to make a point and not be scholarly work.)

Taavi Väänänen: Bulk downloading Wikimedia Commons categories

Wikimedia Commons, the Wikimedia project for freely licensed media files, also contains a bunch of photos by me and photos of me at various events. While I don't think Commons is going away anytime soon, I would still like to have a local copy of those images available on my own storage hardware. Obviously this requires some way to query for photos you want to download. I'm using Commons categories for this, since that's easy to implement and works for both use cases. The Commons community tends to come up with very specific categories that you can use, and if not, you can usually categorize the files yourself.
(Image: me replying 'shh' to a Discord message showing myself categorizing photos about me and accusing me of COI editing. Thankfully Commons has no such thing as a Conflict of Interest (COI) policy.)
There is almost an existing tool for this: Sam Wilson's mwcli project has support for exporting images one has uploaded to Commons. However, I couldn't use it to download photos of me that others have uploaded; plus it's written in PHP, and I don't exactly want to deal with the problem of figuring out how to package it in a way I could neatly install on my NAS. So I wrote my own tool for it, called comload. It's written in Python because Python is easy to deploy (I can just throw it in a .deb and upload it to my internal repository), and because I did not find a Go library to handle Action API pagination for me. The basic usage is like this:
$ comload --subcats "Taavi Väänänen"
This will download any files in Category:Taavi Väänänen and its sub-categories to the current directory. Former image versions, as well as the image description and SDC data, if any, are also included. And it's smart enough not to download any files that are already there on future runs, so you can just throw it in a systemd timer to get any future files (a sketch of such a timer follows below). I'd still like it to handle moved files without creating a duplicate copy, but otherwise I'm really happy with the current state. comload is available from PyPI and from my Git server directly, and is licensed under the GPLv3.
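As an illustration of the systemd timer idea, a minimal sketch (unit names, paths, and schedule are hypothetical):
# /etc/systemd/system/comload-sync.service
[Unit]
Description=Mirror Wikimedia Commons categories with comload

[Service]
Type=oneshot
WorkingDirectory=/srv/commons
ExecStart=/usr/bin/comload --subcats "Taavi Väänänen"

# /etc/systemd/system/comload-sync.timer
[Unit]
Description=Run comload daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
Enable it with systemctl enable --now comload-sync.timer.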

11 October 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In September, 18 contributors have been paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 7.0h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 51.75h (out of 9.25h assigned and 55.5h from previous period), thus carrying over 13.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 0.0h assigned and 10.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 20.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 4.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 26.0h assigned), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 23.5h (out of 22.25h assigned and 37.75h from previous period), thus carrying over 36.5h to the next month.
  • Guilhem Moulin did 22.25h (out of 20.0h assigned and 2.5h from previous period), thus carrying over 0.25h to the next month.
  • Lucas Kanashiro did 10.0h (out of 5.0h assigned and 15.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 6.5h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 17.5h to the next month.
  • Roberto C. Sánchez did 24.75h (out of 21.0h assigned and 3.75h from previous period).
  • Santiago Ruano Rincón did 19.0h (out of 19.0h assigned).
  • Sean Whitton did 0.75h (out of 4.0h assigned and 2.0h from previous period), thus carrying over 5.25h to the next month.
  • Sylvain Beucler did 16.0h (out of 42.0h assigned and 18.0h from previous period), thus carrying over 44.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 17.0h (out of 7.5h assigned and 9.5h from previous period).

Evolution of the situation

In September, we have released 52 DLAs. September marked the first full month of Debian 11 bullseye under the responsibility of the LTS Team, and the team immediately got to work, publishing more than 4 dozen updates. Some notable updates include ruby2.7 (denial-of-service, information leak, and remote code execution), git (various arbitrary code execution vulnerabilities), firefox-esr (multiple issues), gnutls28 (information disclosure), thunderbird (multiple issues), cacti (cross site scripting and SQL injection), redis (unauthorized access, denial of service, and remote code execution), mariadb-10.5 (arbitrary code execution), cups (arbitrary code execution).

Several LTS contributors have also contributed package updates which either resulted in a DSA (a Debian Security Announcement, which applies to Debian 12 bookworm) or in an upload that will be published at the next stable point release of Debian 12 bookworm. This list of packages includes cups, cups-filters, booth, nghttp2, puredata, python3.11, sqlite3, and wireshark. This sort of work, contributing fixes to newer Debian releases (and sometimes even to unstable), helps to ensure that upgrades from a release in the LTS phase of its lifecycle to a newer release do not expose users to vulnerabilities which have been closed in the older release.

Looking beyond Debian, LTS contributor Bastien Roucariès has worked with the upstream developers of apache2 to address regressions introduced upstream by some recent vulnerability fixes, and he has also reached out to the community regarding a newly discovered security issue in the dompurify package. LTS contributor Santiago Ruano Rincón has undertaken the work of triaging and reproducing nearly 4 dozen CVEs potentially affecting the freeimage package. The upstream development of freeimage appears to be dormant and some of the issues have languished for more than 5 years. It is unclear how much can be done without the aid of upstream, but we will do our best to provide as much help to the community as we can feasibly manage.

Finally, it is sometimes necessary to limit or discontinue support for certain packages. The transition of a release from being under the responsibility of the Debian Security Team to that of the LTS Team is an occasion where we assess any pending decisions in this area and formalize them. Please see the announcement for a complete list of packages which have been designated as unsupported.

Thanks to our sponsors Sponsors that joined recently are in bold.

10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.
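For readers who have not used it, here is a minimal sketch of the kind of type-hint-driven validation Pydantic 2.x provides; the model and field names are hypothetical, not taken from Debusine or any other Freexian project:

# Minimal Pydantic 2.x sketch; the Upload model is illustrative only.
from pydantic import BaseModel, ValidationError

class Upload(BaseModel):
    package: str
    version: str
    binary: bool = False

# Valid input is checked (and coerced) against the type hints.
upload = Upload.model_validate({"package": "pydantic-core", "version": "2.0"})
print(upload.binary)  # False

# Invalid input raises a structured ValidationError.
try:
    Upload.model_validate({"package": "pydantic-core"})
except ValidationError as exc:
    print(exc.error_count(), "validation error(s)")  # missing 'version'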

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy when attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as it tends not to be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0.

Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests, later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program built for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving the known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises of how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis. Guillem also performed a cursory analysis and reported other problem categories, such as mismatching directory permissions for directories installed by multiple packages, and thus gained a better understanding of what consistency checks dpkg can enforce.
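As a rough idea of what such an explorative analysis can look like, here is a toy sketch (not the actual dedup.debian.net tooling) that flags directories shipped with inconsistent permissions by the .deb files passed on the command line, assuming dpkg-deb is available:

# Toy consistency check: find directories shipped by more than one
# .deb with differing permission bits. Illustrative only.
import subprocess
import sys
from collections import defaultdict

dir_modes: dict[str, set[str]] = defaultdict(set)

for deb in sys.argv[1:]:
    # `dpkg-deb -c` lists contents in tar-style format, e.g.
    # "drwxr-xr-x root/root 0 2024-09-01 12:00 ./usr/share/doc/"
    listing = subprocess.run(
        ["dpkg-deb", "-c", deb], capture_output=True, text=True, check=True
    ).stdout
    for line in listing.splitlines():
        fields = line.split()
        mode, path = fields[0], fields[-1]
        if mode.startswith("d"):
            dir_modes[path].add(mode)

for path, modes in sorted(dir_modes.items()):
    if len(modes) > 1:
        print(f"{path}: inconsistent modes {sorted(modes)}")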

Python archive rebuilds, by Stefano Rivera Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python.

During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were still expecting it to be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake).

A new feature in Python 3.12 is that unittest's test runner exits with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are accidentally not being discovered. We are currently ignoring this failure, as we wouldn't want to suddenly cause hundreds of packages without tests to fail to build. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this.

As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests, so transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
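The unittest change is easy to observe directly. The sketch below (an illustration, not Stefano's rebuild tooling) runs test discovery against an empty directory and prints the exit code; assuming current CPython behaviour, this is 5 on Python 3.12 and 0 on earlier versions:

# Observe the Python 3.12 "no tests ran" exit-code behaviour.
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as empty_dir:
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", empty_dir],
        capture_output=True,
    )
    # Non-zero (5 in CPython 3.12) when no tests were discovered;
    # 0 on Python 3.11 and earlier.
    print("exit code with no tests:", proc.returncode)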

Miscellaneous contributions
  • Carles prepared the update of the python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. The work is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its own dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming to have it functional end to end for a lightning talk at MiniDebConf Toulouse (November) and to get feedback on this proof of concept from the wider public.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries from the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) that packages with the too-many-contacts Lintian error be auto-rejected, and he raised the severity of #1076048 to serious to actually have that four-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild, reviewing and improving an MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt and fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap progresses a lot further, but still fails for every architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix the autopkgtest and a dependency problem of this package. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded the first release made under the new maintainership.
  • Anupa published posts on the Debian Administrators group in LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

7 October 2024

Reproducible Builds: Reproducible Builds in September 2024

Welcome to the September 2024 report from the Reproducible Builds project! Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. New binsider tool to analyse ELF binaries
  2. Unreproducibility of GHC Haskell compiler 95% fixed
  3. Mailing list summary
  4. Towards a 100% bit-for-bit reproducible OS
  5. Two new reproducibility-related academic papers
  6. Distribution work
  7. diffoscope
  8. Other software development
  9. Android toolchain core count issue reported
  10. New Gradle plugin for reproducibility
  11. Website updates
  12. Upstream patches
  13. Reproducibility testing framework

New binsider tool to analyse ELF binaries Reproducible Builds developer Orhun Parmaksız has announced a fantastic new tool to analyse the contents of ELF binaries. According to the project's README page:
Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface!
More information about Binsider's features and how it works can be found within Binsider's documentation pages.

Unreproducibility of GHC Haskell compiler 95% fixed A seven-year-old bug about the nondeterminism of object code generated by the Glasgow Haskell Compiler (GHC) received a recent update, consisting of Rodrigo Mesquita noting that the issue is:
95% fixed by [merge request] !12680 when -fobject-determinism is enabled. [ ]
The linked merge request has since been merged, and Rodrigo goes on to say that:
After that patch is merged, there are some rarer bugs in both interface file determinism (eg. #25170) and in object determinism (eg. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.

Mailing list summary On our mailing list this month:
  • Fay Stegerman let everyone know that she started a thread on the Fediverse about the problems caused by unreproducible zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.
  • Long-time developer kpcyrd wrote that there has been a recent public discussion on the Arch Linux GitLab [instance] about "the challenges and possible opportunities for making the Linux kernel package reproducible", all relating to the CONFIG_MODULE_SIG flag. [ ]
  • Bernhard M. Wiedemann followed up on an in-person conversation at our recent Hamburg 2024 summit about the potential presence of Reproducible Builds in recognised standards. [ ]
  • Fay Stegerman also wrote about her worry about the possible repercussions for RB tooling of Debian migrating from zlib to zlib-ng as reproducibility requires identical compressed data streams. [ ]
  • Martin Monperrus wrote to the list announcing the latest release of maven-lockfile, which is designed to aid building Maven projects "with integrity". [ ]
  • Lastly, Bernhard M. Wiedemann wrote about the potential role of reproducible builds in combating silent data corruption, as detailed in a recent Tweet and scholarly paper on faulty CPU cores. [ ]

Towards a 100% bit-for-bit reproducible OS Bernhard M. Wiedemann began writing about his journey towards a 100% bit-for-bit reproducible operating system on the openSUSE wiki:
This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE's] minimalVM image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.
This work was sponsored by NLnet's NGI Zero fund.

Distribution work In Debian this month, 14 reviews of Debian packages were added, 12 were updated and 20 were removed, all adding to our knowledge about identified issues. A number of issue types were updated as well. [ ][ ] In addition, Holger opened 4 bugs against the debrebuild component of the devscripts suite of tools. In particular:
  • #1081047: Fails to download .dsc file.
  • #1081048: Does not work with a proxy.
  • #1081050: Fails to create a debrebuild.tar.
  • #1081839: Fails with E: mmdebstrap failed to run error.
Last month, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest's build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again. In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading version 278 to Debian:
  • New features:
    • Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to fuzzy-match away the leading directory. [ ]
  • Bug fixes:
    • Drop removal of calculated os.path.basename from GNU readelf output. [ ]
    • Correctly invert "X% similar" value and do not emit "100% similar". [ ]
  • Misc:
    • Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (via #1057532). (#1082636)
    • Update copyright years. [ ]
For trydiffoscope, the command-line client for the web-based version of diffoscope, Chris Lamb also:
  • Added an explicit python3-setuptools dependency. (#1080825)
  • Bumped the Standards-Version to 4.7.0. [ ]

Other software development disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues; a small illustration of this kind of non-determinism follows the changelog below. This month, version 0.5.11-4 was uploaded to Debian unstable by Holger Levsen, making the following changes:
  • Replace build-dependency on the obsolete pkg-config package with one on pkgconf, following a Lintian check. [ ]
  • Bump Standards-Version field to 4.7.0, with no related changes needed. [ ]
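To make the class of problem disorderfs targets concrete, here is a small Python illustration (not part of disorderfs, which works at the FUSE layer): hashing a directory tree in on-disk readdir order is not reproducible across filesystems, while sorting the entries first is.

# Hash a directory tree; only the sorted traversal is deterministic.
import hashlib
import os

def digest_tree(root: str, sort_entries: bool) -> str:
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        if sort_entries:
            # Sorting in place makes os.walk visit subdirs in order too.
            dirnames.sort()
            filenames.sort()
        for name in filenames:
            path = os.path.join(dirpath, name)
            h.update(path.encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

# The unsorted digest depends on readdir order, which disorderfs
# deliberately randomizes to expose exactly this kind of bug.
print(digest_tree(".", sort_entries=True))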

In addition, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, version 0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13 [ ].
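The change itself is essentially a one-function swap, since shlex.quote is the drop-in replacement for pipes.quote (the pipes module was deprecated by PEP 594 and removed in Python 3.13):

# pipes.quote and shlex.quote both escape a string for safe use on a
# shell command line; only the shlex spelling survives in Python 3.13.
import shlex

# Before (Python <= 3.12):  from pipes import quote
# After:                    from shlex import quote
unsafe = "file name; rm -rf /"
print(shlex.quote(unsafe))  # => 'file name; rm -rf /'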

Android toolchain core count issue reported Fay Stegerman reported an issue with the Android toolchain where a part of the build system generates a different classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:
We've rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:
  • With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef….
  • With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760… instead.
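A minimal version of the comparison Fay describes above needs nothing more than hashlib; the file paths are placeholders for the unsigned APKs produced by each build:

# Hash the unsigned APKs from several builds and check they match.
import hashlib
import sys

def sha256sum(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: python3 check.py build-2cores.apk build-4cores.apk ...
digests = {path: sha256sum(path) for path in sys.argv[1:]}
for path, digest in digests.items():
    print(digest, path)
if len(set(digests.values())) > 1:
    print("NOT reproducible: builds differ")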

New Gradle plugin for reproducibility A new plugin for the Gradle build tool for Java has been released. This easily-enabled plugin results in:
reproducibility settings [being] applied to some of Gradle's built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.

Website updates There were a rather substantial number of improvements made to our website this month, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:
  • Debian-related changes:
    • Upgrade the osuosl4 node to Debian trixie in anticipation of running debrebuild and rebuilderd there. [ ][ ][ ]
    • Temporarily mark the osuosl4 node as offline due to ongoing xfs_repair filesystem maintenance. [ ][ ]
    • Do not warn about (very old) broken nodes. [ ]
    • Add the riscv64 architecture to the multiarch version skew tests for Debian trixie and sid. [ ][ ][ ]
    • Mark the virt32b and virt64b nodes as down. [ ]
  • Misc changes:
    • Add support for powercycling OpenStack instances. [ ]
    • Update fail2ban to ban hosts for 4 weeks in total [ ][ ] and take care to never ban our own Jenkins instance. [ ]
In addition, Vagrant Cascadian recorded a disk failure for the virt32b and virt64b nodes [ ], performed some maintenance of the cbxi4a node [ ][ ] and marked most armhf architecture systems as being back online.

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

4 October 2024

Bits from Debian: Debian welcomes Freexian as our newest partner!

We are excited to announce and welcome Freexian into Debian Partners. Freexian specializes in Free Software with a particular focus on Debian GNU/Linux. Freexian can assist with consulting, training, technical support, packaging, or software development on projects involving use or development of Free software. All of Freexian's employees and partners are well-known contributors in the Free Software community, a choice that is integral to Freexian's business model. About the Debian Partners Program The Debian Partners Program was created to recognize companies and organizations that help and provide continuous support to the project with services, finances, equipment, vendor support, and a slew of other technical and non-technical services. Partners provide critical assistance, help, and support which has advanced and continues to further our work in providing the 'Universal Operating System' to the world. Thank you Freexian!

1 October 2024

Colin Watson: Free software activity in September 2024

Almost all of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay.

Pydantic My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I've used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Learning Rust is on my to-do list, but merely not knowing a language hasn't stopped me before. So I learned how the Debian Rust team's packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages, I was eventually able to get current versions of both pydantic-core and pydantic into testing. I'm looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie!

OpenSSH I upgraded the Debian packaging to OpenSSH 9.9p1.

YubiHSM I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions. I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I'd previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests. I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly. Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.

Python team setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious, as packaging helpers sometimes fell back to different ways of running test suites that didn't quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid. As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits. I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted. buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately, by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick, since they haven't made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3. Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway. On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request. I upgraded some embedded CSS files in nbconvert. I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.

debmirror The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).

29 September 2024

Dirk Eddelbuettel: RApiSerialize 0.1.4 on CRAN: Added C++ Namespace

A new minor release 0.1.4 of RApiSerialize arrived on CRAN today. The RApiSerialize package is used by both my RcppRedis package and by Travers' excellent qs package. This release adds an optional C++ namespace, available when the API header file is included in a C++ source file. And as one often does, the release also brings a few small updates to different aspects of the packaging.

Changes in version 0.1.4 (2024-09-28)
  • Add C++ namespace in API header (Dirk in #9 closing #8)
  • Several packaging updates: switched to Authors@R, README.md badge updates, added .editorconfig and cleanup

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More details are at the RApiSerialize page; code, issue tickets, etc. are at the GitHub repository. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible Builds: Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler. Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.

Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on? Kees Cook: I'm a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel's protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened, the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project, because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible; people who understand the ARM architecture, people who understand the memory management subsystem, people who understand how to make the kernel less buggy.
Vagrant: Could you describe the path that led you to working on this sort of thing? Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do, and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo's security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian's package build process at the time was sort of a chaotic free-for-all, as there wasn't a centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top-down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work, as they were based on ebuild out of Gentoo, and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an upstream-first requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic, I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget, I tried to get folks paid (or hired) to work on these areas when it wasn't already their job.
Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler? Kees: I've found the term "undefined behavior" to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating "unexpected behavior" or "ambiguous language constructs". At the end of the day, ambiguity leads to bugs, and bugs lead to exploitable security flaws. I've been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identifying new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints. None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades, but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest-hanging fruit. One clear example in recent years was the elimination of C's implicit fall-through in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn't present. But this is ambiguous: is the code meant to fall through, or did the author just forget a break statement? By defining the [[fallthrough]] statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we actually found that 1 in 10 added [[fallthrough]] statements were actually missing break statements. This was an extraordinarily common bug! So getting rid of that ambiguity is where we have been. Another area I've been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can't say "because we installed locks on the doors, 20% fewer break-ins have happened." Much of our signal is always secondary or retrospective, which is frustrating: this class of flaw was used X much over the last decade, so if we have eliminated that class of flaw and will never see it again, what is the impact? Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.
Vagrant: So it is hard to identify how effective this is; how bad would it be if people just gave up? Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else are doing for their respective software ecosystems has shown that the price of functional exploits in the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution. If those were cheap then obviously we are not doing something right, and it becomes clear that it's trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up, into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can't accept. The safety of my grandparents shouldn't be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices, otherwise why use them at all?
Vagrant: What has been your biggest success in recent years? Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people's work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board, even though there were a lot of fears about forking the C language. The worry was that developers would come to depend on zero-initialized stack variables, but this hasn't been the case, because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time, but now you can count on the contents of your stack at run-time and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.
Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work? Kees: One of the questions I frequently get is, "What version of the Linux kernel has feature $foo?" If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc. all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, just being able to trust that the binaries you are running are going to behave the same for the same build environment is critical for sane testing.
Vagrant: Have you used diffoscope? Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source-level changes to satisfy some new compiler requirement, but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening, what is intended in the cases where the old ambiguity does actually match the new unambiguous description of what is intended. The binary shouldn't change. We have used diffoscope to compare the before and after binaries to confirm that "yep, there is no change in binary".
Vagrant: You cannot just use checksums for that? Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there's flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!
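For the curious, a text-segment-only comparison can be approximated with objcopy and a hash. This is a simplified stand-in for the workflow Kees describes, not diffoscope itself, and the binary names are placeholders:

# Extract only the .text section of each binary and compare digests.
import hashlib
import subprocess
import tempfile

def text_section_digest(binary: str) -> str:
    with tempfile.NamedTemporaryFile() as out:
        # objcopy writes the raw bytes of the .text section to out.name.
        subprocess.run(
            ["objcopy", "-O", "binary", "--only-section=.text",
             binary, out.name],
            check=True,
        )
        with open(out.name, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

# "vmlinux.before"/"vmlinux.after" are hypothetical build artifacts.
same = text_section_digest("vmlinux.before") == text_section_digest("vmlinux.after")
print("text segments identical:", same)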
Vagrant: Where has reproducible builds affected you? Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor and just being able to ask the question "is my build environment running the expected code?" and to be able to compare the output generated from one install that never had a vulnerable XZ and one that did have a vulnerable XZ and compare the results of what you get. That was important for kernel builds because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn't finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.
Vagrant: What do you want to see for the near or distant future in security work? Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding maintain a reproducible kernel build CI. There have been places where certain architecture configurations or certain build configurations lose reproducibility, and right now we have sort of a standard open source development feedback loop where those things get fixed, but the time between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.
Vagrant: Well, thanks for that! Any last closing thoughts? Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.
Vagrant: Likewise for your work!


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

28 September 2024

Dave Hibberd: EuroBSDCon 2024 Report

This year I attended EuroBSDCon 2024 in Dublin. I always appreciate an excuse to head over to Ireland, and this seemed like a great chance to spend some time in Dublin and learn new things. Due to constraints on my time I didn't go to the two-day devsummit that precedes the conference, only the main event itself.

The Event EuroBSDCon was attended by about 200-250 people, the hardcore of the BSD community! Attendees came from all over, I met Canadians, USAians, Germanians, Belgians and Irelandians amongst other nationalities! The event was at UCD Dublin, which is a gorgeous university campus about 10km south of Dublin proper in Stillorgan. The speaker hotel was a 20 minute walk (at my ~9min/km pace) from the venue, or a quick bus journey. It was a pleasant walk, through the leafy campus and then along some pretty broad pavements, albeit beside a dual carriageway. The cycle infrastructure was pretty excellent too, but I sadly was unable to lease a city bike and make my way around on 2 wheels - Dublinbikes don't extend that far out of the city. Lunch each day was Irish-themed food - Saturday was beef stew (a Frenchman asked me what it was called - his only equivalent words were "Beef Bourguignon") and Sunday was Bangers & Mash! The kitchen struggled a bit - food was brought out in bowls in waves, and that ensured there was artificial scarcity that clearly left anxiety for some that they weren't going to be fed! Everyone I met was friendly from the day I arrived, and that set me very much at ease and made the event much more enjoyable - things are better shared with others. Big shout out to dch and Blake Willis for spending a lot of time talking to me over the weekend!

Talks I Attended

Keynote: Evidence based Policy formation in the EU Tom Smyth This talk given by Tom Smyth was an interesting look into his work with EU Policymakers in ensuring fair competition for his small, Irish ISP. It was an enlightening look into the workings of the EU and the various bodies that set, and manage policy. It truly is a complicated beast, but the feeling I left with was that there are people all through the organisation who are desperate to do the right thing for EU citizens at all costs. Sadly none of it is directly applicable to me living in the UK, but I still get to have a say on policy and vote in polls as an Irish citizen abroad.

10(ish) years of FreeBSD/arm64 Andrew Turner I have been a fan of ARM platforms for a long, long time. I had an early ARM Chromebook and have been equal measures excited and frustrated by the Raspberry Pi since first contact. I tend to find other ARM people at events and this was no exception! It was an interesting view into one person's dedication to making arm64 a platform for FreeBSD, starting out with no documentation or hardware to becoming a first-class platform. It's interesting to see the roadmap and things upcoming too and makes me hopeful for the future of arm64 in various OSes!

1-800-RC(8)-HELP: Dial Into FreeBSD Service Scripts Mastery! Mateusz Piotrowski rc scripts and startup applications scare me a bit. I'm better at systemd units than sysvinit scripts, but that isn't really a transferable skill! This was a deep dive into lots of the functionality that FreeBSD's RC offers, and highlighted things that I only thought were limited to Linux's systemd. I am much more aware of what it's capable of now, but I'm still scared to take it on! Afterwards I had a great chat in the hallway with Mateusz about our OSs' different approaches to this problem and was impressed with the pragmatic view he had on startup, systemd, rc and the future!

Package management without borders. Using Ravenports on multiple BSDs Michael Reim Ports on the BSDs interest me, but I hadn't realised that outside of each major BSD's collection there were other, cross-platform ports collections offered. Ravenports is one of these under development, and it was good to understand the how, why and what's-happening of the system. Plus, with my hibbian obsession with building other people's software as my own packages, it's interesting to see how others are doing it!

Building a Modern Packet Radio Network using Open Software me I spoke for 45 minutes to share my passion and frustration for amateur radio, packet radio, the law, the technology and what we're doing in the UK Packet network. This was a lot of fun - it felt like I had a busy room, lots of people interested in the stupid stuff I do with technology and I had lots of conversations after the fact about radio, telecoms, networking and at one point was cornered by what I describe as the Erlang Mafia to talk about how they could help!

Hacking - 30 years ago Walter Belgers This unrecorded talk looked at the history of the Dutch hacker scene, and a young group of hackers' explorations of the early internet before modern security was a thing. It was exciting, enrapturing, well presented and a great story of a well-spent youth in front of computers.

Social By 1730 I was pretty drained so I took myself back to the hotel missing the last talks, had some down time, and got the DART train to the social event at Brewdog. This involved about an hour's walking and some train time, and that was a nice time to reset my head and just watch the world. The train I was on had a particularly interesting feature where when the motors were not loaded (slowdown or coasting) the lights slowly flickered dim-bright-dim. I don't know if this is across the fleet or just this one, but it was fun to pontificate as I looked out the window at South Dublin passing by. The social was good - a few beer tokens (cider in my case, trying to avoid beer-driven hangovers still), some pleasant junk food and plenty of good company to talk to, lots of people wanting to talk about radio and packet to me! Brewdog struggled a bit - both in bar speed (a linear queue formed despite the staff preferring the crowd-around method of queue) and buffet food appeared in somewhat disjointed waves, meaning that people loitered around the food tables and cleared the plates of wings, sliders, fries, onion rings, mac & cheese as they appeared 4-5 plates at a time. Perhaps a few hundred hungry bodies was a bit too much for them to feed at once. They had shuffleboard that was played all night by various groups! I caught the last bus home, which was relatively painless!

Is our software sustainable? Kent Inge Fagerland Simonsen This was an interesting look into reducing the footprint of software to make it a net benefit. Lots of examples of how little changes can barrel up to big, gigawatt-hour changes when aggregated over the entire install base of Android or iOS!

A Packet's Journey Through the OpenBSD Network Stack Alexander Bluhm This was an analysis of what happens at each stage of networking in OpenBSD and was pretty interesting to see. Lots of it was out of my depth, but it's cool to get an explanation and appreciation for various elements of how software handles each packet that arrives, and the differences in the IPv4 and IPv6 stacks!

FreeBSD at 30 Years: Its Secrets to Success Kirk McKusick This was a great statistical breakdown of FreeBSD since inception, including top committers, why certain parts of the system and community work so well and what has given it staying power compared to some projects on the internet that peter out after just a few years! Kirk's excitement and passion for the project really shone through, and I want to read his similarly titled article in the FreeBSD Journal now!

Building an open native FreeBSD CI system from scratch with lua, C, jails & zfs Dave Cottlehuber Dave spoke pretty excitedly about his work on a CI system using tools that FreeBSD ships with, and introduced me to the integration of C and Lua which I wasn't fully aware of before. Or I was, and my brain forgot it! With my interest in software build this year, it was quite a timely look at how others are thinking of doing things (I am doing similar stuff with zfs!). I look forward to playing with it when it finally is released to the Real World!

Building an Appliance Allan Jude This was an interesting look into the tools that FreeBSD provides which can be used to make immutable, appliance OSes without too much overhead. Fail safe upgrades and boots with ZFS, running approved code with secure boot, factory resetting and more were discussed! I have had thoughts around this in the recent past, so it was good to have some ideas validated, some challenged and gave me food for thought.

Experience as a speaker I really enjoyed being a speaker at the event! I've spoken at other things before, but this really was a cut above. The event having money to provide me a hotel was a really welcome surprise, and also receiving a gorgeous scarf as a speaker gift was a great surprise (and it has already been worn with the change of temperature here in Scotland this week!). I would definitely consider returning, either as an attendee or as a speaker. The community of attendees were pragmatic, interesting, engaging and welcoming, the organising committee were spot-on in their work making it happen and the whole event, while turning my brain to mush with all the information, was really enjoyable and I left energised and excited by things instead of ground down and tired.

26 September 2024

Melissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone! The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year's event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year's edition and this blog post summarizes the experience and its fruits. One of the highlights of this year's hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remotely). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here. The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement. Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.
  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.
Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.
  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the run-time schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.
Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.
  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.
  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists. With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discuss Color and HDR across Linux Display stack layers.

Color/HDR (Kernel-Level) Harry Wentland (AMD) led this session. Here, kernel developers shared the Color Management pipelines of AMD, Intel and NVidia. We had diagrams and explanations from hardware vendors' developers that discussed differences, constraints and paths to fit them into the KMS generic color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware. Upstream work related to this session:

Color/HDR (Compositor-Level) Sebastian Wick (RedHat) led this session. It started with Sebastian's presentation covering Wayland color protocols and compositor implementation. Also, an explanation of APIs provided by Wayland and how they can be used to achieve better color management for applications and discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria. Upstream work related to this session:

Color/HDR (Use Cases and Testing) Christopher Cameron (Google) and Melissa Wen (Igalia) led this session. In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections of real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and somehow emulate HDR. We also discussed the usage of a kernel background color property. Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency Mario Limonciello (AMD) led this session. Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power saving policy. This work was already merged in the kernel and the userspace support is under development. Upstream work related to this session:

Strategy for video and gaming use-cases Leo Li (AMD) led this session. Miguel Casas (Google) started this session with a presentation of Overlays in Chrome/OS Video, explaining the main goal of power saving by switching off the GPU for accelerated compositing and the challenges of different colorspaces/HDR for video on Linux. Then Leo Li presented different strategies for video and gaming, and we discussed userspace's need for more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugFS interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API Xaver Hugl (KDE/BlueSystems) led this session. Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads, and if a modeset takes too long, the thread will be killed and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them. Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties. For example, color management updates take much longer. In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more and beer We also had sessions to discuss a new KMS API to mitigate headaches on VRR and Frame Limit as different brightness level at different refresh rates, abrupt changes of refresh rates, low frame rate compensation (LFC) and precise timing in VRR more. On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need of a place where compositor developers can post TODOs to be developed by KMS people. The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamic mux switching display signal between discrete and integrated GPUs. In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display wish list , which includes: improvements to VTTY and HDR switching, better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling sematics, and wayland protocols for advertising to clients which colorspaces are supported. We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, addressing issues such as flickering when color primaries don t match, etc. After three days of intensive discussions, all in-person participants went to a guided tour at the Museum of Extrela Galicia beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions Participants provided valuable feedback on the hackfest, including suggestions for future improvements.
  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest We can t help but thank the 40 participants, who engaged in-person or virtually on relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda. A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limoncello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topic and guiding discussions. My acknowledge to the others in-person participants that made such an effort to travel to A Coru a: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was and awesome opportunity to meet you and chat face-to-face. Finally, thanks virtual participants who couldn t make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable inputs even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo (Red Hat), Jiri Koten (RedHat), Jonas dahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel D nzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD). We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

25 September 2024

Russell Coker: The PiKVM

Hardware I have just set up a PiKVM; here's the Amazon link for the KVM hardware (case, Pi hat, etc.) and here's an Amazon link for a Pi4 to match. The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It's really convenient being able to change the playback speed of such a video, from low speeds (like 1/4 of the original) up to double speed. One thing to note is that there are some revisions to the hardware that aren't covered in the videos; the device I received had some improvements that made it easier to assemble which weren't in the video. When you buy the device and Pi you also need to get an SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse, which MUST NOT be USB-C to USB-C! When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn't work for reasons I don't understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it. The system has a bright OLED display to show the IP address and some other information, which is very handy. The hardware is easy enough for a 12yo to set up. The construction of the parts is solid and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn't test. I definitely recommend this.

Software This is the download link for the RaspberryPi images for the PiKVM [3]. The v3 image matches the hardware from the Amazon link I provided. The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it's using and ssh in to change the password (it is configured to allow ssh as root with password authentication). If you get the kit to assemble yourself (as opposed to buying an already assembled unit) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can't get the IP address without plugging in a monitor via a micro-HDMI to HDMI cable or having access to the DHCP server logs.
rw    # remount the root filesystem read-write (it ships mounted read-only)
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown    # OLED display plus its reboot/shutdown screens
systemctl enable --now kvmd-fan    # fan control service
ro    # remount the root filesystem read-only again
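If you go the DHCP-server route instead, spotting the PiKVM's lease is quick. A minimal sketch, assuming a dnsmasq DHCP server and that the device still uses its default pikvm hostname (adjust both for your setup):
grep -i pikvm /var/lib/misc/dnsmasq.leases    # dnsmasq's default lease file on Debian
journalctl -u dnsmasq | grep DHCPACK          # or watch the log for the new DHCPACK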
The default webadmin username/password is admin/admin. To change the passwords run the following commands:
rw                         # remount the root filesystem read-write
kvmd-htpasswd set admin    # set a new password for the web UI admin user
passwd root                # set a new password for the root Unix account
ro                         # remount read-only again
It is configured to have the root filesystem mounted read-only, which is something I thought had gone out of fashion decades ago. I don't think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot. By default it uses a self-signed SSL certificate, so with a Chrome based browser you get an error when you connect where you have to select "Advanced" and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get an SSL certificate to use on an internal view of your DNS to make it work normally with SSL.

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely, lsusb on the machine being managed reports only a single USB device entry for it, covering both keyboard and mouse.

Managing Computers For a tower PC, disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the OTG port on the KVM, and then it should all just work. For a laptop, connect the HDMI port to the HDMI input on the KVM and a regular USB port (not USB-C) to the OTG port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo, or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation; Linux will then follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn't appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can't be simulated externally. So to use this for working on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.

It is possible to configure the Linux kernel to mirror the display to external HDMI and an internal laptop screen. But this doesn't seem useful to me, as the use cases for this device don't require it. If you are using it for a server that doesn't have iDRAC/ILO or other management hardware there will be no other monitor and all the output will go through the only connected HDMI device. My main use for it in the near future will be supporting remote laptops when Linux has a problem on boot, as an easier option than talking someone through Linux commands; for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings (a sketch of this follows below). This configuration option is decent for the case where a fixed set of monitors is used, but not so great if your requirement is "display a login screen on anything that's available". Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
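A minimal sketch of that monitors.xml copy, assuming a Debian system where gdm3 runs as the Debian-gdm user with /var/lib/gdm3 as its home directory (check both on your system), run as root on the machine being managed:
# copy a user's saved monitor layout so the gdm3 greeter uses the same settings
install -d -o Debian-gdm -g Debian-gdm /var/lib/gdm3/.config
install -m 644 -o Debian-gdm -g Debian-gdm /home/user/.config/monitors.xml /var/lib/gdm3/.config/monitors.xml
Here /home/user stands in for whichever user's GNOME session had the desired monitor settings.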
Conclusion The PiKVM is a well engineered and designed product that does what's expected at a low price. There are lots of minor issues with using it which aren't the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.
The feature for connecting to an ATX PSU was unexpected and could be really handy for some people; it's not something I have an immediate use for, but it is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package, giving the user choices about how to use it; many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive. The web UI wasn't as user friendly as it might have been, but it's a lot better than iDRAC, so I don't have a serious complaint about it. It would be nice if there was an option for creating macros of keyboard scancodes so I could try to emulate the Fn options and the volume control keys on systems that support them.

Melissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone! The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year's event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year's edition, and this blog post summarizes the experience and its fruits. One of the highlights of this year's hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remote). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here. The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, leaving everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.
  • Structured Agenda and Timeboxes: Each session had a defined scope and time limit (1h20 or 2h10), and began with an introductory talk on the topic.
  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.
Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.
  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the running schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event's Matrix room.
Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.
  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.
  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists. With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction, and for reducing latency and power consumption when using the hardware's color blocks.

Color Management/HDR Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the layers of the Linux Display stack.

Color/HDR (Kernel-Level) Harry Wentland (AMD) led this session. Here, kernel developers shared the color management pipelines of AMD, Intel and NVIDIA hardware. We had diagrams and explanations from HW vendors' developers, who discussed the differences and constraints, and the paths to fit them into the generic KMS color management properties: advertising modeset needs, IN_FORMATS, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware. Upstream work related to this session:

Color/HDR (Compositor-Level) Sebastian Wick (Red Hat) led this session. It started with Sebastian's presentation covering Wayland color protocols and compositor implementation. He also explained the APIs provided by Wayland and how they can be used to achieve better color management for applications, and there were discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria. Upstream work related to this session:

Color/HDR (Use Cases and Testing) Christopher Cameron (Google) and Melissa Wen (Igalia) led this session. In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections about real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think about HDR. This presentation and the Q&A were important to put participants on the same page about how to transition between SDR and HDR and how to emulate HDR. We also discussed the usage of a kernel background color property. Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency Mario Limonciello (AMD) led this session. Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussion, we agreed on exposing a kernel property for power saving policy. This work has already been merged into the kernel, and the userspace support is under development. Upstream work related to this session:

Strategy for video and gaming use-cases Leo Li (AMD) led this session. Miguel Casas (Google) started with a presentation on overlays in ChromeOS video, explaining the main goal of saving power by switching off the GPU for accelerated compositing, and the challenges posed by different colorspaces/HDR for video on Linux. Then Leo Li presented different strategies for video and gaming, and we discussed userspace's need for more detailed feedback mechanisms to understand failures when offloading. Creating a debugfs interface also came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API Xaver Hugl (KDE/BlueSystems) led this session. Compositor developers raised some issues with real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads: if a modeset takes too long, the thread is killed, and thus the compositor with it. Also, simple page flips take longer than expected, and drivers should optimize them. Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult for drivers to predict, since it varies greatly with the type of properties; for example, color management updates take much longer. In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but it might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more and beer We also had sessions to discuss a new KMS API to mitigate headaches with VRR and frame limiting, such as different brightness levels at different refresh rates, abrupt refresh-rate changes, low frame rate compensation (LFC) and precise timing in VRR. On Display Control we discussed features missing from the current KMS interface: HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people. The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamically mux-switching the display signal between discrete and integrated GPUs.

In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a "Linux Display wish list", which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported. We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output), HDR video offloading (dma-heaps), plane-based scrolling for web pages, color management / HDR / ICC profiles support, and addressing issues such as flickering when color primaries don't match.

After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrela Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions Participants provided valuable feedback on the hackfest, including suggestions for future improvements.
  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and breaks for mental refreshment, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest We can't help but thank the 40 participants, who engaged, in person or virtually, in relevant discussions for a collaborative evolution of the Linux display stack, and who built an insightful agenda. A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (Red Hat) and Xaver Hugl (KDE/BlueSystems) for their effort in preparing the sessions, explaining the topics and guiding the discussions. My acknowledgment to the other in-person participants who made the effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face. Finally, thanks to the virtual participants who couldn't make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable input even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo, Jiri Koten (Red Hat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD). We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!
