Search Results: "eric"

6 November 2024

Jaldhar Vyas: Making America Great Again

Justice For Peanut
Some interesting takeaways (with the caveat that exit polls are not completely accurate and we won't have the full picture for days):
  • President Trump seems to have won the popular vote, which I believe no Republican has done since Reagan.
  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue). There is a noticeable divide, but it is single versus married, not women versus men per se.
  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.
  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.
  • Republicans have taken the Senate and, if trends continue as they are, will retain control of the House of Representatives.
  • President Biden may have actually been a better candidate than Border Czar Harris.

2 November 2024

Russell Coker: What is a Workstation?

I recently had someone describe a Mac Mini as a "workstation", which I strongly disagree with. The Wikipedia page for Workstation [1] says that it's a type of computer designed for scientific or technical use, for a single user, and that it would commonly run a multi-user OS. The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is "for scientific or technical use". A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work, but I believe that the low capabilities of the system and the lack of expansion options make it less of a workstation.
The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison, the HP ML110 Gen9 workstation I'm currently using was released in 2021 and has 256G of RAM and 4 * 3.5" SAS bays, so I could easily put in a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM, 4 * 2.5" SATA drive bays, and 2 * 3.5" SATA drive bays. Previously I had a Dell PowerEdge T320, which was released in 2012 and had 96G of RAM and 8 * 3.5" SAS bays.
In CPU and GPU power the recent Mac Minis will compare well to my latest workstations, but they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task: if you have to do calculations on 80G of data with lots of scans through the entire data set, then a system with 64G of RAM will perform very poorly, while a system with 96G and a CPU less than half as fast will perform better, because the smaller system has to keep paging parts of the data in from storage. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many tasks because of this, and the T420 supported up to 384G.
Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power-hungry GPU, and I think all the T320 and T420 models with high-power PSUs supported that.
I think that a usable definition of a workstation is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk, but that's the only workstation criterion it fits. I think that ECC RAM should be a mandatory criterion, and any system without it isn't a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin client than a workstation.
My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week. If 32G of non-ECC RAM is considered enough for a workstation, then you could get an Android phone that counts as a workstation, and it will probably cost less than a Mac Mini.

27 October 2024

Enrico Zini: Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step by step explanation for what is going on in code that I'm about to push to my team.
Let's take this relatively straightforward python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:
from unittest import mock
default = 42
def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works nicely as expected:
$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None
It lacks functools.wraps and typing, though. Let's add them.
Adding functools.wraps
After adding a simple @functools.wraps, mock unexpectedly stops working:
$ python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'
This is the new code, with explanations and a fix:
# Introduce functools
import functools
from unittest import mock
default = 42
def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    # Fix:
    # del wrapped.__wrapped__
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.signature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()
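To make the failure mode described in the comments above concrete, here is a minimal stand-alone sketch (not part of the original post): inspect.signature follows __wrapped__ by default, so the wrapper's optional value disappears from the reported signature:
import functools
import inspect

default = 42

def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        return f(self, default if value is None else value)
    return wrapped

class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)

# By default inspect.signature follows __wrapped__ (set by functools.wraps)
# and reports the original signature, where value is required:
print(inspect.signature(Fiddle.print))
# -> (self, value)
# The wrapper's real signature, where value is optional, only shows up when
# signature() is told not to follow __wrapped__:
print(inspect.signature(Fiddle.print, follow_wrapped=False))
# -> (self, value=None)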
Adding typing
For simplicity, from now on let's change Fiddle.print to match its wrapped signature:
      # Give up on making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)
Typing with ParamSpec
# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock
default = 42
P = ParamSpec("P")
def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:
test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
Typing with Callable
We can use explicit Callable argument lists:
# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock
default = 42
A = TypeVar("A")
# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:
test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]
typing's documentation says:
Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:
Let's do that!
Typing with Protocol, take 1
# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
New mypy complaints:
test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
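These errors come from Python's descriptor machinery, which the next paragraph explains. As a minimal stand-alone illustration (not part of the original post) of how attribute access turns a plain function into a bound method:
class Plain:
    def print(self, value=None):
        print("Answer:", value)

plain = Plain()
# A plain function is a descriptor: it has a __get__ method, and attribute
# access through an instance invokes it to build the bound method.
func = Plain.__dict__["print"]
bound = func.__get__(plain, Plain)
bound(12)                     # same as plain.print(12)
print(bound == plain.print)   # True: same function bound to same instance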
What happens with class methods is that the function object has a __get__ method that generates bound versions of itself, as the sketch above illustrates. Our Printer protocol does not define it, so mypy is unable to type the bound method correctly.
Typing with Protocol, take 2
So... we add the function descriptor methods to our Protocol! A lot of this is taken from this discussion.
# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040
class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""
    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""
class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""
    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)
    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...
    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...
    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""
    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # function has a __get__ method to generate bound versions of itself;
    # the Printer protocol now defines the descriptor methods too, so mypy
    # can type the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works! It's typed! And mypy is happy!

24 October 2024

Emmanuel Kasper: back to blogging and running a feed reader as a containerized systemd service

After reading about Jonathan McDowell's feed reader install and the back to blogging initiative, I decided to install a feed reader to follow all those nice blog posts. With a feed reader you can compose your own feed of news based on blog posts, websites, and mastodon toots, and then you are independent from the ad-oriented ranking algorithms of social networks. Since Jonathan used FreshRSS as a feed reader, I started with the same software. At a quick glance at its GitHub page, it sounded like a good project:
  • active contributions
  • different channels for stable and latest version of the software
  • container images pointing to the stable release
  • supports multiple databases for storage, including PostgreSQL
  • correct documentation mentioning security caveats
I prefer to do the container image installation using podman since:
  • upgrades from FreshRSS are easy to do and can be done separately from operating system upgrades
  • I do not mess up my base operating system with PHP (subjective), and in case of a compromised FreshRSS, the freshrss/apache install would still be restrained to its own Linux namespaces, separated from the rest of the system.
Podman is image-compatible with Docker as they both implement the OCI runtime specification, and they have a nearly identical command line interface. This installation will be done on a Debian server, but should also work on any Linux distribution.
Initial setup
  • start a container image based on the start command provided by the FreshRSS project. The podman command line is nearly identical to the docker command line, except that podman expects the fully qualified domain name associated with the container image, and I chose to run the freshrss container on the localhost interface only. I also use a defined version tag, because using the latest tag makes it complicated to track which exact version I have installed.
# podman pull docker.io/freshrss/freshrss:1.20.1
# podman run --detach --restart unless-stopped --log-opt max-size=10m \
  --publish 127.0.0.1:8081:80 \
  --env TZ=Europe/Paris \
  --env 'CRON_MIN=1,31' \
  --volume freshrss_data:/var/www/FreshRSS/data \
  --volume freshrss_extensions:/var/www/FreshRSS/extensions \
  --name freshrss \
  docker.io/freshrss/freshrss:1.20.1
  • verify where the podman volumes have been created. This is where the user data of freshrss will be stored.
# podman volume ls
# podman volume inspect freshrss_data
  • now that freshrss is installed, you can start its configuration wizard at localhost:8081. You should keep the default sqlite choice.
  • finally, after running the wizard, you can log in again and add some feeds.
  • verify that your config has been stored outside the container, and inside the volume (so that it will not be erased in case of upgrades)
# ls -l /var/lib/containers/storage/volumes/freshrss_data/_data/users/
  • verify the state of the sqlite database
echo '.tables' | sqlite3 /var/lib/containers/storage/volumes/freshrss_data/_data/users/<your freshrss user>/db.sqlite
category  entry     entrytag  entrytmp  feed      tag
Going with FreshRSS in Production
Podman has this very nice feature that it can generate a systemd unit from a running container, and use systemd to start the container on boot. This is in contrast to docker, where the docker daemon does the stop/start of containers on boot. I prefer the systemd approach as it treats containers the same way as other system services. Once the freshrss container is running, we can generate a systemd unit for it with:
# podman generate systemd --new --name freshrss | tee /etc/systemd/system/container-freshrss.service
Let's stop the container we started previously, and use systemd to manage it:
# podman stop freshrss
# systemctl enable --now container-freshrss.service
We can verify that we have a listening socket on the localhost interface, on source port 8081:
# systemctl status container-freshrss.service
  ...
# ss --listening --numeric --process '( sport = 8081 )'
Netid         State           Recv-Q          Send-Q                   Local Address:Port                   Peer Address:Port         Process         
tcp           LISTEN          0               4096                         127.0.0.1:8081                        0.0.0.0:*             users:(("conmon",pid=4464,fd=5))
Nota bene: conmon(8) is the process managing the network namespace in which freshrss is running, hence it is displayed as the process owning the listening socket.
Exposing FreshRSS to the external world
We now have a running service, but we need to make it reachable from the internet. The simplest, classical way is to create a subdomain and a VirtualHost configured as a reverse proxy to access the service at 127.0.0.1:8081. Fortunately the FreshRSS authors have documented this setup in https://github.com/FreshRSS/FreshRSS/tree/edge/Docker#alternative-reverse-proxy-using-apache and those steps are no different from any standard application behind a web reverse proxy.
Upgrading the freshrss container to a newer version
Documentation showing how to install a piece of software is worth little if it does not show how to upgrade said software. Installing is easy, upgrading is where the challenge is. Fortunately, thanks to the good stateless design of FreshRSS (everything is in the sqlite database, which is backed by a non-ephemeral volume in our setup), switching versions is a piece of cake.
# podman pull docker.io/freshrss/freshrss:1.20.2
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.1,docker.io/freshrss/freshrss:1.20.2,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service
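Should the new version misbehave, rolling back is the same sequence with the version tags swapped; a minimal sketch, assuming the 1.20.1 image is still available locally:
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.2,docker.io/freshrss/freshrss:1.20.1,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service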
Rolling back is just a matter of reverting the version numbers, as sketched above. Enjoy your own feed reader! I will add the following feeds of blogs I like; let us see if I follow them better with a feed reader!

21 October 2024

Sven Hoexter: Terraform: Making Use of Precondition Checks

I'm in the unlucky position of having to deal with GitHub. Thus I have a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets. Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human-readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
# tfvars Input for Module
org_secrets = {
    "SECRET_A" = {
        repos = [
            "infra-foo",
            "infra-baz",
            "deployment-foobar",
        ]
    }
    "SECRET_B" = {
        repos = [
            "job-abc",
            "job-xyz",
        ]
    }
}
# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos"  
    query           = "org:myorg archived:false -is:public -is:private"
    include_repo_id = true
 
/*
The properties of the github_repositories.repos data source queried
above contain only lists. Thus we have to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
    # Assemble the set of repository names we need repo_ids for
    repos = toset(flatten([for v in var.org_secrets : v.repos]))
    # Walk through all names in the query result list and check
    # if they're also in our repo set. If yes add the repo name -> id
    # mapping to our resulting map
    repos_and_ids = {
        for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
    }
}
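As a side note (not from the original post), Terraform's built-in zipmap() function could build the same name -> id map without the indexed for expression; the filter then becomes optional, since the lookups guarded by can() further down tolerate extra entries. A minimal sketch:
locals {
    # Hypothetical alternative: pair the parallel names/repo_ids lists from
    # the data source into a single name -> id map in one call
    repos_and_ids_alt = zipmap(
        data.github_repositories.repos.names,
        data.github_repositories.repos.repo_ids,
    )
}
The explicit for expression above has the advantage of keeping the map limited to the repositories actually referenced in the configuration.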
resource "github_actions_organization_secret" "org_secrets"  
    for_each        = var.org_secrets
    secret_name     = each.key
    visibility      = "selected"
    # the logic how the secret value is sourced is omitted here
    plaintext_value = data.xxx
    selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
    ]
 
Now if we do something bad, i.e. delete a repository and forget to remove it from the configuration for the module, we receive an error message that a (numeric) repository ID could not be found. That is pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but was deleted recently. Luckily, since version 1.2 terraform supports precondition checks, which we can use in an output block to report which repository is missing. What we need is the set of missing repositories and the validation condition:
locals {
    # Debug facility in combination with an output and precondition check
    # There we can report which repository we still have in our configuration
    # but no longer get as a result from the data provider query
    missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}
# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
    value = local.missing_repos
    precondition {
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
    }
}
Now you only have to be aware that GitHub is GitHub: the TF provider has open bugs and is not supported by GitHub, so you will encounter inconsistent results. But it works, even if your terraform apply fails that way.

10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic as it tends to not be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises of how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an exploratory analysis. Some results include: Guillem also performed a cursory analysis and reported other problem categories, such as mismatching directory permissions for directories installed by multiple packages, and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. This feature was added to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests, so transitions like this involve some manual bootstrapping followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
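As an illustrative sketch (not from the report): the unittest behaviour described above can be observed by running test discovery over a directory containing no tests and checking the exit status, which is non-zero on Python 3.12 and 0 on earlier versions:
import subprocess
import sys
import tempfile

# Run unittest discovery in an empty directory and report the exit status:
# non-zero on Python 3.12 (no tests were run), 0 on earlier versions.
with tempfile.TemporaryDirectory() as empty:
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", empty],
        capture_output=True,
    )
    print("exit status:", proc.returncode)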

Miscellaneous contributions
  • Carles prepared the update of python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. Unfinished: it is pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming to have it functional end to end for a lightning talk at MiniDebConf Toulouse (November) and to get feedback from the wider public on this proof of concept.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually get that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild reviewing and improving a MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move, with more and more key packages such as libvirt or fuse3 fixed. We're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but still fails for any architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix its autopkgtest and a dependency problem. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group in LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

26 September 2024

Melissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone! The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year's event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year's edition and this blog post summarizes the experience and its fruits. One of the highlights of this year's hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remotely). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here. The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.
  • Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.
  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.
Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.
  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the run-time schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.
Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.
  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.
  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists. With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the Linux Display stack layers.

Color/HDR (Kernel-Level) Harry Wentland (AMD) led this session. Here, kernel developers shared the color management pipelines of AMD, Intel and NVidia. We had diagrams and explanations from HW vendors' developers, who discussed differences, constraints and paths to fit them into the KMS generic color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware. Upstream work related to this session:

Color/HDR (Compositor-Level) Sebastian Wick (RedHat) led this session. It started with Sebastian's presentation covering Wayland color protocols and compositor implementation, plus an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, and discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria. Upstream work related to this session:

Color/HDR (Use Cases and Testing) Christopher Cameron (Google) and Melissa Wen (Igalia) led this session. In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections on real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and how to emulate HDR. We also discussed the usage of a kernel background color property. Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency Mario Limonciello (AMD) led this session. Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power saving policy. This work was already merged in the kernel, and the userspace support is under development. Upstream work related to this session:

Strategy for video and gaming use-cases Leo Li (AMD) led this session. Miguel Casas (Google) started this session with a presentation on overlays in Chrome/OS video, explaining the main goal of saving power by switching off the GPU for accelerated compositing and the challenges of different colorspaces/HDR for video on Linux. Then Leo Li presented different strategies for video and gaming, and we discussed the userspace need for more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugFS interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API Xaver Hugl (KDE/BlueSystems) led this session. Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads: if a modeset takes too long, the thread will be killed, and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them. Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties; for example, color management updates take much longer. In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, and also an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more and beer We also had sessions to discuss a new KMS API to mitigate headaches on VRR and Frame Limit, such as different brightness levels at different refresh rates, abrupt changes of refresh rate, low frame rate compensation (LFC) and precise timing in VRR. On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people. The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamically mux-switching the display signal between discrete and integrated GPUs. In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a "Linux Display wish list", which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and wayland protocols for advertising to clients which colorspaces are supported. We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, and addressing issues such as flickering when color primaries don't match. After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrela Galicia beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions Participants provided valuable feedback on the hackfest, including suggestions for future improvements.
  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest We can't help but thank the 40 participants, who engaged in person or virtually in relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda. A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topics and guiding discussions. My acknowledgment to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face. Finally, thanks to the virtual participants who couldn't make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable inputs even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo (Red Hat), Jiri Koten (RedHat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD). We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

31 August 2024

Russell Coker: Links August 2024

Bruce Schneier and Kim Córdova wrote an insightful article about the changes that corporations make to culture as technical debt [1]. We need anti-trust laws to be enforced before it's too late! Bruce Schneier posted the transcript of an insightful lecture he gave on rethinking democracy for the age of AI [2]. Cory Doctorow wrote an insightful blog post about companies that are too big to care [3]. We need to break up those monopolies. Science Alert has an interesting article on plans to get renewable energy by drilling into the magma chamber of an active volcano [4]. What I want to know is whether using the energy could reduce the power of an eruption or even prevent it from happening. Bruce Schneier wrote an interesting article about Crowdstrike and the market incentives for brittle systems [5]. Also we need to have more formally proven software and more use of systems like seL4. Dave's Garage on YouTube has an interesting video about modern mainframes [6]. Their IO capacity dwarfs the memory bandwidth of most PC servers. Framework has an interesting YouTube video about the process of developing a RISC-V motherboard for their laptops [7]. The documentary series Who Broke Britain by ABC News gives a good insight into the harm caused by austerity policies [8]. Rolling Stone has an interesting story about the consequences of being a CIA agent in al Qaeda [9].

24 August 2024

Dirk Eddelbuettel: RcppEigen 0.3.4.0.2 on CRAN: Micro Maintenance

A new maintenance release of RcppEigen is now on CRAN, and will go to Debian shortly as usual. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. RcppEigen is used by 460 other CRAN packages, and has been downloaded 31.9 million times just off the mirrors of CRAN that keep logs for counting. The recent change switching to Authors@R (now that CRAN mandates it) contained two typos in the ORCID tags; this release fixes them. The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.2 (2024-08-23)
  • Correct two typos in the ORCID tag

Courtesy of CRANberries, there is also a diffstat report for the most recent release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 August 2024

Jonathan McDowell: Thoughts on Advent of Code + Rust

Diego wrote about his dislike for Advent of Code and that reminded me I hadn't written up my experience from 2023. Mostly because, spoiler, I never actually completed it and always intended to do so and then write it up. I think it's time to accept I'm not going to do that, and write down some thoughts before I forget all of them. These are somewhat vague, given the time that's elapsed, but I think still relevant. You might also find Roger's problem write up interesting. I've tried AoC a couple of times before; I think I had a very brief attempt back in 2021, and I got 4 days in for 2022. For Advent of Code 2023 I tried much harder to actually complete the challenges, and got most of the way there. I didn't allow myself to move on to the next day until fully completing the previous day, and didn't end up doing the second half of December 24th, or any of December 25th.

Rust First I want to talk about Rust, which is the language I chose to use for the problems. I've dabbled a little in it, but I'd like more familiarity with the basic language, and some programming problems seemed like a good way to get that. It's a language I want to like; I've spent a lot of my career writing C, do more in Go these days, and generally think Rust promises a low level, run-time light environment like C but with the rough edges taken off. I set myself the challenge of using just bare Rust; no external crates, no use of cargo. I was accused of playing on hard mode by doing this, but it really wasn't the intention - I figured that I should be able to do what I needed without recourse to anything outside the core language, and didn't want what seemed like the extra complexity of dealing with cargo. That caused problems, however. I'm used to by-default generic error handling in Go through the error type, but Rust seems to have much more tightly typed errors. I was pointed at anyhow as the right way to do this in Rust. I still find this surprising; I ended up using unwrap() a lot when I think with more generic error handling I could have used ?. The other thing I discovered is that by default rustc is heavy on the debug output. I got significantly better results on some of the solutions with rustc -O -C target-cpu=native source.rs. I probably shouldn't be surprised by this, but it's worth noting. Rust, to me, has a syntax only a C++ programmer could love. I am not a C++ programmer. Coming from C I found Go to be a nice, simple syntax to learn. Rust has not been the same. There's a lot more punctuation, and it's not always clear to me what it's doing. This applies more when reading other people's code than when writing it myself, obviously, but I see a lot of Rust code that could give Perl a run for its money in terms of looking like line noise. The borrow checker didn't bug me too much, but did add overhead to my thinking. The Rust compiler is generally very good at outputting helpful error messages when the programmer is an idiot. I ended up having to use a RefCell for one solution, and using .iter() for loops rather than explicit iterators (why, why is this different?). I also kept forgetting to explicitly mark variables as mutable when declaring them. Things I liked? There's a rich set of first class data types. Look, I'm a C programmer, I'm easily pleased. You give me some sort of hash array and I'll be happy. Rust manages that, tuples, strings, all the standard bits any modern language can provide. The whole impl thing for adding methods to structures I like as a way of providing some abstraction, though I think Go has a nicer syntax for it. The compiler, as mentioned, is great at spitting out useful errors for the most part. Also although I wasn't using external crates for AoC I do appreciate there's a decent ecosystem there now (though that brings up another gripe: Rust seems to still be a fairly fast moving target, to the extent I can no longer rely on the compiler in Debian stable to be able to compile random projects I find).
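For illustration, a minimal sketch (mine, not Jonathan's code) of the Box<dyn Error> approach that keeps ? usable in bare Rust without pulling in anyhow; sum_of_lines is a hypothetical helper that sums one integer per line of a file:
use std::fs;

// Both io::Error (from read_to_string) and ParseIntError (from parse)
// convert into Box<dyn Error> automatically, so ? works for either.
fn sum_of_lines(path: &str) -> Result<i64, Box<dyn std::error::Error>> {
    let input = fs::read_to_string(path)?;
    let mut total = 0;
    for line in input.lines() {
        total += line.trim().parse::<i64>()?;
    }
    Ok(total)
}
The trade-off is that the concrete error type is erased, which is roughly what anyhow papers over more ergonomically.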

Advent of Code Let's talk about the Advent of Code bit now. Hopefully it's long enough since it came out that this won't be spoilers for anyone, but if you haven't attempted the 2023 AoC and might, you might want to stop reading here. First, a refresher on the format for those who might not be aware of it. Problems are posted daily from December 1st until the 25th. Each is in 2 parts; the second part is not viewable until you have provided the correct answer for the first part. There's a whole leaderboard thing going on, but the puzzle opens at midnight UTC-5 so generally by the time I wake up and have time to look the problem has been solved many times over; no chance of getting listed. Credit to AoC creator, Eric Wastl, for writing up the set of problems in an entertaining fashion. I quite enjoyed seeing how the puzzle would be phrased each day, and the whole thing obviously brings a lot of joy to folk I know. I always start AoC thinking it'll be a fun set of puzzles to solve. Then something happens and I miss a day or two, and all of a sudden I've a bunch of catching up to do and it's all a bit more of a chore. I hit that at some points this time, but made a concerted effort to try and power through it. That perseverance was required up front, because I found the second part of Day 1 to be ill specified, and had to iterate a few times to actually calculate the desired solution (IIRC, issues about whether sevenone at the end of a line ended up as 7 or 1 really tripped me up; see the sketch at the end of this post). I don't recall any other problems that bit me as hard on the specification as this one, but it happening up front was unfortunate. The short example input doesn't always help with this either; either it's not enough to be able to extrapolate patterns, or it doesn't show all the variations you need to account for (that aren't fully specified in the text), or in a few cases it turned out I needed to understand the shape of the actual data to produce a solution that could actually complete in a reasonable time. Which brings me to another matter: sometimes brute force doesn't actually work. This is fine, but the second part of the day's problem can change the approach you'd take. So sometimes I got lucky in the way I handled the first half, and doing the second half was a simple 5 minute tweak, and sometimes I had to entirely change the way I was storing data. You might claim that if I was a better programmer I'd have always produced a first half solution that was amenable to extension for the second half. First, I dispute that; I think there are always situations where the problem domain can change in enough directions that you can't handle all of them without a lot of effort. Secondly, I didn't find AoC an environment that encouraged me to optimise for generic solutions. Maybe some of the puzzles in isolation would allow for that, but a month of daily problems to solve while still engaging in regular life meant I hacked things up, took short cuts based on the knowledge I had of the input data, etc, etc. Overall I can see the appeal, but the sheer quantity and the fact I write code as part of my day job just made it feel too much like a chore, rather than a fun mental exercise. I did wonder how they'd look as a set of interview puzzles (obviously a subset, rather than all of them), but I'm not sure how you'd actually use them for that - I wouldn't want anyone to have to solve them in a live interview. So, in case it's not obvious, I'm not planning to engage in AoC again this year.
But I'm continuing to persevere with Rust (though most of my work stuff is thankfully still Go).
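To make the Day 1 ambiguity concrete, here is a sketch (my reconstruction of the puzzle, not code from this post) that scans every byte position and never consumes a match, so an overlapping tail like sevenone yields both 7 and 1, giving a calibration value of 71:
const WORDS: [&str; 9] = ["one", "two", "three", "four", "five",
                          "six", "seven", "eight", "nine"];

// Collect every digit, literal or spelled out, at every position.
// AoC input is ASCII, so byte indexing into the &str is safe here.
fn digits(line: &str) -> Vec<u32> {
    let bytes = line.as_bytes();
    let mut out = Vec::new();
    for i in 0..bytes.len() {
        if bytes[i].is_ascii_digit() {
            out.push((bytes[i] - b'0') as u32);
        } else if let Some(d) = WORDS.iter().position(|w| line[i..].starts_with(w)) {
            out.push(d as u32 + 1);
        }
    }
    out
}

fn calibration(line: &str) -> u32 {
    let d = digits(line);
    d.first().unwrap() * 10 + d.last().unwrap()
}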

21 August 2024

Russ Allbery: Review: These Burning Stars

Review: These Burning Stars, by Bethany Jacobs
Series: Kindom Trilogy #1
Publisher: Orbit
Copyright: October 2023
ISBN: 0-316-46342-6
Format: Kindle
Pages: 430
These Burning Stars is a science fiction thriller with cyberpunk vibes. It is Bethany Jacobs's first novel and the first of an expected trilogy, and won the 2024 Philip K. Dick Award for the best SF paperback original published in the US. Generation starships brought humanity to the three star systems of the Treble, where they've built a new and thriving culture of billions. The Treble is ruled by the Kindom, a tripartite government structure built around the worship of six gods and the aristocratic power of the First Families. The Clerisy handle religion, the Secretaries run the bureaucracy, and the Cloaksaan enforce the decisions of the other branches. The Nightfoots are one of the First Families. They control sevite, the propellant required to move between the systems of the Treble now that the moon Jeve and the sole source of natural jevite has been destroyed. Esek Nightfoot is a cleric, theoretically following the rules of the Clerisy, but she has made a career of training cloaksaan. She is mercurial, powerful, ruthless, ambitious, politically well-connected, and greatly feared. She is also obsessed with a person named Six: an orphan she first encountered at a training school who was too young to have a gender or a name but who was already one of the best fighters in the school. In the sort of manipulative challenge typical of Esek, she dangled the offer of a place as a student and challenged the child to learn enough to do something impressive. The subsequent twenty years of elusive taunts and mysterious gifts from the impossible-to-locate Six have driven Esek wild. Cleric Chono was beside Esek for much of that time. One of Six's classmates and another of Esek's rescues, Chono is the rare student who became a cleric rather than a cloaksaan. She is pious, cautious, and careful, the opposite of Esek's mercurial rage, but it's impossible to spend that much time around the woman and not be affected and manipulated by her. As this story opens, Chono is summoned by the First Cleric to join Esek on an assignment: recover a data coin that was stolen in a pirate raid on the Nightfoot compound. He refuses to tell them what data is on it, only saying that he believes it could be used to undermine public trust in the Nightfoot family. Jun is a hacker with considerably fewer connections to power or government and no desire to meet any of these people. She and her partner Liis make a dubiously legal living from smaller, quieter jobs. Buying a collection of stolen data coins for an archivist consortium is riskier than she prefers, but she's been tracking down rumors of this coin for months. The deal is worth a lot of money, enough to make a huge difference for her family. This is the second book I've read recently with strong cyberpunk vibes, although These Burning Stars mixes them with political thriller. This is a messy world with complicated political and religious systems, a lot of contentious history, and vast inequality. The story is told in two interleaved time sequences: the present-day fight over the data coin and the information that it contains, and a sequence of flashbacks telling the history of Esek's relationship with Six and Chono. Jun's story is the most cyberpunk and the one I found the most enjoyable to read, but Chono is a good viewpoint character for Esek's vicious energy and abusive charisma. Six is not a viewpoint character. For most of the book, they're present mostly in shadows, glimpses, and consequences, but they're the strongest character of the book.
Both Esek and Six are larger than life, creatures of legend stuffed into mundane politics but too full of strong emotions, both good and bad, to play by any of the rules. Esek has the power base and access to the levers of government, but Six's quiet competence and mercilessly targeted morality may make them the more dangerous of the pair. I found the twisty political thriller part of this book engrossing and very difficult to put down, but it was also a bit too much drama for me in places. Jacobs has some surprises in store, one of which I did not expect at all, and they're set up beautifully and well-done within the story, but Esek and Six become an emotional star that the other characters orbit around and are in danger of getting pulled into. Chono is an accomplished and powerful character in her own right, but she's also an abuse victim, and while those parts are realistic, I didn't entirely enjoy reading them. There is quiet competence here alongside the drama, but I think I wanted the balance of emotion to tip a bit more towards the competence. There is one thing that Jacobs does with the end of the book that greatly impressed me. Unfortunately I can't even hint at it for fear of spoilers, but the ending is unsettling in a way that I found surprising and thought-provoking. I think what I can say is that this book respects the intelligence and skill of secondary characters in a way that I think is rare in a story with such overwhelming protagonists. I'm still thinking about that, and it's going to pull me right into the sequel. This is not going to be to everyone's taste. Esek is a viewpoint character and she can be very nasty. There's a lot of violence and abuse, including one rather graphic fight scene that I thought dragged on much longer than it needed to. But it's a satisfying, complex story with a true variety of characters and some real surprises. I'm glad I read it. Followed by On Vicious Worlds, not yet published as I write this. Content warnings: emotional and physical abuse, graphic violence, off-screen rape and sexual abuse of minors. Rating: 7 out of 10

16 August 2024

Dirk Eddelbuettel: RcppEigen 0.3.4.0.1 on CRAN: Minor Maintenance

A new maintenance release of RcppEigen is now on CRAN, and will go to Debian shortly as usual. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. A very recent change in the development version of R (aka r-devel) expanded the set of declared BLAS and LAPACK functions (and I tooted approvingly about it as well). It turns out that the xerbla() declaration there (which, as usual for R and as discussed in Writing R Extensions, defines the new optional character length entry for a char vector) conflicts with one in the blas.h header in Eigen, upsetting the compilation of just one reverse-dependency. So CRAN, as they so often (and quietly) do in these cases, gave us a friendly and concise heads-up and asked for a change. We complied, did the usual reverse-dependency check of the other 400+ packages using RcppEigen, and produced the new release, which was injected into the repository during the current summer break. The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.1 (2024-08-14)
  • Conditionally comment-out xerbla in blas.h as it is now provided by R-devel albeit with FC_LEN_T (per a CRAN request)
  • Minor package updates (continuous integration, badges)

Courtesy of CRANberries, there is also a diffstat report for the most recent release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 August 2024

Kalyani Kenekar: One Backpack, One Passport: My First Solo Trip

Planning A Self-Organized Solo Trip You know the movie Queen? In that movie the actor Kangana Ranaut plays the role of Rani Mehra, a 24-year-old Punjabi woman: a simple, homely girl who was always reliant on her family. Similar to Rani, I too rarely ventured out without my parents and often needed my younger sibling by my side. Inspired by her transformation, I decided it was time to take control of my own story and discover who I truly am. Queen movie picture of Kangana

Trip Requirements

My First Passport The journey began with a significant first step: obtaining my first passport. Never having had one before, I scheduled the nearest available interview date, June 29, 2022. This meant traveling to Solapur, a city 309 km from my hometown, accompanied by my father. After successfully completing the interview, I received my passport on July 14, 2022.

Select A Country, Booking Flights And Accommodation Excited and ready to embark on my adventure, I planned a trip to Albania and booked the flight tickets. Why? I had heard from friends that it was a beautiful European country with beaches and other attractions, and importantly, it didn't require a visa for Indian citizens and was more affordable than other European destinations. Before heading to Albania, I planned an overnight stop in Abu Dhabi with a transit visa, thanks to a friend who knew the process for obtaining it. Some of my friends were also traveling to Europe at the same time, quite close to my plans, but I only realized that after the trip.

Day 1, Starting The Experience On July 20, 2022, I started my journey by traveling from Pune, Maharashtra, to Delhi, where my brother lives. He came to see me off at the airport, adding a touch of warmth and support to the beginning of my solo adventure. Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed at a backpacker hostel named Zostel, Paharganj, Delhi to rest. During my stay, I noticed that many travelers at the hostel carried rucksacks, which sparked a desire in me to get one for my own trip to Europe. Up until then, I had always shopped with my mom and had never bought anything on my own. Inspired by the travelers, I set out to find a suitable rucksack. I traveled alone by metro from Paharganj to Rohini to visit a Decathlon store, where I purchased a 50-liter rucksack. This was a significant step in preparing for my European adventure and marked a milestone in my journey of self-reliance. Rucksack description tag Kalyani's backpack

Day 2, Flying To Abu Dhabi The following day, July 21, 2022, I had a flight to Abu Dhabi. I spent the night at the hostel to rest before my journey. On the day of the flight, I needed to reach the airport by 3 PM, and a friend kindly came to drop me off. With my rucksack packed and excitement building, I was ready for the next leg of my adventure. When we arrived at the airport, my friend saw me off, marking the start of my international journey. With mom-made spices, chutneys, and chilli flakes packed for comfort, I completed my immigration process in about two and a half hours. I then settled at the gate for my flight, feeling a mix of excitement and anxiety as thoughts raced through my mind. Mom-made spices Passport and boarding pass To ease my nerves, I struck up a conversation with a man seated nearby who was also traveling to Abu Dhabi for work. He provided helpful information about safety and transportation in Abu Dhabi, which reassured me. With the boarding process complete and my anxiety somewhat eased, I found my window seat on the flight and settled in, excited for the journey ahead. Next to me was a young man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining factory. We had an engaging conversation about work culture in Abu Dhabi and recruitment from India. Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and began finding my way to the hotel Premier Inn Abu Dhabi, which was in the airport area. To my surprise, I ran into the same man from the flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly accepted since navigating an unfamiliar city with a short acquaintance felt safer. At the hotel gate, he asked if I had local currency (Dirhams) for payment, as sometimes online transactions can fail. That hadn't crossed my mind, and I realized I might be left stranded if a transaction failed. Recognizing his help as a godsend, I asked if he could lend me some Dirhams, promising to transfer the amount later. He kindly assured me I could pay him back once I reached the hotel room. With that relief, I checked into the hotel, feeling deeply grateful for the unexpected assistance, and transferred the money to him after getting to my room. Dirham money Hotel room Kalyani in hotel room

Day 3, Flying And Arriving In Tirana Once in the hotel room, I found it hard to sleep, anxious about waking up on time for my flight. I set an alarm to wake up early, but my subconscious mind kept me alert, and I woke up before the alarm went off. I got freshened up and went down for breakfast, where I found some vegetarian options like Idli-Sambar and bread with butter, along with some morning tea. After breakfast, I headed back to the airport, ready to catch my flight to my final destination: Tirana, Albania. Breakfast at hotel Airport area I reached Tirana, Albania after a six-hour flight, feeling exhausted and suffering from a headache. The air pressure had blocked my ears, and jet lag added to my fatigue. After collecting my checked luggage, I headed to the first ATM machine at the airport. Struggling to insert my card, I asked a nearby gentleman for help. He tried his best, but my card got stuck inside the machine. Panic set in as I worried about how I would survive without money. Taking a deep breath, I found an airport employee and explained the situation. The gentleman stayed with me, offering support and repeatedly apologizing for his mistake. However, it wasn't his fault; the ATM was out of order, which I hadn't noticed. My focus was solely on retrieving my ATM card. The airport employee worked diligently, using a hairpin to carefully extract my card. Finally, the card was freed, and I felt an immense sense of relief, grateful for the help of these kind strangers. I used another ATM, successfully withdrew money, and then went to an airport mobile SIM shop to buy a new SIM card for local internet and connectivity. SIM plans

Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country I had booked a stay at a backpacker hostel near the city center of Tirana. After sorting out the ATM and SIM card issues, I searched for a bus or any transport to get there. It was quite late, around 8:30 PM, and being in a new city, I was in a hurry. I saw a bus nearly leaving the airport, stopped it, and asked if it went to the city center. They gave me the green flag, so I boarded the airport service bus and reached the city center. Feeling very tired, I discovered that the hostel was about an hour and a half away by walking. Deciding to take a cab, I faced a challenge as the driver couldn't understand my English or accent. Using a mobile translator to convert my address from English to Albanian, I finally communicated my destination to him. With that sorted out, I headed to the Blue Door Backpacker Hostel, arrived around 9 PM, relieved to have finally reached my destination, and checked in. Hostel gate Street in Tirana I found my top bunk bed, only to realize I had booked a mixed-gender dormitory. This detail had completely escaped my notice during the booking process. I felt unsure about how to handle the situation. Coincidentally, my experience mirrored what Kangana faced in the movie Queen. Feeling acidic due to an empty stomach and the exhaustion of heavy traveling, I wasn't up to cooking in the hostel's kitchen. I asked the front desk about the nearest restaurant. It was nearly 9:30 PM, and the streets were deserted. To avoid any mishaps like in the movie Queen, I kept my passport securely locked in my bag, ensuring it wouldn't be a victim of theft. Venturing out for dinner, I felt uneasy on the quiet streets. I eventually found a restaurant recommended by the hostel, but the menu was almost entirely non-vegetarian. I struggled to ask about vegetarian options and was uncertain if any dishes contained eggs, as some people consider eggs to be vegetarian. Feeling frustrated and unsure, I left the restaurant without eating. I noticed a nearby grocery store that was about to close and managed to get a few extra minutes to shop. I bought some snacks, wafers, milk, and tea bags (though I couldn't find tea powder to make Indian-style tea). Returning to the hostel, I made do with wafers, cookies, and milk for dinner. That day was incredibly tough for me; filled with exhaustion and struggle in a new country, I was on the verge of tears. I made a video call home before sleeping on the top bunk bed. It was a new experience for me, sharing a room with both unknown men and women. I kept my passport safe inside my purse and under my pillow while sleeping, staying very conscious about its security.

Day 5, Exploring Nearby Places I woke up the next day at noon. After having some coffee, the hostel management girl asked if I wanted breakfast. She offered curd with cornflakes, which I refused because I don't like curd. Instead, I ordered a pizza from a vegetarian pizza place with her help, and I started feeling better. I met some people in the hostel, some from Syria and others from Italy. I struggled to understand their accents but kept pushing myself to get involved in their discussions. Despite the challenges, I felt more at ease and was slowly adapting to my new environment. I went out from the hostel in the evening to buy some vegetables to cook something. I searched for shops and found some potatoes, tomatoes, and rice. I decided to cook Khichdi, an Indian dish made with rice, and added some chili flakes I had brought from home. After preparing my dinner, I ate and then went to sleep again. Vegetable shop Cooking in kitchen Food

Day 6, Tirana's Recent History The next day, I planned to explore the city and visited Bunkart-1, a fascinating museum in a massive underground bunker from the communist era. Originally built as a shelter for Albania's political and military elite, it now offers a unique glimpse into the country's history under Enver Hoxha's oppressive regime. The museum's exhibits include historical artifacts, photographs, and multimedia displays that detail the lives of Albanians during that time. Walking through the dimly lit corridors, I felt the weight of history and gained a deeper understanding of Albania's past. Bunkart museum photos

Day 7-8, Meeting Friends From India The next day, I unexpectedly met Chirag, who was returning from the Debian Conference 2022 held in Prizren, Kosovo, and staying at the same hostel. When I encountered him, he was talking on the phone, and I recognized he was Indian by his accent. I introduced myself, and we discovered we had some mutual friends. Chirag told me that our common friend, Raju, was also coming to stay at the hostel the next day. This news made me feel relaxed and happy to have people I knew around. When Raju arrived, the three of us, Chirag, Raju, and I, planned to have dinner at an Indian restaurant and explore Tirana city. I had a great time talking and enjoying their company. Friends on street

Day 9-10, Meeting More Friends Raju had a ticket to leave soon, so Chirag and I made a plan to visit Shkodër and the nearby Komani Lake for kayaking. We started our journey early in the morning by bus and reached Shkodër. There, we met new friends from the conference, Pavit and Abraham, who were already there. We had dinner together and enjoyed an ice cream treat from Chirag. Friends at dinner

Day 12, Kayaking And Saying Goodbye To Friends The next day, Pavit and Abraham had a flight back to India, so Chirag and I went to Komani Lake. We had an adventurous time kayaking, even though neither of us knew how to swim. We took a ferry through the backwaters to the island on Komani Lake and enjoyed a fantastic adventure together. After our trip, Chirag returned to Tirana for his flight back to India, leaving me to continue my journey alone. Lake with mountain Kayak

Day 13, Climbing Rozafa Castle Stopping at Shkodër, I visited Rozafa Castle. Despite the language barrier, as most locals only spoke Albanian, people around me guided me correctly on how to get there. At times, I used applications like Google Translate to communicate. To read signs or hotel menus, I used Google Photos' language converter. I even used the audio converter to understand and speak some basic Albanian phrases. View from top of castle Rozafa Castle I took a bus from Shkodër to the southern part of Albania, heading to Sarandë. The journey lasted about five to six hours, and I had booked a stay at Mona's Hostel. Upon arrival, I met Eliza from America, and we went together to Ksamil Beach, spending a wonderful day there.

Day 14, Vlora Beach: Beachside Cycling Next, I traveled to Vlorë, where I stayed for one day. During my time there, I enjoyed beachside cycling with a cycle provided by the hostel owner and spent some time feeding fish. I also met a fellow traveler from Delhi who had brought along some preserved Indian curry. He kindly shared it with me, which was a welcome change after nearly 15 days without authentic Indian cuisine, except for what I had cooked myself in various hostels. Sunset on beach Kalyani on beach Beach with street Beachside cycling

Day 15-16, Visiting Durrës, Traveling Back To Tirana I then visited Durrës, exploring its beautiful beaches, before heading back to Tirana one day before my flight home. On the day of my flight, my alarm didn't go off, and I woke up late at the hostel. In a frantic rush, I packed everything in just five minutes and dashed toward the city center to catch the bus to the airport. If I had been just five minutes later, I would have missed the bus; thankfully, I managed to stop it just in time and arrived at the airport without delay, reflecting on the incredible adventure I had experienced. After clearing immigration, I boarded my flight, which had a layover in Warsaw, Poland. The journey from Tirana to Warsaw took about two and a half hours, followed by a seven to eight-hour flight from Poland back to India. Once I arrived in Delhi, I returned to Zostel and booked a train ticket to Aurangabad for the next three days.

Backview This trip was an incredible adventure for me. I never imagined I could accomplish something like this, but I did. Meeting diverse people, experiencing different cultures, and learning so much made this journey truly unforgettable. Looking back, I realize how much I've grown from this experience. Although I may have more opportunities to travel abroad in the future, this trip will always hold a special place in my heart. The memories I made and the incredible people I met along the way are irreplaceable. This experience goes beyond what I can express through this blog or words; it was incredibly precious to me. Every moment of this journey is etched in my memory, and I am grateful for every part of it.

30 July 2024

Russell Coker: Links July 2024

Interesting Scientific American article about the way that language shapes thought processes and how it was demonstrated in eye tracking experiments with people who have Aboriginal languages as their first language [1]. David Brin wrote an interesting article Do We Really Want Immortality [2]. I disagree with his conclusions about the politics though. Better manufacturing technology should allow decreasing the retirement age while funding schools well. Scientific American has a surprising article about the differences between Chimp and Bonobo parenting [3]. I'd never have expected Chimp moms to be protective. Sam Varghese wrote an insightful and informative article about the corruption in Indian politics and the attempts to silence Australian journalist Avani Dias [4]. WorksInProgress has an insightful article about the world's first around the world solo yacht race [5]. It has some interesting ideas about engineering. Htwo has an interesting video about adverts for fake games [6]. It's surprising how they apparently make money from advertising games that don't exist. Elena Hashman wrote an insightful blog post about Chronic Fatigue Syndrome [7]. I hope they make some progress on curing it soon. The fact that it seems similar to long Covid, which is quite common, suggests that a lot of research will be applied to that sort of thing. Bruce Schneier wrote an insightful blog post about the risks of MS Copilot [8]. Krebs has an interesting article about how Apple does Wifi AP based geo-location and how that can be abused for tracking APs in warzones etc. Bad Apple! [9]. Bruce Schneier wrote an insightful blog post on How AI Will Change Democracy [10]. Charles Stross wrote an amusing and insightful post about MS Recall titled Is Microsoft Trying to Commit Suicide [11]. Bruce Schneier wrote an insightful blog post about seeing the world as a data structure [12]. Luke Miani has an informative YouTube video about eBay scammers selling overpriced MacBooks [13]. The Yorkshire Ranter has an insightful article about Ronald Coase and the problems with outsourcing big development contracts as an array of contracts without any overall control [14].

21 July 2024

Russell Coker: SE Linux Policy for Dell Management

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I've pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320 and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better, but usually such hope is in vain. I deliberately haven't submitted this for inclusion in the reference policy because it's for proprietary software and also because it permits many operations that we would prefer not to permit. The policy is after the break because it's larger than you want on a Planet feed. But first I'll give a few selected lines that are bad in a noteworthy way:
  1. sys_admin means the ability to break everything
  2. dac_override means break Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn't feel like writing a device driver. Maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code; in any case this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections, usually related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad: reading raw memory allows access to pretty much everything, and executing raw memory is something I can't imagine a good use for. The Reference Policy doesn't use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
  8. storage_raw_write_fixed_disk shouldn't be needed for this sort of thing; it doesn't do anything that involves managing partitions.
Now, without network access or other obvious ways of remote control, this level of access, while excessive, isn't necessarily going to allow bad things to happen due to outside attack. But if there are bugs in the software there's nothing to stop it from giving the worst results.
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.
Here is dellsrvadmin.te:
policy_module(dellsrvadmin,1.0.0)
require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}
type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)
type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_exec_t)
type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)
domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)
allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };
allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;
modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)
allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;
kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)
# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)
corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)
logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };
allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;
kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)
corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)
files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)
logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)
Here is dellsrvadmin.fc:
/opt/dell/srvadmin/sbin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd        --        gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)

17 July 2024

Gunnar Wolf: Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly; weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I haven't cared to share them with anybody, as I believe them to be quite trivial and quite dirty. But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty. Forecast I am using OpenWeather's open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktops at three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, that currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather
if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi
if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi
CURRENT=/run/weather.json
FORECAST=/run/forecast.json
wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"
This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
(yes, it runs even if I'm not logged in, wasting some of my free API calls, but within reason) Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:
"custom/weather":  
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
 ,
This script basically morphs a generic weather JSON description into another set of JSON bits that display the weather the way I prefer to have it displayed:
#!/usr/bin/ruby
require 'json'
Sources = { :weather => '/run/weather.json',
            :forecast => '/run/forecast.json'
          }
Icons = { '01d' => ' ', # d = day
          '01n' => ' ', # n = night
          '02d' => ' ',
          '02n' => ' ',
          '03d' => ' ',
          '03n' => ' ',
          '04d' => ' ',
          '04n' => ' ',
          '09d' => ' ',
          '10n' => ' ',
          '10d' => ' ',
          '13d' => ' ',
          '50d' => ' '
        }
# (The icon values are glyphs from an icon font; they do not survive
# re-publication as plain text, so they appear as blanks above.)
ret = { 'text' => nil, 'tooltip' => nil, 'class' => 'weather', 'percentage' => 100 }
# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather], 'r').read)
  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || ' ' + weather['weather'][0]['icon'] + weather['weather'][0]['main']
  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']
  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']
  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s %2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = ' %2d%%' % hum
  portions[:wind] = ' %2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map { |p| portions[p] }.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s - %s: %s)' % [Sources[:weather], err.class, err.to_s]
end
# Weather prevision for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''
  day = Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f, i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]
    icon = Icons[f['weather'][0]['icon']] || ' ' + f['weather'][0]['icon'] + f['weather'][0]['main']
    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '        <b>%04d.%02d.%02d</b>' %
              [time.year, time.month, time.day]
    end
    cast << '%s %2.2f°C %2d%% %s %s' % [time_lbl,
                                        f['main']['temp'],
                                        f['main']['humidity'],
                                        icon,
                                        f['weather'][0]['description']
                                       ]
  end
  cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
  ret['tooltip'] = cast.join("\n")
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s - %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end
# Print out the result for Waybar to process
puts ret.to_json
The end result? Nothing too stunning, but definitively something I find useful and even nicely laid out: Screenshot Do note that OpenWeather seems to return the name of the closest available meteorology station with (most?) recent data; for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

Mike Gabriel: Weather Experts with Translation Skills Needed!

Lomiri Weather App goes Open Meteo In Ubuntu Touch / Lomiri, Maciej Sopyło has updated Lomiri's Weather App to operate against a different weather forecast provider (Open Meteo). Additionally, the new implementation is generic and pluggable, so other weather data providers can be added in later. Big thanks to Maciej for working on this just in time (the previous implementation's API has recently been EOL'ed and is not available anymore to Ubuntu Touch / Lomiri users). Lomiri Weather App - new Meteorological Terms part of the App now While the old weather data provider implementation obtained all the meteorological information as already localized strings from the provider, the new implementation requires all sorts of weather conditions to be translated within the Lomiri Weather App itself. The meteorological terms are probably not easy to translate for the usual software translator, so special help might be required here. Call for Translations: Lomiri Weather App So, if you feel able to help here, please join the Hosted Weblate service [1] and start working on the Lomiri Weather App [2]. Thanks a lot! light+love
Mike Gabriel (aka sunweaver) [1] https://hosted.weblate.org/
[2] https://hosted.weblate.org/projects/lomiri/lomiri-weather-app/

10 July 2024

Petter Reinholdtsen: Some notes from the 2024 LinuxCNC Norwegian developer gathering

The Norwegian LinuxCNC developer gathering 2024 is over. It was a great and productive weekend, and I am sad that it is over. Regular readers probably still remember what LinuxCNC is, but here is a quick summary for those that forgot: LinuxCNC is a free software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It eats G-code and produces motor movement and other changes to the physical world, while reading sensor input. I am not quite sure about the total head count, as not all people were present at the gathering the entire weekend, but I believe it was close to 10 people showing their faces at the gathering. The "hard core" of the group, who stayed the entire weekend, were two from Norway, two from Germany and one from England. I am happy with the outcome of the gathering. We managed to wrap up a new stable LinuxCNC release 2.9.3 and even tested it on real hardware within minutes of the release. The release notes for 2.9.3 are still being written, but should show up on the project site in the next few days. We managed to go through around twenty pull requests and merge them into either the stable release (2.9) or the development branch (master). There are still around thirty pull requests left to process, so we are not out of work yet. We even managed to fix/improve a slightly worn lathe, and experiment with running a mechanical clock using G-code. The evening barbeque worked well both on Saturday and Sunday. It is quite fun to light up a charcoal grill using compressed air. Sadly the weather was not the best, so we stayed indoors most of the time. This gathering was made possible partly with sponsoring from both Redpill Linpro, Debian and NUUG Foundation, and we are most grateful for the support. I would also like to thank the local school for lending us some furniture, and of course the rest of the members of the organizers team, Asle and Bosse, for their countless contributions. The gathering was such a success that we want to do it again next year. We plan to organize the next Norwegian LinuxCNC developer gathering at the end of June next year, the weekend of Friday 27th to Sunday 29th of June 2025. I recommend you reserve the dates on your calendar today. Other related communities are also welcome to join in, for example those working on systems like FreeCAD and opencamlib, as I am sure we have much in common and sharing experiences would be very useful to all involved. We are of course looking for sponsors for this gathering already. The total budget for this gathering was around NOK 25.000 (around EUR 2.300), so our needs are quite modest. Perhaps a machine or tools company would like to help out the free software manufacturing community by sponsoring food, lodging and transport for such a gathering?

3 July 2024

Sahil Dhiman: RTI to NPL Regarding Their NTP Infrastructure

I became interested in the Network Time Protocol (NTP) last year after learning how fundamental this protocol is to the functioning of the global Internet. NTP helps synchronize clocks on devices over the Internet, which is essential for secure browsing, timestamping, keeping everyone in sync or just checking what time it is. Computers usually have a hardware real-time clock (RTC) but that deviates over time, so an occasional sync over NTP is required to keep the time accurate. Many network and IoT devices don't have a hardware RTC so rely even more on NTP. Accurate time keeping starts with reference clocks like atomic clocks, GPS etc. Multiple government standards agencies host these reference clocks, which are regarded as Stratum 0. Stratum 1 servers are known as primary servers, and directly connect to Stratum 0 clocks for time. Stratum 1 servers then distribute time to Stratum 2 and further down the hierarchy. Computers typically connect to one or more Stratum 1/2/3 servers to get their time. Someone has to host these public Stratum 1, 2 and 3 NTP servers. That's what the NTP pool, a global effort by volunteers, does. They provide NTP servers for the public to use. As of today, there are 4700+ servers in the pool which are free for anyone to use. Now let's come to the reason for writing this post. The Indian Computer Emergency Response Team (CERT-In) in April 2022 released a set of cybersecurity directions which set the alarm bells ringing. The Internet Society (and almost everyone else) wrote about it. And then there was this specific section about NTP:
All service providers, intermediaries, data centres, body corporate and Government organisations shall connect to the Network Time Protocol (NTP) Server of National Informatics Centre (NIC) or National Physical Laboratory (NPL) or with NTP servers traceable to these NTP servers, for synchronisation of all their ICT systems clocks. Entities having ICT infrastructure spanning multiple geographies may also use accurate and standard time source other than NPL and NIC, however it is to be ensured that their time source shall not deviate from NPL and NIC.
CSIR-National Physical Laboratory (NPL) is the official timekeeper for India and hosts the only public Stratum 1 clock in India, according to the NTP pool website. So I was naturally curious to know what kind of infrastructure they're running for NTP. India has a Right to Information (RTI) Act which, like the Freedom of Information Act (FOIA) in the United States, gives citizens the right to request information from governmental entities, to which they have to respond within 30 days. So last year, I filed two sets of RTI (one after the first reply came) inquiring about NPL's public NTP server setup. The first RTI had some generic questions:
First RTI
This gave a vague idea about the setup, so I sat down and came up with some specific questions in the next RTI.
Second RTI
Feel free to draw your own conclusions from it now. Bear in mind these were filed last year, so things might have changed. Do let me know if you have more information about it. Update (07/07/2024): Found an article from Medianama about Indian government time servers with information sourced through RTI.
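As a closing illustration (my own sketch, not part of the RTI material): an SNTP time query is a single 48-byte UDP packet each way, which is part of why volunteer-run pool servers scale so well. Assuming the public pool.ntp.org mentioned above and RFC 4330 framing:
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // 48-byte client request: the first byte encodes LI=0, VN=4, Mode=3.
    let mut packet = [0u8; 48];
    packet[0] = 0x23;
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.set_read_timeout(Some(Duration::from_secs(5)))?;
    socket.send_to(&packet, "pool.ntp.org:123")?;
    let mut response = [0u8; 48];
    socket.recv_from(&mut response)?;
    // The server's transmit timestamp (seconds since 1900-01-01) sits in bytes 40..44.
    let secs_1900 = u32::from_be_bytes(response[40..44].try_into().unwrap());
    let unix_secs = secs_1900 - 2_208_988_800; // offset between the 1900 and 1970 epochs
    println!("server time (unix seconds): {unix_secs}");
    Ok(())
}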
