Search Results: "matrix"

5 January 2026

Hellen Chemtai: Starting out on Open source: Try the OpenQA Debian Image Testing Project

Hello world! I am an Outreachy intern and have been contributing to the Debian Images Testing project since October 2025. The project is open source and everyone can contribute to it in some way: it uses openQA to automatically install operating system images and test them. We have a community of contributors here that is always ready to help out, and the mentors and project maintainers are very open to contributions; they listen to innovative ideas and explain what has been done so far. Contributions so far have taken several forms. Contributing to this project requires some knowledge of Linux commands and operating systems; the rest we will learn as we go.
Preparation
Before any contribution begins, we want to first try out the project, run a couple of tests, and understand what we are doing. Say you are starting out as a Windows or macOS user and want to contribute: I would recommend dual-booting your device first. Do enough research and prepare the resources. The network install image needs at least a 4 GB USB flash drive. You will use Debian as the second operating system; give it enough space, I recommend around 150 GB or more, and assign at least 1 GB to the /boot/efi partition to prevent low-space warnings after a while. This is a good way to learn about image installation, which is part of the work. I do not recommend VirtualBox because it will hinder full use of system resources. This process will take a day or two.
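For the USB preparation step, a minimal sketch of writing the netinst image to a flash drive from an existing Linux machine might look like this. The device path /dev/sdX is a placeholder you must replace after checking lsblk; the guard below keeps the destructive dd from running until you do.

```shell
#!/bin/sh
# Hypothetical sketch: write the Debian netinst ISO to a USB stick.
# DEV is a placeholder -- find the real device with `lsblk` first.
ISO=debian-13.1.0-amd64-netinst.iso
DEV=/dev/sdX

if [ "$DEV" = "/dev/sdX" ]; then
    # Safety guard: refuse to write until the placeholder is replaced.
    echo "Edit DEV to point at your USB stick before running." >&2
else
    # dd copies the ISO byte-for-byte; conv=fsync flushes before exiting.
    sudo dd if="$ISO" of="$DEV" bs=4M status=progress conv=fsync
    sync
fi
```

Any tool that does a raw byte-for-byte copy works here; graphical writers are fine too, as long as the image is not "burned" as a data filesystem.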
Set Up and Testing
After dual booting, we log into our Debian system. The next set of instructions will take you through how we set up and run our tests. These instructions involve many Linux commands; if you are a newbie, you will be learning as you go through the steps. Try to understand these commands and do not blindly copy and paste. You can start your contributions here if you have a suggestion to add to the install docs. Run some tests, then log in to the Web UI as per the instructions to view their progress: green means a test has passed, blue means it is still running, red means it failed.
Trying Out Ideas
Kudos if you have reached this point; the community of contributors will help if you are stuck. We get to try out our own variations of tests using variables, and we rely on the documentation to understand the configurations and test commands, like these:
openqa-cli api -X POST isos ISO=debian-13.1.0-amd64-netinst.iso DISTRI=debian VERSION=stable FLAVOR=netinst-iso ARCH=x86_64 BUILD=1310 # This is the test you will run from the guide
openqa-cli api -X POST isos ISO=debian-13.1.0-amd64-netinst.iso DISTRI=debian VERSION=stable FLAVOR=netinst-iso ARCH=x86_64 BUILD=1310 TEST=cinnamon # I have added a TEST variable that runs only the cinnamon test suite
You can check specific test suites from the WebUI:
We get some failures at times. Here are some failed tests from a build I was working on.
Here we find the cinnamon test failed at the locale module. Clicking on any module above leads us to the needles and points to where the test failed. You can check the error, or try to add a needle if it is a needle failure.
Try editing a test module and test your changes. Try out some ideas. Read the documentation folder and write some pseudo code. Interact with the community and try working on some of its tasks. Create your own tests and add them to the configuration. There is a lot you can work on in this community. It may seem hard to grasp at first if you are new to open source, but the community will help you throughout, even if the problem seems small. We are very friendly and the code maintainers have extensive knowledge. Sit with us during one of our meetings and you will learn a lot about the project. Learning, networking and communicating are all part of contributing to the broader community.
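Besides watching jobs in the WebUI, their state can also be queried over openQA's REST API with the same openqa-cli tool used above. A hedged sketch, where the localhost host and the job id 1 are placeholder assumptions (take the real id from the WebUI job URL):

```shell
#!/bin/sh
# Hypothetical sketch: inspect a job's state/result via the openQA REST API.
HOST=http://localhost   # assumption: a local openQA instance
JOB_ID=1                # placeholder: take the real id from the WebUI URL

if command -v openqa-cli >/dev/null 2>&1; then
    # Returns JSON describing the job, including fields like state and result.
    openqa-cli api --host "$HOST" jobs/"$JOB_ID" \
        || echo "query failed (no local openQA instance?)" >&2
else
    echo "openqa-cli not installed; install the openQA client package first." >&2
fi
```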

Bits from Debian: Debian welcomes Outreachy interns for December 2025-March 2026 round

Outreachy logo Debian continues participating in Outreachy, and as you might have already noticed, Debian has selected two interns for the Outreachy December 2025 - March 2026 round. After a busy contribution phase and a competitive selection process, Hellen Chemtai Taylor and Isoken Ibizugbe have been officially working as interns on Debian Images Testing with OpenQA for the past month, mentored by Tássia Camões Araújo, Roland Clobus and Philip Hands. Congratulations and welcome, Hellen Chemtai Taylor and Isoken Ibizugbe! The team also congratulates all candidates for their valuable contributions, with special thanks to those who managed to continue participating as volunteers. From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science. The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentoring and outreach tasks, the Software Freedom Conservancy's administrative support, and the continued support of Debian's donors, who provide funding for the internships. Join us and help to improve Debian! You can follow the work of the Outreachy interns by reading their blog posts (syndicated in Planet Debian), and chat with the team in the debian-openqa Matrix channel. For Outreachy matters, the programme admins can be reached on the #debian-outreach IRC/Matrix channel and mailing list.

11 December 2025

Dirk Eddelbuettel: #056: Running r-ci with R-devel

Welcome to post 56 in the R4 series. The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. That post also introduced the notion of using a container in the matrix of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. When running CI and relying on r2u for the "fast, easy, reliable: pick all three!" provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI, and with that setup CI runs tend to be reliably faster. This situation is still evolving: I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me that we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:
    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}
This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry), along with the (usually commented-out) optional macOS setup (third entry). The second entry brings in the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with the alias RD) and uses it when present. And that is all there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs in the rcppint64 repo actions log. Another example is now also at Jeff's mmap repo. It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens, this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file, and under normal circumstances it is not needed. Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.
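The runner's R-devel detection can be pictured in a few lines of shell. This is a sketch of the idea, not the actual run.sh code; it assumes, as the post notes, that the drd container exposes R-devel via an RD binary or alias:

```shell
#!/bin/sh
# Sketch: prefer the R-devel binary (RD, as shipped in rocker/drd) when
# present, otherwise fall back to the regular R release binary.
if command -v RD >/dev/null 2>&1; then
    RBIN=RD     # R-devel found, e.g. inside the rocker/drd container
else
    RBIN=R      # default: the regular R release
fi
echo "Using R binary: $RBIN"
# All subsequent steps would then invoke "$RBIN" CMD build / check, etc.
```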

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

2 November 2025

Guido Günther: Free Software Activities October 2025

Quite some things made progress last month: we put out the Phosh 0.50 release, got closer to enabling media roles for audio by default in Phosh (see the related post), and reworked our image builds. You should also (hopefully) notice some nice quality-of-life improvements once the changes land in a distro near you, if you're using Phosh. See below for details: phosh, phoc, phosh-mobile-settings, stevia (formerly phosh-osk-stub), phosh-tour, meta-phosh, xdg-desktop-portal-phosh, libphosh-rs, Calls, Phrog, phosh-recipes, feedbackd, feedbackd-device-themes, Chatty, Squeekboard, Debian, Cellbroadcastd, gnome-settings-daemon, gnome-control-center, gnome-initial-setup, sdm845-mainline, gnome-session, alpine, droid-juicer, phosh-site, phosh-debs, Linux, Wireplumber, Phosh.mobi e.V., demo-phones. Reviews: this is not code by me but reviews of other people's code; the list is (as usual) slightly incomplete. Thanks for the contributions! Comments? Join the Fediverse thread

27 October 2025

Dirk Eddelbuettel: #054: Faster r-ci Continuous Integration via r2u Container

Welcome to post 54 in the R4 series. The topic of continuous integration has been a recurrent theme here at the R4 series. Post #32 introduces r-ci, while post #41 brings r2u to r-ci, but does not show a matrix deployment. Post #45 describes the updated r-ci setup that is now the default and contains a macOS and Ubuntu matrix, where the latter relies on r2u to keep things "fast, easy, reliable". Last but not least, the more recent post #52 shares a trick for ensuring coverage reports. Following #45, r-ci has seen steady use at, for example, GitHub Actions, with very reliable performance. With the standard setup, a vanilla Ubuntu instance is changed into one supported by r2u. This requires downloading and installing a few Ubuntu packages, and has generally been fairly quick, on the order of around forty seconds. Now, the general variability of run-times for identical tasks in GitHub Actions is well documented by the results of the setup described in post #39, which still runs weekly. It runs the identical SQL query against a remote backend using two different package families. And lo and behold, the intra-method variability on unchanged code or setup, and therefore due solely to system variability, is about as large as the inter-method variability. In short, GitHub Actions performance varies randomly with significant variability. See the repo README.md for a chart that updates weekly (and see #39 for background). Of late, this variability became more noticeable during standard GitHub Actions runs, where it would regularly take more than two minutes of setup time before actual continuous integration work was done. Some caching seems to be in effect, so subsequent runs in the same repo seem faster and often came back to one minute or less. For lightweight and small packages, losing two minutes to setup when the actual test time is a minute or less gets old fast. Looking around, we noticed that container use can be combined with matrix use.
So we have now been deploying the following setup (not always over all the matrix elements, though):
jobs:
  ci:
    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}
GitHub Actions is smart enough to provide NULL for container in the two other cases, so container: ${{ matrix.container }} is ignored there. But when container is set, as here for the CI-enhanced version of r2u (which adds a few binaries commonly needed for CI such as git, curl, wget, etc.), then the CI job runs inside the container, and thereby skips most of the setup time as the container is already prepared. This also required some small adjustments in the underlying shell script doing the work. To not disrupt standard deployment, we placed these into a release candidate / development version one can opt into via a new variable dev_version:
      - name: Setup
        uses: eddelbuettel/github-actions/r-ci@master
        with:
          dev_version: 'TRUE'
Everything else remains the same and works as before, but faster, as much less time is spent on setup. You can see the actual full yaml file and actions in my repositories for rcpparmadillo and rcppmlpack-examples. Additional testing would be welcome, so feel free to deploy this in your actions now. Otherwise I will likely carry this over and make it the default in a few weeks' time. It will still work as before, but when the added container: line is used it will run much faster thanks to rocker/r2u4ci being already set up for CI.


3 October 2025

Guido Günther: Free Software Activities September 2025

Another short status update of what happened on my side last month. Nothing stands out too much; I enjoyed doing the OSK changes the most, as they helped improve the typing experience further. Doing a small bit of kernel work again was fun too (I still need to figure out the 6mq's touch controller responsiveness, though). See below for details on the above and more: phosh, phoc, phosh-mobile-settings, stevia (formerly phosh-osk-stub), xdg-desktop-portal-phosh, pfs, libphosh-rs, Phrog, feedbackd, feedbackd-device-themes, Debian, gnome-settings-daemon, git-buildpackage, govarnam, Sessions, twenty-twenty-hugo, tuwunnel, Linux, mutter, Phosh debs, phosh-site. Reviews: this is not code by me but reviews I did on other people's code; the list is (as usual) slightly incomplete. Thanks for the contributions! Help Development: if you want to support my work, see donations. Comments? Join the Fediverse thread

1 October 2025

Dirk Eddelbuettel: #052: Running r-ci with Coverage

Welcome to post 52 in the R4 series, following up on post #51 yesterday and its stated intent of posting some more here. The r-ci setup (which was introduced in post #32 and updated in post #45) offers portable continuous integration which can take advantage of different backends: GitHub Actions, Azure Pipelines, GitLab, Bitbucket, and possibly more, as it only requires a basic Ubuntu shell, after which it customizes itself and runs via shell script. Portably. Now, most of us, I suspect, still use it with GitHub Actions, but it is reassuring to know that one can take it elsewhere should the need or desire arise. One thing many repos did, and which stopped working reliably, is coverage analysis. This is made easy by the covr package, and made "fast, easy, reliable" (as we quip) thanks to r2u. But transmission to codecov stopped working a while back, so I had mostly commented it out in my repos, rendering the reports more and more stale. Which is not ideal. A few weeks ago I gave this another look, and it turns out that codecov now requires an API token to upload. One can generate this in the user -> settings menu on their website, under the "access" tab. It in turn needs to be stored in each repo wanting to upload; for GitHub, this is under settings -> secrets and variables -> actions as a "repository secret". I suggest using CODECOV_TOKEN for its name. After that, the three-line block in the yaml file can reference it as a secret as in the following snippet, now five lines, taken from one of my ci.yaml files:
      - name: Coverage
        if: ${{ matrix.os == 'ubuntu-latest' }}
        run: ./run.sh coverage
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
It takes the secret we stored on the website, references it via the secrets. prefix, and assigns it to the environment variable CODECOV_TOKEN. After this, reports flow again, as one can see on repositories where I re-enabled it, for example here for RcppArmadillo.
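Under the hood, the ./run.sh coverage step presumably boils down to running covr and letting it pick up the token from the environment. A hedged sketch (not the actual run.sh contents; it assumes Rscript and the covr package are available, and uses a dummy token for illustration):

```shell
#!/bin/sh
# Hypothetical sketch of a coverage step: covr computes package coverage and
# uploads it to codecov, reading CODECOV_TOKEN from the environment.
: "${CODECOV_TOKEN:=dummy-token-for-illustration}"   # placeholder, not a real token
export CODECOV_TOKEN

if command -v Rscript >/dev/null 2>&1; then
    Rscript -e 'covr::codecov(quiet = FALSE)' \
        || echo "coverage upload failed (covr installed? token valid?)" >&2
else
    echo "Rscript not found; this sketch requires an R installation." >&2
fi
```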


8 September 2025

Dirk Eddelbuettel: RcppArmadillo 15.0.2-1 on CRAN: New Upstream, Some Changes

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1279 other packages on CRAN, downloaded 41.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 644 times according to Google Scholar. Fairly shortly after the release of Armadillo 14.6.3 (which became our preceding release), Conrad released version 15.0.0 of Armadillo. It contains several breaking changes which made it unsuitable for CRAN as is. First, the minimum C++ standard was lifted to C++14. There are many good reasons for doing so, but doing it unannounced with no transition offered cannot work at CRAN. For (mostly historical) reasons (as C++11 was once a step up from even more archaic code), many packages still enforce C++11; these packages would break under Armadillo 15.0.0. With some reverse-dependency checks we quickly found that this would affect over 200 packages. Second, Armadillo frequently deprecates code and issues a warning offering a hint towards an improved / current alternative. That too is good practice, and recommended. But at CRAN, a deprecation warning leads to a NOTE, and an unfixed NOTE can lead to archival, which is not great. For that reason, we had added a global muffler suppressing the deprecation warnings. This mechanism was removed with the move to C++14 (as the code now relies on newer compiler attributes we cannot undo, whereas the old scheme just used macros). Again, not something we can do at CRAN. So I hatched the plan of offering both the final release under the previous state of the world, i.e. 14.6.3, along with 15.0.0. Users could then opt into 15.0.0 but have a fallback of 14.6.3, allowing all packages to coexist. The first idea was to turn the fallback on only if C++11 compilation was noticed. This still led to a lot of deprecation warnings, meaning we had to default more strongly to 14.6.3: only if users actively select 15.0.* do they get it. This seemed to work. We were still faced with two hard failures. It turns out one was local to a since-archived package; the other revealed a bad interaction between Armadillo and gcc for complex variables and led to bug fix 15.0.1, which I wrapped and uploaded a week ago. Release 15.0.1 was quickly seen as only temporary because I had overlooked two issues. Moving 'legacy' and 'current' into directories of those names meant that no top-level include header armadillo remained. This broke a few use cases where packages use a direct #include <armadillo> (which we do not recommend as it may miss a few other settings we apply for the R case). The other is that an old test for limited LAPACK capabilities made noise in some builds (for example on Windows). As (current) R versions generally have sufficient LAPACK and BLAS, this too has been removed. We have tested and retested these changes extensively. The usual repository with the logs shows eight reverse-dependency runs, each of which can take up to a day. This careful approach, together with pacing uploads at CRAN, usually works. We got a NOTE for seven uploads in six months for this one, but a CRAN maintainer quickly advanced it, and the fully automated tests showed no regression and required no further human intervention, which is nice given the set of changes and the over 1200 reverse dependencies. The changes since the last announced CRAN release 14.6.3 follow.

Changes in RcppArmadillo version 15.0.2-1 (2025-09-08)
  • Upgraded to Armadillo release 15.0.2-1 (Medium Roast)
    • Optionally use OpenMP parallelisation for fp16 matrix multiplication
    • Faster vectorisation of cube tubes
  • Provide a top-level include file armadillo as fallback (#480)
  • Retire the no-longer-needed check for insufficient LAPACK as R now should supply a sufficient libRlapack (when used) (#483)
  • Two potential narrowing warnings are avoided via cast

Changes in RcppArmadillo version 15.0.1-1 (2025-09-01)
  • Upgraded to Armadillo release 15.0.1-1 (Medium Roast)
    • Workaround for GCC compiler bug involving misoptimisation of complex number multiplication
  • This version contains both 'legacy' and 'current' version of Armadillo (see also below). Package authors should set a '#define' to select the 'current' version, or select the 'legacy' version (also chosen as default) if they must. See GitHub issue #475 for more details.
  • Updated DESCRIPTION and README.md

Changes in RcppArmadillo version 15.0.0-1 (2025-08-21) (GitHub Only)
  • Upgraded to Armadillo release 15.0.0-1 (Medium Roast)
    • C++14 is now the minimum required C++ standard
    • Added preliminary support for matrices with half-precision fp16 element type
    • Added second form of cond() to allow detection of failures
    • Added repcube()
    • Added .freeze() and .unfreeze() member functions to wall_clock
    • Extended conv() and conv2() to accept the "valid" shape argument
  • Also includes Armadillo 14.6.3 as fallback for C++11 compilations
  • This new 'dual' setup had been rigorously tested with five interim pre-releases of which several received full reverse-dependency checks

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.


1 September 2025

Guido Günther: Free Software Activities August 2025

Another short status update of what happened on my side last month. I released Phosh 0.49.0 and added some more QoL improvements to the Phosh mobile stack (e.g. around cell broadcasts). I also pulled my SHIFT6mq out of the drawer (where it had been sitting far too long) and got it to show a picture after a small driver fix; thanks to the work the sdm845-mainlining folks are doing, that was all that was needed. If I can get touch to work better, that would be another nice device for demoing Phosh. See below for details on the above and more: phosh, phoc, phosh-mobile-settings, stevia (formerly phosh-osk-stub), xdg-desktop-portal-phosh, pfs, feedbackd, feedbackd-device-themes, libcmatrix, Chatty, Debian, Cellbroadcastd, ModemManager, gnome-clocks, git-buildpackage, mobian-recipies, Linux. Reviews: this is not code by me but reviews of other people's code; the list is (as usual) slightly incomplete. Thanks for the contributions! Help Development: if you want to support my work, see donations. Comments? Join the Fediverse thread

29 August 2025

Raju Devidas: Fixing Auto-Rotate screen orientation on PostmarketOS devices running MATE DE

I have been using my Samsung Galaxy Tab A (2015) with postmarketOS on and off since last year. It serves as a really good e-book reader with KOReader installed on it. I have tried Phosh and Plasma Mobile on it; they work nicely but slow the device down heavily (2 GB RAM and an old processor), so I use the MATE desktop environment. Lately I have started using this tablet alongside my laptop as a second screen for work, and it has been working super nicely for that. The only issue is that I have to manually rotate the screen to landscape every time I reboot the device; it resets the screen orientation to portrait after a reboot. So I went through the pmOS wiki, and a neat hack documented there worked very well for me. First we will test whether the auto-rotate sensor works and whether we can read values from it, so we install some basic necessary packages:
$ sudo apk add xrandr xinput inotify-tools iio-sensor-proxy
Enable the service for iio-sensor-proxy
$ sudo rc-update add iio-sensor-proxy
Reboot the device. Now, in the device terminal, start the sensor monitor with monitor-sensor:
user@samsung-gt58 ~> monitor-sensor
    Waiting for iio-sensor-proxy to appear
+++ iio-sensor-proxy appeared
=== Has accelerometer (orientation: normal, tilt: vertical)
=== Has ambient light sensor (value: 5.000000, unit: lux)
=== No proximity sensor
=== No compass
    Light changed: 14.000000 (lux)
    Accelerometer orientation changed: left-up
    Tilt changed: tilted-down
    Light changed: 12.000000 (lux)
    Tilt changed: vertical
    Light changed: 13.000000 (lux)
    Light changed: 11.000000 (lux)
    Light changed: 13.000000 (lux)
    Accelerometer orientation changed: normal
    Light changed: 5.000000 (lux)
    Light changed: 6.000000 (lux)
    Light changed: 5.000000 (lux)
    Accelerometer orientation changed: right-up
    Light changed: 3.000000 (lux)
    Light changed: 4.000000 (lux)
    Light changed: 5.000000 (lux)
    Light changed: 12.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: bottom-up
    Tilt changed: vertical
    Light changed: 1.000000 (lux)
    Light changed: 2.000000 (lux)
    Light changed: 4.000000 (lux)
    Accelerometer orientation changed: right-up
    Tilt changed: tilted-down
    Light changed: 11.000000 (lux)
    Accelerometer orientation changed: normal
    Tilt changed: vertical
    Tilt changed: tilted-down
    Light changed: 18.000000 (lux)
    Light changed: 21.000000 (lux)
    Light changed: 22.000000 (lux)
    Light changed: 19.000000 (lux)
    Accelerometer orientation changed: left-up
    Light changed: 17.000000 (lux)
    Tilt changed: vertical
    Light changed: 14.000000 (lux)
    Tilt changed: tilted-down
    Light changed: 16.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
    Light changed: 18.000000 (lux)
    Light changed: 17.000000 (lux)
As you can see, we can read the rotation values from the sensor as I rotate the tablet into different orientations. Now we just need a script that changes the screen orientation with xrandr according to the sensor value.
#!/bin/sh
killall monitor-sensor
monitor-sensor > /dev/shm/sensor.log 2>&1 &
while inotifywait -e modify /dev/shm/sensor.log; do
  ORIENTATION=$(tail /dev/shm/sensor.log | grep 'orientation' | tail -1 | grep -oE '[^ ]+$')
  case "$ORIENTATION" in
    normal)
      xrandr -o normal
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 1 0 0 0 1 0 0 0 1
      ;;
    left-up)
      xrandr -o left
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1
      ;;
    bottom-up)
      xrandr -o inverted
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" -1 0 1 0 -1 1 0 0 1
      ;;
    right-up)
      xrandr -o right
      xinput set-prop "Goodix Capacitive TouchScreen" "Coordinate Transformation Matrix" 0 1 0 -1 0 1 0 0 1
      ;;
  esac
done

auto-rotate-screen.sh

You need to replace the name of your touch input device in the script. You can get the name with xinput --list; make sure to run this in the device terminal.
user@samsung-gt58 ~> xinput --list
* Virtual core pointer                    	id=2	[master pointer  (3)]
*   * Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
*   * Zinitix Capacitive TouchScreen          	id=10	[slave  pointer  (2)]
*   * Toad One Plus                           	id=12	[slave  pointer  (2)]
* Virtual core keyboard                   	id=3	[master keyboard (2)]
    * Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    * GPIO Buttons                            	id=6	[slave  keyboard (3)]
    * pm8941_pwrkey                           	id=7	[slave  keyboard (3)]
    * pm8941_resin                            	id=8	[slave  keyboard (3)]
    * Zinitix Capacitive TouchScreen          	id=11	[slave  keyboard (3)]
    * samsung-a2015 Headset Jack              	id=9	[slave  keyboard (3)]
In our case it is a Zinitix capacitive touchscreen; it will be different for yours. Once your script is ready with the correct touchscreen name, save it and make it executable: chmod +x auto-rotate-screen.sh. Then test the script in your terminal with ./auto-rotate-screen.sh and stop it with Ctrl + C. Now we need to add this script to auto-start. In the MATE DE, go to System > Control Center > Startup Applications, click on the Custom Add button, browse to the script location, give it a name, and click the Add button. Now reboot the tablet/device, log in, and see the auto rotation working.

  1. Auto-Rotation wiki article on PostmarketOS Wiki https://wiki.postmarketos.org/wiki/Auto-rotation

26 July 2025

Bits from Debian: DebConf25 closes in Brest and DebConf26 announced

DebConf25 group photo - click to enlarge On Saturday 19 July 2025, the annual Debian Developers and Contributors Conference came to a close. Over 443 attendees representing 50 countries from around the world came together for a combined 169 events (including some which took place during the DebCamp), including more than 50 Talks, 39 Short Talks, 5 Discussions, 59 Birds of a Feather sessions ("BoF", an informal meeting between developers and users), 10 workshops, and activities in support of furthering our distribution and free software, learning from our mentors and peers, building our community, and having a bit of fun. The conference was preceded by the annual DebCamp hacking session held 7 through 13 July, where Debian Developers and Contributors convened to focus on their individual Debian-related projects or work in team sprints geared toward in-person collaboration in developing Debian. This year, a session was dedicated to preparing the BoF "Dealing with Dormant Packages: Ensuring Debian's High Standards"; another, at the initiative of our DPL, to preparing suggestions for the BoF "Package Acceptance in Debian: Challenges and Opportunities"; and an afternoon around Salsa-CI. As has been the case for several years, a special effort was made to welcome newcomers and help them become familiar with Debian and DebConf by organizing a "New Contributors Onboarding" sprint every day of DebCamp, followed more informally by mentorship during DebConf. The actual Debian Developers Conference started on Monday 14 July 2025. In addition to the traditional "Bits from the DPL" talk, the continuous key-signing party, lightning talks, and the announcement of next year's DebConf26, there were several update sessions shared by internal projects and teams.
Many of the hosted discussion sessions were presented by our technical core teams with the usual and useful "Meet the Technical Committee", the "What's New in the Linux Kernel" session, and a set of BoFs about Debian packaging policy and Debian infrastructure. Thus, more than a quarter of the discussions dealt with this theme, including talks about our tools and Debian's archive processes. Internationalization and Localization have been the subject of several talks. The Python, Perl, Ruby, Go, and Rust programming language teams also shared updates on their work and efforts. Several talks have covered Debian Blends and Debian-derived distributions and other talks addressed the issue of Debian and AI. More than 17 BoFs and talks about community, diversity, and local outreach highlighted the work of various teams involved in not just the technical but also the social aspect of our community; four women who have made contributions to Debian through their artwork in recent years presented their work. The one-day session "DebConf 2025 Academic Track!", organized in collaboration with the IRISA laboratory was the first session welcoming fellow academics at DebConf, bringing together around ten presentations. The schedule was updated each day with planned and ad hoc activities introduced by attendees over the course of the conference. Several traditional activities took place: a job fair, a poetry performance, the traditional Cheese and Wine party (this year with cider as well), the Group Photos, and the Day Trips. For those who were not able to attend, most of the talks and sessions were broadcasted live and recorded; currently the videos are made available through this link. Almost all of the sessions facilitated remote participation via IRC and Matrix messaging apps or online collaborative text documents which allowed remote attendees to "be in the room" to ask questions or share comments with the speaker or assembled audience. 
DebConf25 saw over 441 T-shirts, 3 day trips, and up to 315 meals planned per day. All of these events, activities, conversations, and streams coupled with our love, interest, and participation in Debian and F/OSS certainly made this conference an overall success both here in Brest, France and online around the world. The DebConf25 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf26 will be held in Santa Fe, Argentina, likely in July. Following tradition, before the next DebConf the local organizers in Argentina will start the conference activities with DebCamp, with a particular focus on individual and team work towards improving the distribution. DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf25 website for more details on this. Debian thanks the numerous sponsors who committed to supporting DebConf25, particularly our Platinum Sponsors: AMD, EDF, Infomaniak, Proxmox, and Viridien. We also wish to thank our Video and Infrastructure teams, the DebConf25 and DebConf committees, our host nation of France, and each and every person who helped contribute to this event and to Debian overall. Thank you all for your work in helping Debian continue to be "The Universal Operating System". See you next year! About Debian The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system. About DebConf DebConf is the Debian Project's developer conference. 
In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, Korea. More information about DebConf is available from https://debconf.org/. About AMD The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution. For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community. About EDF EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment. Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results. About Infomaniak Infomaniak is Switzerland's leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. 
With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers. About Proxmox Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are built on Debian, we are happy that they give back to the community by sponsoring DebConf25. About Viridien Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future. Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members. Contact Information For further information, please visit the DebConf25 web page at https://debconf25.debconf.org/ or send mail to press@debian.org.

15 July 2025

Valhalla's Things: Federated instant messaging, 100% debianized

Posted on July 15, 2025
Tags: madeof:bits, topic:xmpp, topic:debian
This is an approximation of what I told at my talk Federated instant messaging, 100% debianized at DebConf 25, for people who prefer reading text. There will also be a video recording, as soon as it's ready :) at the link above. Communicating is a basic human need, and today some kind of computer-mediated communication is a requirement for most people, especially those in this room. With everything that is happening, it's now more important than ever that these means of communication aren't controlled by entities that can't be trusted, whether because they can stop providing the service at any given time or, worse, because they are going to abuse it in order to extract more profit. If only there was a well established chat system based on some standard developed in an open way, with all of the features one expects from a chat system, but federated, so that one can choose between many different and independent providers, or even self-hosting. But wait, it does exist! I'm not talking about IRC, I'm talking about XMPP! While it has been around since the last millennium, it has not remained still, with hundreds of XMPP Extension Protocols, or XEPs, that have been developed to add all of the features that nobody in 1999 imagined we could need in Instant Messaging today, and more, such as IoT devices or even social networks. There is a myth that this makes XMPP a mess of incompatible software, but there is an XEP for that: XEP-0479: XMPP Compliance Suites 2023, which is a list of XEPs that need to be supported by Instant Messaging servers and clients, including mobile ones, and all of the recommended ones will mostly just work. These include conversations.im on Android, dino on Linux, which also works pretty nicely on Linux phones, gajim for a more fully featured option that includes the kitchen sink, profanity for text interface fanatics like me, and I've heard that monal works decently enough on the iThings. 
One thing that sets XMPP apart from other federated protocols is that it has already gone through the phase where everybody was on one very big server, which then cut out federation, and we've learned from the experience. These days there are still a few places that cater to newcomers, like https://account.conversations.im/, https://snikket.org/ (which also includes tools to make it easier to host your own instance) and https://quicksy.im/, but most people are actually on servers of a manageable size. My strong recommendation is for community hosting: not just self-hosting for yourself, but finding a community you feel part of and trust, and sharing a server with them, whether managed by volunteers from the community itself, or by a paid provider. If you are a Debian Developer, you already have one: you can go to https://db.debian.org/ , select Change rtc password to set your own password, wait an hour or so and you're good to go, as described at the bottom of https://wiki.debian.org/Teams/DebianSocial. A few years ago it had remained a bit behind, but these days it's managed by an active team, and if you're missing some features, or just want to know what's happening with it, you can join their BoF on Friday afternoon (and also thank them for their work). But for most people in this room, I'd also recommend finding a friend or two who can help as a backup, and running a server for your own families or community: as a certified lazy person who doesn't like doing sysadmin jobs, I can guarantee it's perfectly feasible, about in the same range of difficulty as running your own web server for a static site. The two most popular servers for this, prosody and ejabberd, are well maintained in Debian, and these days there isn't a lot more to do than installing them, telling them your hostname, setting up a few DNS entries, and then you mostly need to keep the machine updated and very little else. 
After that, it's just applying system security updates, upgrading everything every couple of years (some configuration updates may be needed, but nothing major) and maybe helping some non-technical users, if you are hosting your non-technical friends (the kind who would need support on any other platform).
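To give a feel for how little configuration "telling the server your hostname" actually involves, here is a minimal prosody virtual host sketch; the hostname is a placeholder, and a real deployment would also add DNS SRV records for _xmpp-client._tcp and _xmpp-server._tcp pointing at the host.

```lua
-- /etc/prosody/conf.d/example.org.cfg.lua (hostname is a placeholder)
VirtualHost "example.org"

-- a multi-user chat component, so your community can have group chats
Component "conference.example.org" "muc"
```

Prosody picks up certificates named after the host from its certificate directory, so with this in place and DNS set up, there really is not much left beyond keeping the machine updated.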
Question time (including IRC questions) included which server would be recommended for very few users (I use prosody and I'm very happy with it, but I believe ejabberd works just fine too); then somebody reminded me that I had forgotten to mention https://www.chatons.org/ , which lists free, ethical and decentralized services, including XMPP ones. I was also asked for a comparison with Matrix, which does cover a very similar target as XMPP, but I am quite biased against it, and I'd prefer to talk well of my favourite platform than badly of its competitor.

2 July 2025

Dirk Eddelbuettel: Rcpp 1.1.0 on CRAN: C++11 now Minimum, Regular Semi-Annual Update

rcpp logo With a friendly Canadian hand wave from vacation in Beautiful British Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and uploaded, Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow as well. The key highlight of this release is the switch to C++11 as the minimum standard. R itself did so in release 4.0.0 more than half a decade ago; if someone is really tied to an older version of R and an equally old compiler, then using an older Rcpp with it has to be acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R 3.5.* and work fine (with a new-enough compiler). In the previous release post, we commented that we had only one reverse dependency (falsely) come up in the tests by CRAN; this time there was none among the well over 3000 packages using Rcpp at CRAN. Which really is quite amazing, and possibly also a testament to our rigorous continued testing of our development and snapshot releases on the key branch. This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As just mentioned, we do of course make interim snapshot dev or rc releases available. While we no longer regularly update the Rcpp drat repo, the r-universe page and repo now really fill this role admirably (and with many more builds besides just source). We continue to strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well, and are of course also fully tested against all reverse dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3038 packages on CRAN depend on Rcpp for making analytical code go faster and further. 
On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 61.3% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 100.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 695. As mentioned, this release switches to C++11 as the minimum standard. The diffstat display in the CRANberries comparison to the previous release shows how several (generated) source files with C++98 boilerplate have now been removed; we also flattened a number of if/else sections as we no longer need to cater to older compilers (see below for details). We also managed more accommodation for the demands of tighter use of the C API of R by removing DATAPTR and CLOENV use. A number of other changes are detailed below. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.0 (2025-07-01)
  • Changes in Rcpp API:
    • C++11 is now the required minimal C++ standard
    • The std::string_view type is now covered by wrap() (Lev Kandel in #1356 as discussed in #1357)
    • A last remaining DATAPTR use has been converted to DATAPTR_RO (Dirk in #1359)
    • Under R 4.5.0 or later, R_ClosureEnv is used instead of CLOENV (Dirk in #1361 fixing #1360)
    • Use of lsInternal switched to lsInternal3 (Dirk in #1362)
    • Removed compiler detection macro in a header cleanup setting C++11 as the minimum (Dirk in #1364 closing #1363)
    • Variadic templates are now used unconditionally given C++11 (Dirk in #1367 closing #1366)
    • Remove RCPP_USING_CXX11 as a #define as C++11 is now a given (Dirk in #1369)
    • Additional cleanup for __cplusplus checks (Iñaki in #1371 fixing #1370)
    • Unordered set construction no longer needs a macro for the pre-C++11 case (Iñaki in #1372)
    • Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)
    • The Date(time)Vector classes now have a default ctor (Dirk in #1385 closing #1384)
    • Fixed an issue where Rcpp::Language would duplicate its arguments (Kevin in #1388, fixing #1386)
  • Changes in Rcpp Attributes:
    • The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
  • Changes in Rcpp Documentation:
    • Several typos were corrected in the NEWS file (Ben Bolker in #1354)
    • The Rcpp Libraries vignette mentions PACKAGE_types.h to declare types used in RcppExports.cpp (Dirk in #1355)
    • The vignettes bibliography file was updated to current package versions, and now uses doi references (Dirk in #1389)
  • Changes in Rcpp Deployment:
    • Rcpp.package.skeleton() creates URL and BugReports if given a GitHub username (Dirk in #1358)
    • R 4.4.* has been added to the CI matrix (Dirk in #1376)
    • Tests involving NA propagation are skipped under linux-arm64 as they are under macos-arm (Dirk in #1379 closing #1378)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

17 June 2025

Evgeni Golov: Arguing with an AI or how Evgeni tried to use CodeRabbit

Everybody is trying out AI assistants these days, so I figured I'd jump on that train and see how fast it derails. I went with CodeRabbit because I've seen it on YouTube; ads work, I guess. I am trying to answer the following questions: To reduce the amount of output and not to confuse contributors, CodeRabbit was configured to only do reviews on demand. What follows is a rather unscientific evaluation of CodeRabbit based on PRs in two Foreman-related repositories, looking at the summaries CodeRabbit posted as well as the comments/suggestions it had about the code. Ansible 2.19 support PR: theforeman/foreman-ansible-modules#1848 summary posted The summary CodeRabbit posted is technically correct.
This update introduces several changes across CI configuration, Ansible roles, plugins, and test playbooks. It expands CI test coverage to a new Ansible version, adjusts YAML key types in test variables, refines conditional logic in Ansible tasks, adds new default variables, and improves clarity and consistency in playbook task definitions and debug output.
Yeah, it does all of that, all right. But it kinda misses the point that the addition here is "Ansible 2.19 support", which starts with adding it to the CI matrix and then adjusting the code to actually work with that version. Also, the changes are not for "clarity" or "consistency", they are fixing bugs in the code that the older Ansible versions accepted, but the new one is more strict about. Then it adds a table with the changed files and what changed in there. To me, as the author, it felt redundant, and IMHO doesn't add any clarity to understand the changes. (And yes, same "clarity" vs bugfix mistake here, but that makes sense as it apparently misidentified the change reason.) And then the sequence diagrams. They probably help if you have a dedicated change to a library or a library consumer, but for this PR it's just noise, especially as it only covers two of the changes (addition of 2.19 to the test matrix and a change to the inventory plugin), completely ignoring other important parts. Overall verdict: noise, don't need this. comments posted CodeRabbit also posted 4 comments/suggestions to the changes. Guard against undefined result.task IMHO a valid suggestion, even if on the picky side, as I am not sure how to make it undefined here. I ended up implementing it, even if with slightly different (and IMHO better readable) syntax. Inconsistent pipeline in when for composite CV versions That one was funny! The original complaint was that the when condition used slightly different data manipulation than the data that was passed when the condition was true. The code was supposed to do "clean up the data, but only if there are any items left after removing the first 5, as we always want to keep 5 items". And I do agree with the analysis that it's badly maintainable code. But the suggested fix was to re-use the data in the variable we later use for performing the cleanup. While this is (to my surprise!) 
valid Ansible syntax, it didn't make the code much more readable as you need to go and look at the variable definition. The better suggestion then came from Ewoud: to compare the length of the data with the number we want to keep. Humans, so smart! But Ansible is not Ewoud's native turf, so he asked whether there is a more elegant way to count how much data we have than to use list count in Jinja (the data comes from a Python generator, so needs to be converted to a list first). And the AI helpfully suggested to use count instead! However, count is just an alias for length in Jinja, so it behaves identically and still needs a list. Luckily the AI quickly apologized for being wrong after being pointed at the Jinja source and didn't try to waste my time any further. Had I not known about the count alias, we'd have committed that suggestion and let CI fail before reverting again. Apply the same fix for non-composite CV versions The very same complaint was posted a few lines later, as the logic there is very similar, just with slightly different data to be filtered and cleaned up. Interestingly, here the suggestion also was to use the variable. But there is no variable with the data! The text actually says one needs to "define" it, yet the "committable suggestion" doesn't contain that part. Interestingly, when asked where it sees the "inconsistency" in that hunk, it said the inconsistency is with the composite case above. That however is nonsense: while we want to keep the same number of composite and non-composite CV versions, the data used in the task is different, it even gets consumed by a totally different playbook, so there can't be any real consistency between the branches. I ended up applying the same logic as suggested by Ewoud above, as that refactoring was possible in a consistent way. 
Ensure consistent naming for Oracle Linux subscription defaults One of the changes in Ansible 2.19 is that Ansible fails when there are undefined variables, even if they are only undefined for cases where they are unused. CodeRabbit complains that the names of the defaults I added are inconsistent. And that is technically correct. But those names are already used in other places in the code, so I'd have to refactor more to make it work properly. Once being pointed at the fact that the variables already exist, the AI is as usual quick to apologize, yay. add new parameters to the repository module PR: theforeman/foreman-ansible-modules#1860 summary posted Again, the summary is technically correct
The repository module was updated to support additional parameters for repository synchronization and authentication. New options were added for ansible collections, ostree, Python packages, and yum repositories, including authentication tokens, filtering controls, and version retention settings. All changes were limited to module documentation and argument specification.
But it doesn't add anything you'd not get from looking at the diff, especially as it contains a large documentation chunk explaining those parameters. No sequence diagram this time. That's a good thing! Overall verdict: noise (even if the amount is small), don't need this. comments posted CodeRabbit generated two comments for this PR. Interestingly, none of them overlapped with the issues ansible-lint and friends found. get rid of the FIXMEs Yepp, that's fair add validation for the new parameters Yepp, I forgot these (not intentionally!). The diff it suggests is nonsense, as it doesn't take into account the existing Ansible and Yum validations, but it clearly has read them as the style etc of the new ones matches. It also managed to group the parameters correctly by repository type, so it's something.
 if module.foreman_params['content_type'] != 'ansible_collection':
     invalid_list = [key for key in ['ansible_collection_requirements'] if key in module.foreman_params]
     if invalid_list:
         module.fail_json(msg="{0} can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ansible_collection specific parameters
+if module.foreman_params['content_type'] != 'ansible_collection':
+    invalid_list = [key for key in ['ansible_collection_auth_token', 'ansible_collection_auth_url'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="{0} can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ostree specific parameters
+if module.foreman_params['content_type'] != 'ostree':
+    invalid_list = [key for key in ['depth', 'exclude_refs', 'include_refs'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="{0} can only be used with content_type 'ostree'".format(",".join(invalid_list)))
+
+# Validate python package specific parameters
+if module.foreman_params['content_type'] != 'python':
+    invalid_list = [key for key in ['excludes', 'includes', 'package_types', 'keep_latest_packages'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="{0} can only be used with content_type 'python'".format(",".join(invalid_list)))
+
+# Validate yum specific parameter
+if module.foreman_params['content_type'] != 'yum' and 'upstream_authentication_token' in module.foreman_params:
+    module.fail_json(msg="upstream_authentication_token can only be used with content_type 'yum'")
Interestingly, it also said "Note: If 'python' is not a valid content_type, please adjust the validation accordingly." which is quite a hint at a bug in itself. The module currently does not even allow to create content_type=python repositories. That should have been more prominent, as it's a BUG! parameter persistence in obsah PR: theforeman/obsah#72 summary posted Mostly correct. It did misinterpret the change to a test playbook as an actual "behavior" change: "Introduced new playbook variables for database configuration". There is no database configuration in this repository, just the test playbook using the same metadata as a consumer of the library. Later on it does say "Playbook metadata and test fixtures", so it's unclear whether this is a misinterpretation or just badly summarized. As long as you also look at the diff, it won't confuse you, but if you're using the summary as the sole source of information (bad!) it would. This time the sequence diagram is actually useful, yay. Again, not 100% accurate: it's missing the fact that saving the parameters is hidden behind an "if enabled" flag, something it did represent correctly for loading them. Overall verdict: not really useful, don't need this. comments posted Here I was a bit surprised, especially as the nitpicks were useful! Persist-path should respect per-user state locations (nitpick) My original code used os.environ.get('OBSAH_PERSIST_PATH', '/var/lib/obsah/parameters.yaml') for the location of the persistence file. CodeRabbit correctly pointed out that this won't work for non-root users and one should respect XDG_STATE_HOME. Ewoud did point that out in his own review, so I am not sure whether CodeRabbit came up with this on its own, or also took the human comments into account. The suggested code seems fine too, it just doesn't use /var/lib/obsah at all anymore. 
This might be a good idea for the generic library we're working on here, and then be overridden to a static /var/lib path in a consumer (which always runs as root). In the end I did not implement it, but mostly because I was lazy and was sure we'd override it anyway. Positional parameters are silently excluded from persistence (nitpick) The library allows you to generate both positional (foo without --) and non-positional (--foo) parameters, but the code I wrote would only ever persist non-positional parameters. This was intentional, but there is no documentation of the intent in a comment, which the rabbit thought would be worth pointing out. It's a fair nitpick and I ended up adding a comment. Enforce FQDN validation for database_host The library has a way to perform type checking on passed parameters, and one of the supported types is "FQDN", so a fully qualified domain name, with dots and stuff. The test playbook I added has a database_host variable, but I didn't bother adding a type to it, as I don't really need any type checking here. While using "FQDN" might be a bit too strict here (technically a working database connection can also use a non-qualified name or an IP address), I was positively surprised by this suggestion. It shows that the rest of the repository was taken into context when preparing the suggestion. reset_args() can raise AttributeError when a key is absent This is a correct finding; the code is not written in a way that would survive if it tries to reset things that are not set. However, that's only true for the case where users pass in --reset-<parameter> without ever having set the parameter before. The complaint about the part where the parameter is part of the persisted set but not in the parsed args is wrong, as parsed args inherit from the persisted set. The suggested code is not well readable, so I ended up fixing it slightly differently. 
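As a rough, hypothetical sketch (the helper name and file layout are mine, not the actual obsah code), the XDG_STATE_HOME suggestion from the persist-path nitpick boils down to something like:

```python
import os

def persist_path(app="obsah", filename="parameters.yaml"):
    # Per-user state location following the XDG Base Directory spec:
    # use $XDG_STATE_HOME when set, else fall back to ~/.local/state,
    # instead of a hard-coded /var/lib/obsah that only root can write to.
    state_home = os.environ.get("XDG_STATE_HOME") or os.path.expanduser("~/.local/state")
    return os.path.join(state_home, app, filename)
```

A consumer that always runs as root could then still override this with a static /var/lib path, as discussed above.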
Persisted values bypass argparse type validation When persisting, I just yaml.safe_dump the parsed parameters, which means the YAML will contain native types like integers. The argparse documentation warns that the type checking argparse does only applies to strings and is skipped if you pass anything else (via default values). While correct, it doesn't really hurt here, as the persisting only happens after the values were type-checked. So there is not really a reason to type-check them again. Well, unless the type changes, anyway. Not sure what I'll do with this comment. consider using contextlib.suppress This was added when I asked CodeRabbit for a re-review after pushing some changes. Interestingly, the PR already contained try: except: pass code before, and it did not flag that. Also, the code suggestion contained import contextlib in the middle of the code, instead of at the head of the file. Who would do that?! But the comment as such was valid, so I fixed it in all places it is applicable, not only the one the rabbit found. workaround to ensure LCE and CV are always sent together PR: theforeman/foreman-ansible-modules#1867 summary posted
A workaround was added to the _update_entity method in the ForemanAnsibleModule class to ensure that when updating a host, both content_view_id and lifecycle_environment_id are always included together in the update payload. This prevents partial updates that could cause inconsistencies.
Partial updates are not a thing. The workaround is purely for the fact that Katello expects both parameters to be sent, even if only one of them needs an actual update. No diagram, good. Overall verdict: misleading summaries are bad! comments posted Given a small patch, there was only one comment. Implementation looks correct, but consider adding error handling for robustness. This reads correct on the first glance. More error handling is always better, right? But if you dig into the argumentation, you see it's wrong. Either: The AI accepted defeat once I asked it to analyze things in more detail, but why did I have to ask in the first place?! Summary Well, idk, really. Did the AI find things that humans did not find (or didn't bother to mention)? Yes. It's debatable whether these were useful (see e.g. the database_host example), but I tend to be in the "better to nitpick/suggest more and dismiss than overlook" team, so IMHO a positive win. Did the AI output help the humans with the review (useful summary etc)? In my opinion it did not. The summaries were either "lots of words, no real value" or plain wrong. The sequence diagrams were not useful either. Luckily all of that can be turned off in the settings, which is what I'd do if I'd continue using it. Did the AI output help the humans with the code (useful suggestions etc)? While the actual patches it posted were "meh" at best, there were useful findings that resulted in improvements to the code. Was the AI output misleading? Absolutely! The whole Jinja discussion would have been easier without the AI "help". Same applies for the "error handling" in the workaround PR. Was the AI output distracting? The output is certainly a lot, so yes, I think it can be distracting. As mentioned, I think dropping the summaries can make the experience less distracting. What does all that mean? 
I will disable the summaries for the repositories, but will leave the @coderabbitai review trigger active if someone wants an AI-assisted review. This won't be something that I'll force on our contributors and maintainers, but they surely can use it if they want. But I don't think I'll be using this myself on a regular basis. Yes, it can be made "usable". But so can be vim ;-) Also, I'd prefer to have a junior human asking all the questions and making bad suggestions, so they can learn from it, and not some planet burning machine.

1 June 2025

Guido Günther: Free Software Activities May 2025

Another short status update of what happened on my side last month. Larger blocks besides the Phosh 0.47 release are on-screen keyboard and cell broadcast improvements, work on separate volume streams, the switch of phoc to wlroots 0.19.0, and effort to make Phosh work on Debian's upcoming stable release (Trixie) out of the box. Trixie will ship with Phosh 0.46; if you want to try out 0.47 you can fetch it from Debian's experimental suite. See below for details on the above and more: phosh phoc phosh-mobile-settings phosh-osk-stub / stevia phosh-tour phosh-osk-data pfs xdg-desktop-portal-phosh phrog phosh-debs meta-phosh feedbackd feedbackd-device-themes gmobile GNOME calls Debian ModemManager osmo-cbc gsm-cell-testing mobile-broadband-provider-info Cellbroadcastd phosh-site pipewire wireplumber python-dbusmock Bugs Reviews This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread

29 May 2025

Debian XMPP Team: XMPP/Jabber Debian 13 Trixie News

Debian 13 "Trixie" entered full freeze on 2025-05-17, so this is a good time to take a look at some of the features that this release will bring. Here we will focus on packages related to XMPP, a.k.a. Jabber. XMPP is a universal communication protocol for instant messaging, push notifications, IoT, WebRTC, and social applications. It has existed since 1999, originally under the name "Jabber", and it has a diverse and active developer community. Clients Servers Libraries Gateways/Transports Not in Trixie

19 May 2025

Melissa Wen: A Look at the Latest Linux KMS Color API Developments on AMD and Intel

This week, I reviewed the latest available version of the Linux KMS Color API. Specifically, I explored the API proposed by Harry Wentland and Alex Hung (AMD) and their implementation for the AMD display driver, and tracked the parallel efforts of Uma Shankar and Chaitanya Kumar Borah (Intel) in bringing this plane color management to life. With this API in place, compositors will be able to provide better HDR support and advanced color management for Linux users. To get a hands-on feel for the API's potential, I developed a fork of drm_info compatible with the new color properties. This allowed me to visualize the display hardware color management capabilities being exposed. If you're curious and want to peek behind the curtain, you can find my exploratory work on the drm_info/kms_color branch. The README there will guide you through the simple compilation and installation process. Note: You will need to update libdrm to match the proposed API. You can find an updated version in my personal repository here. To avoid potential conflicts with your official libdrm installation, you can compile and install it in a local directory. Then, use the following command: export LD_LIBRARY_PATH="/usr/local/lib/" In this post, I invite you to familiarize yourself with the new API that is about to be released. You can start as I did below: just deploy a custom kernel with the necessary patches and visualize the interface with the help of drm_info. Or, better yet, if you are a userspace developer, you can start developing use cases by experimenting with it. The more eyes the better.

KMS Color API on AMD The great news is that AMD's driver implementation for plane color operations is being developed right alongside their Linux KMS Color API proposal, so it's easy to apply to your kernel branch and check it out. You can find details of their progress in AMD's series. I just needed to compile a custom kernel with this series applied, intentionally leaving out the AMD_PRIVATE_COLOR flag. The AMD_PRIVATE_COLOR flag guards driver-specific color plane properties, which experimentally expose hardware capabilities while we don't have the generic KMS plane color management interface available. If you don't know or don't remember the details of AMD driver-specific color properties, you can learn more about this work in my blog posts [1] [2] [3]. As driver-specific color properties and KMS colorops are redundant, the driver only advertises one of them, as you can see in AMD workaround patch 24. So, with the custom kernel image ready, I installed it on a system powered by AMD DCN3 hardware (i.e. my Steam Deck). Using my custom drm_info, I could clearly see the Plane Color Pipeline with eight color operations as below:
 "COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 258} = Bypass
     Bypass
     Color Pipeline 258
         Color Operation 258
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
         Color Operation 263
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = Multiplier
             "BYPASS" (atomic): range [0, 1] = 1
             "MULTIPLIER" (atomic): range [0, UINT64_MAX] = 0
         Color Operation 268
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3x4 Matrix
             "BYPASS" (atomic): range [0, 1] = 1
             "DATA" (atomic): blob = 0
         Color Operation 273
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum {sRGB Inverse EOTF, PQ 125 Inverse EOTF, BT.2020 OETF} = sRGB Inverse EOTF
         Color Operation 278
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
             "LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
             "DATA" (atomic): blob = 0
         Color Operation 285
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 3D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 17
             "LUT3D_INTERPOLATION" (immutable): enum {Tetrahedral} = Tetrahedral
             "DATA" (atomic): blob = 0
         Color Operation 292
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D Curve
             "BYPASS" (atomic): range [0, 1] = 1
             "CURVE_1D_TYPE" (atomic): enum {sRGB EOTF, PQ 125 EOTF, BT.2020 Inverse OETF} = sRGB EOTF
         Color Operation 297
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, Multiplier, 3D LUT} = 1D LUT
             "BYPASS" (atomic): range [0, 1] = 1
             "SIZE" (atomic, immutable): range [0, UINT32_MAX] = 4096
             "LUT1D_INTERPOLATION" (immutable): enum {Linear} = Linear
             "DATA" (atomic): blob = 0
Note that Gamescope is currently using AMD driver-specific color properties implemented by me, Autumn Ashton and Harry Wentland. It doesn't use this KMS Color API, and therefore COLOR_PIPELINE is set to Bypass. Once the API is accepted upstream, all users of the driver-specific API (including Gamescope) should switch to the KMS generic API, as this will be the official plane color management interface of the Linux kernel.

KMS Color API on Intel On the Intel side, the driver implementation available upstream was built upon an earlier iteration of the API. This meant I had to apply a few tweaks to bring it in line with the latest specifications. You can explore their latest work here. For simpler handling, combining the V9 of the Linux Color API, Intel's contributions, and my necessary adjustments, check out my dedicated branch. I then compiled a kernel from this integrated branch and deployed it on a system featuring Intel TigerLake GT2 graphics. Running my custom drm_info revealed a Plane Color Pipeline with three color operations as follows:
 "COLOR_PIPELINE" (atomic): enum {Bypass, Color Pipeline 480} = Bypass
     Bypass
     Color Pipeline 480
         Color Operation 480
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
             "BYPASS" (atomic): range [0, 1] = 1
             "HW_CAPS" (atomic, immutable): blob = 484
             "DATA" (atomic): blob = 0
         Color Operation 487
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 3x3 Matrix
             "BYPASS" (atomic): range [0, 1] = 1
             "DATA" (atomic): blob = 0
         Color Operation 492
             "TYPE" (immutable): enum {1D Curve, 1D LUT, 3x4 Matrix, 1D LUT Mult Seg, 3x3 Matrix, Multiplier, 3D LUT} = 1D LUT Mult Seg
             "BYPASS" (atomic): range [0, 1] = 1
             "HW_CAPS" (atomic, immutable): blob = 496
             "DATA" (atomic): blob = 0
Observe that Intel's approach introduces additional properties like HW_CAPS at the color operation level, along with two new color operation types: 1D LUT with Multiple Segments and 3x3 Matrix. It's important to remember that this implementation is based on an earlier stage of the KMS Color API and is awaiting review.

A Shout-Out to Those Who Made This Happen I'm impressed by the solid implementation and clear direction of the V9 of the KMS Color API. It aligns with the many insightful discussions we've had over the past years. A huge thank you to Harry Wentland and Alex Hung for their dedication in bringing this to fruition! Beyond their efforts, I deeply appreciate Uma and Chaitanya's commitment to updating Intel's driver implementation to align with the freshest version of the KMS Color API. The collaborative spirit of the AMD and Intel developers in sharing their color pipeline work upstream is invaluable. We're now gaining a much clearer picture of the color capabilities embedded in modern display hardware, all thanks to their hard work, comprehensive documentation, and engaging discussions. Finally, thanks to all the userspace developers, color science experts, and kernel developers from various vendors who actively participate in the upstream discussions, meetings, workshops, each iteration of this API, and the crucial code review process. I'm happy to be part of the final stages of this long kernel journey, but I know that when it comes to colors, one step is completed only for new challenges to be unlocked. Looking forward to meeting you at this year's Linux Display Next hackfest, organized by AMD in Toronto, to further discuss HDR, advanced color management, and other display trends.

13 May 2025

Ravi Dwivedi: KDE India Conference 2025

Last month, I attended the KDE India conference in Gandhinagar, Gujarat from the 4th to the 6th of April. I made up my mind to attend when Sahil told me about his plans to attend and give a talk. A day after my talk submission, the organizer Bhushan contacted me on Matrix and informed me that my talk had been accepted. I was also informed that KDE would cover my travel and accommodation expenses. So, I planned to attend the conference at this point. I am a longtime KDE user, so why not ;) I arrived in Ahmedabad, the twin city of Gandhinagar, a day before the conference. The first thing that struck me as soon as I came out of the Ahmedabad airport was the heat. I felt as if I was being cooked, exactly how Bhushan had put it earlier in the group chat. I took a taxi to get to my hotel, which was close to the conference venue. Later that afternoon, I met Bhushan and Joseph. Joseph lived in Germany. Bhushan was taking him to get a SIM card, so I tagged along and got to roam around. Joseph was unsure about where to go after the conference, so I asked him what he wanted out of his trip and had conversations along that line. Later, Vishal convinced him to go to Lucknow. Since he was adamant about taking the train, I booked a Tatkal train ticket for him to Lucknow. He was curious about how Tatkal booking works and watched me in amusement while I was booking the ticket. The 4th of April marked the first day of the conference, with around 25 attendees. Bhushan started the day with an overview of KDE conferences in India, followed by Vishal, who discussed FOSS United's activities. After the lunch, Joseph gave an overview of his campaign to help people switch from Windows to GNU/Linux for environmental and security reasons. He continued his session in detail the next day.
Conference hall Conference hall
A key takeaway for me from Joseph's session was the idea pointed out by Adwaith: marketing GNU/Linux as a cheap alternative may not attract as much attention as marketing it as a status symbol. He gave the example of how the Tata Nano didn't do well in the Indian market due to being perceived as a poor person's car. My talk was scheduled for the evening of the first day. I hadn't prepared any slides because I wanted to make my session interactive. During my talk, I did an activity with the attendees to demonstrate the federated nature of XMPP messaging, of which Prav is a part. After the talk, I got a lot of questions, signalling engagement. The audience was cooperative (just like Prav ;)), contrary to my expectations (I thought they would be tired and sleepy). On the third day, I did a demo on editing OpenStreetMap (referred to as OSM in short) using the iD editor. It involved adding points to OSM based on the students' suggestions. Since my computer didn't have an HDMI port, I used Subin's computer, and he logged into his OSM account for my session. Therefore, any mistakes I made will be under Subin's name. :) On the third day, I also attended Aaruni's talk about backing up a GNU/Linux system. This was the talk that resonated with me the most. He suggested formatting the system with the btrfs file system during the installation, which helps in taking snapshots of the system and provides an easy way to roll back to a previous version if, for example, a file is accidentally deleted. I have tried many backup techniques, including this one, but I never tried backing up on the internal disk. I'll certainly give this a try. A conference is not only about the talks; that's why we had a Prav table as well ;) Just kidding. What I really mean is that a conference is more about interactions than talks. Since the conference was a three-day affair, attendees got plenty of time to bond and share ideas.
Prav stall at the conference Prav stall at the conference
Conference group photo Conference group photo
After the conference, Bhushan took us to Adalaj Stepwell, an attraction near Gandhinagar. Upon entering the complex, we saw a park where there were many langurs. Going further, there were stairs that led down to a well. I guess this is why it is called a stepwell.
Adalaj Stepwell Adalaj Stepwell
Later that day, we had Gujarati Thali for dinner. It was an all-you-can-eat buffet and was reasonably priced at 300 rupees per plate. Aamras (mango juice) was the highlight for me. This was the only time we had Gujarati food during this visit. After the dinner, Aaruni dropped Sahil and me off at the airport. The hospitality was superb; for instance, in addition to Aaruni dropping us off, Bhushan also picked up some of the attendees from the airport. Finally, I would like to thank KDE for sponsoring my travel and accommodation costs. Let's wrap up this post here and meet in the next one. Thanks to contrapunctus and Joseph for proofreading.

29 April 2025

Freexian Collaborators: Freexian partners with Invisible Things Lab to extend security support for Xen hypervisor

Freexian is pleased to announce a partnership with Invisible Things Lab to extend the security support of the Xen type-1 hypervisor version 4.17. Three years after its initial release, Xen 4.17, the version available in Debian 12 "bookworm", will reach end-of-security-support status upstream in December 2025. The aim of our partnership with Invisible Things is to extend the security support until, at least, July 2027. We may also explore the possibility of extending the support until June 2028, to coincide with the end of the Debian 12 LTS support period. The security support of Xen in Debian, from Debian 8 "jessie" until Debian 11 "bullseye", ended before the end of the life cycle of the release. We therefore aim to significantly improve the situation of Xen in Debian 12. As with similar efforts, we would like to mention that this is an experiment and that we will do our best to make it a success. We also aim to extend the security support for Xen versions included in future Debian releases, including Debian 13 "trixie". In the long term, we hope that this effort will ultimately allow the Xen Project to increase the official security support period for Xen releases from the current three years to at least five years, with the extra work being funded by the community of companies benefiting from the longer support period. If your company relies on Xen and wants to help sustain LTS versions of Xen, please reach out to us. For companies using Debian, the simplest way is to subscribe to Freexian's Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Xen LTS when you send in your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to help you contribute. In the meantime, this initiative has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the entire community of Debian and Xen users will benefit from this initiative.
For any queries you might have, please don't hesitate to contact us at sales@freexian.com.

About Invisible Things Lab Invisible Things Lab (ITL) offers low-level security consulting and auditing services for x86 virtualization technologies; C, C++, and assembly codebases; Intel SGX; binary exploitation and mitigations; and more. ITL also specializes in Qubes OS and Gramine consulting, including deployment, debugging, and feature development.

1 April 2025

Guido Günther: Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now better tunable, plus some more. See below for details (no April 1st joke in there, I promise): phosh phoc phosh-osk-stub phosh-mobile-settings phosh-tour pfs xdg-desktop-portal-gtk xdg-desktop-portal-phosh meta-phosh feedbackd feedbackd-device-themes gmobile livi Debian git-buildpackage feedbackd-device-themes wlroots ModemManager Tuba xdg-spec gnome-calls Reviews This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread
