
28 March 2024

Scarlett Gately Moore: Kubuntu, KDE Report. In Loving Memory of my Son.

Personal: As many of you know, I lost my beloved son March 9th. This has hit me really hard, but I am staying strong and holding on to all the wonderful memories I have. He grew up to be an amazing man, devoted Christian and wonderful father. He was loved by everyone who knew him and will be truly missed by us all. I have had folks ask me how they can help. He left behind his 7 year old son Mason. Mason was Billy's world and I would like to make sure Mason is taken care of. I have set up a GoFundMe for Mason and all proceeds will go to the future care of him. https://gofund.me/25dbff0c

Work report Kubuntu: Bug bashing! I am triaging all the bugs for Plasma, which can be seen here: https://bugs.launchpad.net/plasma-5.27/+bug/2053125 I am happy to report many of the remaining bugs have been fixed in the latest bug fix release 5.27.11. I prepared https://kde.org/announcements/plasma/5/5.27.11/ and Rik uploaded it to the archive, thank you. Unfortunately, this and several other key fixes are stuck due to the time_t64 transition, which you can read about here: https://wiki.debian.org/ReleaseGoals/64bit-time . It is the biggest transition in Debian/Ubuntu history and it couldn't have come at a worse time. We are aware our ISO installer is currently broken; Calamares is one of those things stuck in this transition. There is a workaround in the comments of the bug report: https://bugs.launchpad.net/ubuntu/+source/calamares/+bug/2054795 I fixed an issue with plasma-welcome. I found the fix for emojis and Aaron has kindly moved this forward with the fontconfig maintainer. Thanks! I have received a https://kfocus.org/spec/spec-ir14.html laptop and it is truly a great machine and is now my daily driver. A big thank you to the Kfocus team! I can't wait to show it off at https://linuxfestnorthwest.org/. KDE Snaps: You will see the activity in this area ramp back up as the KDEneon Core project is finally a go! I will participate in the project with part time status and get everyone in the Enokia team up to speed with my snap knowledge, help prepare the qt6/kf6 transition, package Plasma, and most importantly I will focus on documentation for future contributors. I have created the (now split) qt6 with KDE patchset support and KDE Frameworks 6 SDK and runtime snaps. I have made the kde-neon-6 extension and the PR is in: https://github.com/canonical/snapcraft/pull/4698 . Future work on the extension will include multiple version track support and core24 support.

I have successfully created our first qt6/kf6 snap, Ark. It will show up in the store once all the required bits have been merged and published. Thank you for stopping by. ~Scarlett

18 March 2024

Simon Josefsson: Apt archive mirrors in Git-LFS

My effort to improve transparency and confidence of public apt archives continues. I started to work on this in Apt Archive Transparency, in which I mention the debdistget project in passing. Debdistget is responsible for mirroring index files for some public apt archives. I've realized that having a publicly auditable and preserved mirror of the apt repositories is central to being able to do apt transparency work, so the debdistget project has become more central to my project than I thought. Currently I track Trisquel, PureOS, Gnuinos and their upstreams Ubuntu, Debian and Devuan.

Debdistget downloads Release/Package/Sources files and stores them in a git repository published on GitLab. Due to size constraints, it uses two repositories: one for the Release/InRelease files (which are small) and one that also includes the Package/Sources files (which are large). See for example the repository for Trisquel release files and the Trisquel package/sources files. Repositories for all distributions can be found in the debdistutils archives GitLab sub-group. The reason for splitting into two repositories was that the git repository for the combined files became large, and that some of my use-cases only needed the release files. Currently the repositories with packages (which contain a couple of months' worth of data now) are 9GB for Ubuntu, 2.5GB for Trisquel/Debian/PureOS, 970MB for Devuan and 450MB for Gnuinos. The repository size is correlated to the size of the archive (for the initial import) plus the frequency and size of updates. Ubuntu's use of Apt Phased Updates (which triggers a higher churn of Packages file modifications) appears to be the primary reason for its larger size.

Working with large Git repositories is inefficient and the GitLab CI/CD jobs generate quite some network traffic downloading the git repository over and over again. The heaviest user is the debdistdiff project, which downloads all distribution package repositories to do diff operations on the package lists between distributions. The daily job takes around 80 minutes to run, with the majority of the time spent on downloading the archives. Yes, I know I could look into runner-side caching, but I dislike the complexity caused by caching. Fortunately not all use-cases require the package files. The debdistcanary project only needs the Release/InRelease files, in order to commit signatures to the Sigstore and Sigsum transparency logs. These jobs still run fairly quickly, but watching the repository size growth worries me. Currently these repositories are at Debian 440MB, PureOS 130MB, Ubuntu/Devuan 90MB, Trisquel 12MB, Gnuinos 2MB. Here I believe the main size correlation is update frequency, and Debian is large because I track the volatile unstable suite.

So I hit a scalability limit with my first approach. A couple of months ago I solved this by discarding and resetting these archival repositories. The GitLab CI/CD jobs were fast again and all was well. However this meant discarding precious historic information. A couple of days ago I was reaching the limits of practicality again, and started to explore ways to fix this. I like having data stored in git (it allows easy integration with software integrity tools such as GnuPG and Sigstore, and the git log provides a kind of temporal ordering of data), so switching to a traditional on-disk database felt like giving up on nice properties. So I started to learn about Git-LFS, and understanding that it is able to handle multi-GB worth of data looked promising.
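As an aside, the core of such an index-mirroring job is simple. Here is a minimal sketch of the idea; the mirror URL, suite list and repository path are placeholder assumptions, not debdistget's actual configuration:

#!/bin/sh
# Sketch: snapshot apt index files into a git history (placeholders, not debdistget itself).
set -eu
MIRROR="https://deb.debian.org/debian"   # assumed mirror
SUITES="bookworm bookworm-updates"       # assumed suites
REPO="$HOME/apt-index-mirror"            # assumed local repository path

mkdir -p "$REPO" && cd "$REPO"
[ -d .git ] || git init -q

for suite in $SUITES; do
    mkdir -p "dists/$suite"
    # InRelease is the signed index of indexes; keeping every published version
    # preserves the signatures needed for later transparency checks.
    curl -fsS -o "dists/$suite/InRelease" "$MIRROR/dists/$suite/InRelease"
done

git add dists
git commit -q -m "Update InRelease files $(date -u +%Y-%m-%dT%H:%M:%SZ)" || true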
Fairly quickly I scripted up a GitLab CI/CD job that incrementally updates the Release/Package/Sources files in a git repository that uses Git-LFS to store all the files. The repository size is now at Ubuntu 650kb, Debian 300kb, Trisquel 50kb, Devuan 250kb, PureOS 172kb and Gnuinos 17kb. As can be expected, jobs are quick to clone the git archives: debdistdiff pipelines went from a run-time of 80 minutes down to 10 minutes, which correlates more reasonably with the archive size and CPU run-time. The LFS storage size for those repositories is at Ubuntu 15GB, Debian 8GB, Trisquel 1.7GB, Devuan 1.1GB, PureOS/Gnuinos 420MB. This is for a couple of days' worth of data.

It seems native Git is better at compressing/deduplicating data than Git-LFS is: the combined size for Ubuntu is already 15GB for a couple of days of data compared to 8GB for a couple of months' worth of data with pure Git. This may be a sub-optimal implementation of Git-LFS in GitLab, but it does worry me that this new approach will be difficult to scale, too. At some level the difference is understandable: Git-LFS probably stores two different Packages files of around 90MB each for Trisquel as two 90MB files, while native Git would store one compressed version of the 90MB file and one relatively small patch to turn the old file into the next one. So the Git-LFS approach surprisingly scales less well for overall storage size. Still, the original repository is much smaller, and you usually don't have to pull all LFS files anyway. So it is a net win.

Throughout this work, I kept thinking about how my approach relates to Debian's snapshot service. Ultimately what I would want is a combination of these two services. To have a good foundation to do transparency work I would want to have a collection of all Release/Packages/Sources files ever published, and ultimately also the source code and binaries. While it makes sense to start on the latest stable releases of distributions, this effort should scale backwards in time as well. For reproducing binaries from source code, I need to be able to securely find earlier versions of binary packages used for rebuilds. So I need to import all the Release/Packages/Sources files from snapshot into my repositories. The latency when retrieving files from that server is high, and I haven't been able to find an efficient/parallelized way to download the files. If I'm able to finish this, I would have confidence that my new Git-LFS based approach to storing these files will scale over many years to come. This remains to be seen. Perhaps the repository has to be split up per release or per architecture or similar.

Another factor is storage costs. While the git repository size for a Git-LFS based repository with files from several years may be possible to sustain, the Git-LFS storage size surely won't be. It seems GitLab charges the same for files in repositories and in Git-LFS, and it is around $500 per 100GB per year. It may be possible to set up a separate Git-LFS backend not hosted at GitLab to serve the LFS files. Does anyone know of a suitable server implementation for this? I had a quick look at the Git-LFS implementation list and it seems the closest reasonable approach would be to set up the Gitea-clone Forgejo as a self-hosted server. Perhaps a cloud storage approach à la S3 is the way to go? The cost to host this on GitLab will be manageable for up to ~1TB ($5000/year) but scaling it to storing say 500TB of data would mean a yearly fee of $2.5M, which seems like poor value for the money.
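One way to decouple the LFS storage from GitLab would be a committed .lfsconfig file that points the LFS endpoint at a separately hosted server while the git repository itself stays where it is. A sketch, with a placeholder URL rather than any real server:

# Point Git-LFS at an external object store; the URL is a placeholder, not a real server.
git config -f .lfsconfig lfs.url "https://lfs.example.org/debdistutils/archives/mirror"
git add .lfsconfig
git commit -m "Use an external Git-LFS endpoint."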
I realized that ultimately I would want a git repository locally with the entire content of all apt archives, including their binary and source packages, ever published. The storage requirements for a service like snapshot (~300TB of data?) are today not prohibitively expensive: 20TB disks are $500 a piece, so a storage enclosure with 36 disks would be around $18,000 for 720TB, and using RAID1 means 360TB, which is a good start. While I have heard about ~TB-sized Git-LFS repositories, would Git-LFS scale to 1PB? Perhaps the size of a git repository with many millions of Git-LFS pointer files will become unmanageable?

To get started on this approach, I decided to import a mirror of Debian's bookworm for amd64 into a Git-LFS repository. That is around 175GB, so reasonably cheap to host even on GitLab ($1000/year for 200GB). Having this repository publicly available will make it possible to write software that uses this approach (e.g., porting debdistreproduce), to find out if this is useful and if it could scale. Distributing the apt repository via Git-LFS would also enable other interesting ideas for protecting the data. Consider configuring apt to use a local file:// URL to this git repository, and verifying the git checkout using some method similar to Guix's approach to trusting git content or Sigstore's gitsign.

A naive push of the 175GB archive in a single git commit ran into pack size limitations (remote: fatal: pack exceeds maximum allowed size (4.88 GiB)); however, breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:
git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/*; do
echo $d;
time git add $d;
git commit -m"Add $d." -a
git push
done
The resulting repository size is around 27MB, with Git LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases for all architectures may lead to a too large git repository (>1GB). So maybe one repository per release? These repositories could also be split up on a subset of pool/ files, or there could be one repository per release per architecture or sources.

Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian's snapshot service are currently using SHA1. For Git there is a SHA-256 transition and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256, but Git internally uses SHA1, as does the Debian snapshot service. What do you think? Happy Hacking!
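As a small aside on the SHA1 question: newer Git (2.29 or later) can already create repositories that use SHA-256 object names, although forge support for them is still limited. For example:

# Create a repository with SHA-256 object names instead of SHA-1 (git >= 2.29).
git init --object-format=sha256 sha256-test
cd sha256-test
git config extensions.objectformat   # prints "sha256"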

17 March 2024

Thomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024
Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I've toyed around with it a little but not yet seriously used it, as I'm currently not coding. MicroVM is a Nix-based project to configure and run minimal VMs. It can mount and thus reuse the host's nix store inside the VM, and therefore has a very small disk footprint. I use MicroVM on a Debian system using the nix package manager. The MicroVM author uses the project to host production services. Otherwise I consider it also a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as my main system. The guest's root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots. I defined the VM as a nix flake since this is how I started from the MicroVM project's example:
 
{
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ({ pkgs, ... }: {
              environment.persistence.${persistencePath} = {
                hideMounts = true;
                users.${user} = {
                  directories = [
                    "git" ".stack"
                  ];
                };
              };
              environment.sessionVariables = {
                TERM = "screen-256color";
              };
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
              microvm = {
                forwardPorts = [
                  { from = "host"; host.port = 2222; guest.port = 22; }
                  { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                  { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
                ];
                mem = 4096;
                shares = [ {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                } {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                } ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
              };
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
              nix.settings = {
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
              };
              security.sudo = {
                enable = true;
                wheelNeedsPassword = false;
              };
              services.getty.autologinUser = user;
              services.openssh = {
                enable = true;
              };
              system.stateVersion = "24.11";
              systemd.services.loadnixdb = {
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat ${persistencePath}/nix-store-db-dump | nix-store --load-db";
              };
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.${user} = {
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
              };
              users.users.root.password = "";
              users.groups.user = { };
            })
          ];
        };
    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}
I start the microVM with a templated systemd user service:
[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
The above service definition creates a dump of the host's nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlayed nix store that would be preferable to this hack. Finally the microvm is started inside a tmux session named "microvm". This way I can use the VM with SSH or through the console and also access the qemu console. And for completeness, the virtiofsd service:
[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:
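For reference, here is how these templated user units might be installed and started. The unit file names and locations below are my assumption, as the post does not state them:

# Assumed names: microvm@.service and microvm-virtiofsd-persistent@.service.
# Also assumes the flake already exists at ~/.local/share/microvm/vms/haskell/flake/flake.nix.
mkdir -p ~/.config/systemd/user
cp microvm@.service microvm-virtiofsd-persistent@.service ~/.config/systemd/user/
systemctl --user daemon-reload
# The template instance (%i) selects the VM, here the "haskell" VM from the flake.
systemctl --user start microvm@haskell.service
# Attach to the tmux session the service window runs in.
tmux attach -t microvm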

12 March 2024

Russell Coker: Android vs FOSS Phones

To achieve my aims regarding Convergence of mobile phone and PC [1] I need something a bit bigger than the 4G of RAM that's in the PinePhone Pro [2]. The PinePhonePro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC seems to compare well to the ones used in the Pixel and Pixel 2 phones that were released in the same time period, so it's not a bad SoC, but it doesn't compare well to more recent Android devices and it also isn't a great fit for the non-Android things I want to do. Also the PinePhonePro and Librem5 have relatively short battery life, so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.

PostmarketOS
One thing I'm definitely not going to do is attempt a full port of Linux to a different platform or kernel support etc. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered; the PostmarketOS Wiki page of supported devices [3] was the first place I looked. The main supported devices are the PinePhone (not Pro) and the Librem5, both of which are under-powered. For the community devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I'd have to get dock functionality via wifi, not impossible but not great) then I'm left with the SHIFT6mq which was never sold in Australia and the Xiaomi POCO F1 which doesn't appear to be available on ebay.

LineageOS
The libhybris libraries are a compatibility layer between Android and glibc programs [4], which includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS supported devices that seem to have a useful feature set, are available in Australia, and could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.
Phone RAM External Display Price
Edge 20 Pro [5] 6-12G HDMI $500 not many on sale
Edge S aka moto G100 [6] 6-8G HDMI $500 to $600+
Fairphone 4 6-8G USBC-DP $1000+
Nubia Red Magic 5G 8-16G USBC-DP $600+
The LineageOS device search page [9] allows searching by kernel version. There are no phones with a 6.6 (2023) or 6.1 (2022) Linux kernel and only the Pixel 8/8Pro and the OnePlus 11 5G run 5.15 (2021). There are 8 Google devices (Pixel 6/7 and a tablet) running 5.10 (2020), 18 devices running 5.4 (2019), and 32 devices running 4.19 (2018). There are 186 devices running kernels older than 4.19 which aren't in the kernel.org supported release list [10]. The Pixel 8 Pro with 12G of RAM and the OnePlus 11 5G with 16G of RAM are appealing as portable desktop computers; until recently my main laptop had 8G of RAM. But they cost over $1000 second hand compared to $359 for my latest laptop. FOSDEM had an interesting lecture from two Fairphone employees about what they are doing to make phone production fairer for workers and less harmful for the environment [11]. But they don't have the market power that companies like Google have to tell SoC vendors what they want.

IP Laws and Practices
Bunnie wrote an insightful and informative blog post about the difference between intellectual property practices in China and US-influenced countries and his efforts to reverse engineer a commonly used Chinese SoC [12]. This is a major factor in the lack of support for FOSS on phones and other devices.

Droidian and Buying a Note 9
FOSDEM 2023 had a lecture about the Droidian project, which runs Debian with firmware and drivers from Android to make a usable mostly-FOSS system [13]. It's interesting how they use containers for the necessary Android apps. Here is the list of devices supported by Droidian [14]. Two notable entries in the list of supported devices are the Volla Phone and Volla Phone 22 from Volla, a company dedicated to making open Android based devices [15]. But they don't seem to be available on ebay and the new price of the Volla Phone 22 is €452 ($AU750), which is more than I want to pay for a device that isn't as open as the Pine64 and Purism products. The Volla Phone 22 only has 4G of RAM.
Phone RAM Price Issues
Note 9 128G/512G 6G/8G <$300 Not supporting external display
Galaxy S9+ 6G <$300 Not supporting external display
Xperia 5 6G >$300 Hotspot partly working
OnePlus 3T 6G $200 to $400+ Photos not working
I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian. It has some screen burn, but that's OK for a test system, and if I end up using it seriously I'll just buy another that's in as-new condition. With no support for an external display I'll need to set up a software dock to do Convergence, but that's not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I'll use the 512G/8G model for that and use the cheap one for testing.

Mobian
I should have checked the Mobian list first as it's the main Debian variant for phones. From the Mobian Devices list [16] the OnePlus 6T has 8G of RAM or more but isn't available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn't seem to be available on ebay. The Shift6mq is made by a German company with similar aims to the Fairphone [17]; it looks nice but costs €577, which is more than I want to spend, and it isn't on the officially supported list.

Smart Watches
The same issues apply to smart watches. AsteroidOS is a free smart watch OS designed for closed hardware [18]. I don't have time to get involved in this sort of thing though, I can't hack on every device I use.

4 March 2024

Paulo Henrique de Lima Santana: Bits from FOSDEM 2023 and 2024

Link to the Portuguese version

Intro
Since 2019, I have traveled to Brussels at the beginning of the year to join FOSDEM, considered the largest and most important Free Software event in Europe. The 2024 edition was the fourth in-person edition in a row that I joined (2021 and 2022 did not happen due to COVID-19), and always with the financial help of Debian, which kindly paid my flight tickets after receiving my travel sponsorship request, approved by the Debian Leader. In 2020 I wrote several posts with a very complete report of the days I spent in Brussels. But in 2023 I didn't write anything, and because last year and this year I coordinated a room dedicated to translations of Free Software and Open Source projects, I'm going to take the opportunity to write about these two years and what my experience was like. After my first trip to FOSDEM, I started to think that I could join in a more active way than just as a regular attendee, so I wanted to propose a talk to one of the rooms. But then I thought that instead of proposing a talk, I could organize a room for talks :-) and with the topic of translations, which is something that I'm very interested in, because I've been helping to translate Debian into Portuguese for a few years.

Joining FOSDEM 2023
In the second half of 2022 I did some research and saw that there had never been a room dedicated to translations, so when the FOSDEM organization opened the call to receive room proposals (called DevRoom) for the 2023 edition, I sent a proposal for a translation room and it was accepted! After the room was confirmed, the next step was for me, as room coordinator, to publicize the call for talk proposals. I spent a few weeks hoping to find out if I would receive a good number of proposals or if it would be a failure. But to my happiness, I received eight proposals, and I had to select six for the room's schedule due to time constraints. FOSDEM 2023 took place from February 4th to 5th and the translation devroom was scheduled on the second day in the afternoon. The talks held in the room were these below, and for each of them you can watch the video recording. And on the first day of FOSDEM I was at the Debian stand selling the t-shirts that I had brought from Brazil. People from France were also there selling other products and it was cool to interact with people who visited the booth to buy and/or talk about Debian.

Joining FOSDEM 2024
The 2023 result motivated me to propose the translation devroom again when the FOSDEM 2024 organization opened the call for rooms. I was waiting to find out if the FOSDEM organization would accept a room on this topic for the second year in a row, and to my delight, my proposal was accepted again :-) This time I received 11 proposals! And again due to time constraints, I had to select six for the room's schedule grid. FOSDEM 2024 took place from February 3rd to 4th and the translation devroom was scheduled for the second day again, but this time in the morning. The talks held in the room were these below, and for each of them you can watch the video recording. This time I didn't help at the Debian stand because I couldn't bring t-shirts to sell from Brazil. So I just stopped by and talked to some people who were there, like some DDs. But I volunteered for a few hours to operate the streaming camera in one of the main rooms.

Conclusion
The topics of the talks in these two years were quite diverse, and all the lectures were really very good. In the 12 talks we can see how translations happen in some projects such as KDE, PostgreSQL, Debian and Mattermost. We had presentations of tools such as LibreTranslate and Weblate, as well as scripts, AI, and data models. And also reports on the work carried out by communities in Africa, China and Indonesia. The rooms were full for some talks and a little emptier for others, but I was very satisfied with the final result of these two years. I leave my special thanks to Jonathan Carter, the Debian Leader, who approved my flight ticket requests so that I could join FOSDEM 2023 and 2024. This help was essential to making my trip to Brussels possible, because flight tickets are not cheap at all. I would also like to thank my wife Jandira, who has been my travel partner :-) As there has been an increase in the number of proposals received, I believe that interest in the translations devroom is growing. So I intend to send the devroom proposal to FOSDEM 2025, and if it is accepted, wait for the future Debian Leader to approve helping me with the flight tickets again. We'll see.

1 March 2024

Scarlett Gately Moore: Kubuntu: Week 4, Feature Freeze and what comes next.

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release! While we couldn't go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again, I firmly stand by the Kubuntu Council in the decision to stay with the rock solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare (KDE 4 anyone?). So without further ado, my weekly wrap-up. Kubuntu: Continuing efforts from last week (Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.), it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11 as it is a bug fix release, and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are: What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in Launchpad to easily track our bugs, as right now they are all over the place and hard to track down. Snaps: I have started the MRs to fix our latest 23.08.5 snaps, I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSOC ideas that I really like and will mentor, hopefully we are not too late. Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate Thank you!

23 February 2024

Scarlett Gately Moore: Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.

Witch Wells AZ Sunset
It has been a very busy 3 weeks here in Kubuntu! Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/ Work done for the upcoming 24.04 LTS release: We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/ Debian: I have uploaded to NEW the following packages: I am currently working on: KDE Snaps: KDE applications 23.08.5 have been uploaded to the Candidate channel, testing help welcome. https://snapcraft.io/search?q=KDE I have also been working on bug fixes, time allowing. My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/ Thank you for stopping by! ~Scarlett

2 February 2024

Scarlett Gately Moore: Some exciting news! Kubuntu: I'm back!!!

It's official, the Kubuntu Council has hired me part time to work on the 24.04 LTS release, preparation for Plasma 6, and to bring life back into the distribution. First I want to thank the Kubuntu Council for this opportunity and I plan a long and successful journey together!!!! My first week (I started midweek): It has been a busy one! Many meet and greets with the team and other interested parties. I had the chance to chat with Mike from Kubuntu Focus and I have to say I am absolutely amazed with the work they have done, and if you are in the market for a new laptop, you must check these out!!! https://kfocus.org Or if you want to try before you buy, you can download the OS! All they ask is for an e-mail, which is completely reasonable. Hosting isn't free! Besides, you can opt out anytime and they don't share it with anyone. I look forward to working closely with this project. We now have a Kubuntu Team in KDE invent https://invent.kde.org/teams/distribution-kubuntu if you would like to join us, please don't hesitate to ask! I have started a new Wiki and our first page is the ever important Bug triaging! It is still a WIP but you can check it out here: https://invent.kde.org/teams/distribution-kubuntu/docs/-/wikis/Bug-Triage-Story-WIP . With that said, I have started the Launchpad work to make tracking our bugs easier by subscribing kubuntu-bugs to all our packages and creating proper projects for our packages missing them. We have compiled a list of our various documentation links that need updating, and Rick Timmis is updating kubuntu.org! Aaron Honeycutt has been busy with the Kubuntu Manual https://github.com/kubuntu-team/kubuntu-manual which is in good shape. We just need to improve our developer story. I have been working on the rather massive Apparmor bug https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2046844 with testing the fixes from the PPA and writing profiles for the various KDE packages affected (pretty much anything that uses webengine) and making progress there. My next order of business is staging Frameworks 5.114 with guidance from our super awesome Rik Mills, who has been doing most of the heavy lifting in Kubuntu for many years now. So thank you for that, Rik! I will also start on our big transition to the Calamares installer! I do have experience here, so I expect it will be a smooth one. I am so excited for the future of Kubuntu and the exciting things to come! With that said, the Kubuntu funding is community donation driven. There is enough to pay me part time for a couple contracts, but it will run out, and a full-time contract would be super awesome. I am reaching out to anyone enjoying Kubuntu who wants to help with the future of Kubuntu to please consider a donation! We are working on more donation options, but for now you can donate through PayPal at https://kubuntu.org/donate/ Thank you!!!!!

28 January 2024

Russell Coker: Links January 2024

Long Now has an insightful article about domestication that considers whether humans have evolved to want to control nature [1]. The OMG Elite hacker cable is an interesting device [2]. A Wifi device in a USB cable to allow remote control and monitoring of data transfer, including remote keyboard control and sniffing. Pity that USB-C cables have chips in them, so you can't use a spark to remove unwanted chips from modern cables. David Brin's blog post The core goal of tyrants: The Red-Caesar Cult and a restored era of The Great Man has some insightful points about authoritarianism [3]. Ron Garret wrote an interesting argument against Christianity [4], and a follow-up titled Why I Don't Believe in Jesus [5]. He has a link to a well written article about the different theologies of Jesus and Paul [6]. Dimitri John Ledkov wrote an interesting blog post about how they reduced disk space for Ubuntu kernel packages and RAM for the initramfs phase of boot [7]. I hope this gets copied to Debian soon. Joey Hess wrote an interesting blog post about trying to make LLM systems produce bad code if trained on his code without permission [8]. Arstechnica has an interesting summary of research into the security of fingerprint sensors [9]. Not surprising that the products of the 3 vendors that supply almost all PC fingerprint readers are easy to compromise. Bruce Schneier wrote an insightful blog post about how AI will allow mass spying (as opposed to mass surveillance) [10]. ZDnet has an informative article How to Write Better ChatGPT Prompts in 5 Steps [11]. I sent this to a bunch of my relatives. AbortRetryFail has an interesting article about the Itanic Saga [12]. Erberus sounds interesting, maybe VLIW designs could give a good ratio of instructions to power, unlike the Itanium which was notorious for being power hungry. Bruce Schneier wrote an insightful article about AI and Trust [13]. We really need laws controlling these things! David Brin wrote an interesting blog post on the obsession with historical cycles [14].

13 January 2024

Freexian Collaborators: Debian Contributions: LXD/Incus backend bug, /usr-merge updates, gcc-for-host, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

LXD/Incus backend bug in autopkgtest by Stefano Rivera
While working on the Python 3.12 transition, Stefano repeatedly ran into a bug in autopkgtest when using LXD (or, in the future, Incus) that caused it to hang when running cython's multi-hour autopkgtests. After some head-banging, the bug turned out to be fairly straightforward: LXD didn't shut down on receiving a SIGTERM, so when a testsuite timed out, it would hang forever. A simple fix has been applied.

/usr-merge, by Helmut Grohne
Thanks to Christian Hofstaedtler and others, the effort is moving into a community effort and the work funded by Freexian becomes more difficult to separate from non-funded work. In particular, since the community fully handled all issues around lost udev rules, dh_installudev now installs rules to /usr. The story around diversions took another detour. We learned that conflicts do not reliably prevent concurrent unpack, and the reiterated mitigation for molly-guard triggered this. After a bit of back and forth and consultation with the developer mailing list, we concluded that avoiding the problematic behavior when using apt or an apt-based upgrader, combined with a loss mitigation, would be good enough. The involved packages bfh-container, molly-guard, progress-linux-container and systemd have since been uploaded to unstable and the matter seems finally solved, except that it doesn't quite work with sysvinit yet. The same approach is now being proposed for the diversions of zutils for gzip. We thank the involved maintainers for their timely cooperation.

gcc-for-host, by Helmut Grohne
Since forever, it has been difficult to correctly express a toolchain build dependency. This can be seen in the Build-Depends of the linux source package, for instance. While this was solved for binutils a while back, the patches for gcc have remained unfinished. With lots of constructive feedback from gcc package maintainer Matthias Klose, Helmut worked on finalizing and testing these patches. Patch stacks are now available for gcc-13 and gcc-14 and Matthias already included parts of them in test builds for Ubuntu noble. Finishing this work would enable us to resolve around 1000 cross build dependency satisfiability issues in unstable.

Miscellaneous contributions
  • Stefano continued work on the Python 3.12 transition, including uploads of cython, pycxx, numpy, python-greenlet, twisted, foolscap and dh-python.
  • Stefano reviewed and selected from a new round of DebConf 24 bids, as part of the DebConf Committee. Busan, South Korea was selected.
  • For debian-printing Thorsten uploaded hplip to unstable to fix a /usr-merge bug and cups to Bookworm to fix bugs related to printing in color.
  • Utkarsh helped newcomers in mentoring and reviewing their packaging; eg: golang-github-prometheus-community-pgbouncer-exporter.
  • Helmut sent patches for 42 cross build failures unrelated to the gcc-for-host work.
  • Helmut continues to maintain rebootstrap. In December, blt started depending on libjpeg and this poses a dependency loop. Ideally, Python would stop depending on blt. Also linux-libc-dev having become Multi-Arch: foreign poses non-trivial issues that are not fully resolved yet.
  • Enrico participated in /usr-merge discussions with Helmut.

30 December 2023

Riku Voipio: Adguard DNS, or how to reduce ads without apps/extensions

Looking at the options for blocking ads, people usually first look at browser extensions. Google's plan is to disable adblock extensions in 2024. The alternative is usually an app (on phones) or a "VPN" that does filtering for you. All these methods are quite heavyweight, and require installing software on your phone or PC. What is less known is that you can use DNS-over-TLS or DNS-over-HTTPS for ad blocking.
What is DNS-over-TLS and DNS-over-HTTPS
Since Android 9, Google has provided a setting called Private DNS. Traditional DNS is unencrypted UDP, so anyone can monitor your requests and/or return false records. With private DNS, DNS-over-TLS or DNS-over-HTTPS is used to guarantee the DNS request is sent to the server you configured. Which Google hopes is, of course, Google's own public servers. If you do so, your ISP and hotspot providers can no longer monitor, monetize and enshittify your DNS requests - only Google can do so.
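As a quick illustration of DNS-over-HTTPS in action, recent curl (7.62 or later) can resolve a hostname through a DoH endpoint itself; the Google endpoint below is just one example resolver, any DoH server works:

# Resolve the request's hostname via DNS-over-HTTPS instead of the system resolver.
curl --doh-url https://dns.google/dns-query -sI https://example.com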
Subverting private DNS for ad blocking
This is where AdGuard DNS comes in useful. By setting the AdGuard DNS server as your "private DNS" server following the instructions, you can start blocking right away. Note that on a PC you can also configure the AdGuard DNS server in the browser settings (Firefox -> Enable secure DNS and Chrome -> Use Secure DNS) instead of configuring a system-wide DNS server. Blocking via DNS, of course, limits effectiveness to ads distributed from 3rd party servers.
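On a Linux desktop the equivalent system-wide setup can be done with systemd-resolved; a minimal sketch, assuming AdGuard's advertised public resolver 94.140.14.14 with the TLS name dns.adguard-dns.com (check their current instructions for the authoritative addresses):

# Sketch: send all DNS queries to AdGuard DNS over TLS via systemd-resolved.
sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/adguard.conf >/dev/null <<'EOF'
[Resolve]
# IP#hostname lets resolved validate the server's TLS certificate name.
DNS=94.140.14.14#dns.adguard-dns.com
DNSOverTLS=yes
EOF
sudo systemctl restart systemd-resolved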
Other uses for AdGuard DNS
If you register for AdGuard DNS, you get your "own", customizable DNS server address to point to. You can, for example, create your own /etc/hosts style records that are then available to all the devices you have connected to the AdGuard DNS server, whether you are at home or not. Of course, if you choose to use the personal DNS server, your DNS query privacy is in the hands of AdGuard.
Going further
What else is ruining the web besides ads? Well, commercial social media. An article ("Ei näin! Algoritmiähky") in the latest issue of the Finnish magazine Skrolli (ad: if you read Finnish, subscribe to Skrolli!) struck a chord with me. The algorithms of social media sites are designed not to serve you, but to addict you. For example, if you stop to watch a hateful meme image, the algorithm will record "The user spent time watching this, show more of the same!". It doesn't help to block or mute - yeah, that specific hate engager will be blocked, but all the dozens of similar hate pages will still be shown to you. Worse, the social media sites are being overrun by AI-generated crap. Unfortunately the addictive nature of the algorithms works. You reload in vain, hoping this time the algorithmic god will show something your friends share. How do you cure addiction? By blocking yourself out:
Epilogue
I didn't block myself out of Fediverse - yet. It's not engineered to be addictive, which is also probably why it isn't as popular as the commercial alternatives...

26 December 2023

Russ Allbery: Review: A Study in Honor

Review: A Study in Honor, by Claire O'Dell
Series: Janet Watson Chronicles #1
Publisher: Harper Voyager
Copyright: July 2018
ISBN: 0-06-269932-6
Format: Kindle
Pages: 295
A Study in Honor is a near-future science fiction novel by Claire O'Dell, a pen name for Beth Bernobich. You will see some assertions, including by the Lambda Literary Award judges, that it is a mystery novel. There is a mystery, but... well, more on that in a moment. Janet Watson was an Army surgeon in the Second US Civil War when New Confederacy troops overran the lines in Alton, Illinois. Watson lost her left arm to enemy fire. As this book opens, she is returning to Washington, D.C. with a medical discharge, PTSD, and a field replacement artificial arm scavenged from a dead soldier. It works, sort of, mostly, despite being mismatched to her arm and old in both technology and previous wear. It does not work well enough for her to resume her career as a surgeon. Watson's plan is to request a better artificial arm from the VA (the United States Department of Veterans Affairs, which among other things is responsible for the medical care of wounded veterans). That plan meets a wall of unyielding and uninterested bureaucracy. She has a pension, but it's barely enough for cheap lodging. A lifeline comes in the form of a chance encounter with a former assistant in the Army, who has a difficult friend looking to split the cost of an apartment. The name of that friend is Sara Holmes. At this point, you know what to expect. This is clearly one of the many respinnings of Arthur Conan Doyle. This time, the setting is in the future and Watson and Holmes are both black women, but the other elements of the setup are familiar: the immediate deduction that Watson came from the front, the shared rooms (2809 Q Street this time, sacrificing homage for the accuracy of a real address), Holmes's tendency to play an instrument (this time the piano), and even the title of this book, which is an obvious echo of the title of the first Holmes novel, A Study in Scarlet. Except that's not what you'll get. There are a lot of parallels and references here, but this is not a Holmes-style detective novel. First, it's only arguably a detective novel at all. There is a mystery, which starts with a patient Watson sees in her fallback job as a medical tech in the VA hospital and escalates to a physical attack, but that doesn't start until a third of the way into the book. It certainly is not solved through minute clues and leaps of deduction; instead, that part of the plot has the shape of a thriller rather than a classic mystery. There is a good argument that the thriller is the modern mystery novel, so I don't want to overstate my case, but I think someone who came to this book wanting a Doyle-style mystery would be disappointed. Second, the mystery is not the heart of this book. Watson is. She, like Doyle's Watson, is the first-person narrator, but she is far more present in the book. I have no idea how accurate O'Dell's portrayal of Watson's PTSD is, but it was certainly compelling and engrossing reading. Her fight for basic dignity and her rage at the surface respect and underlying disinterested hostility of the bureaucratic war machinery is what kept me turning the pages. The mystery plot is an outgrowth of that and felt more like a case study than the motivating thread of the plot. And third, Sara Holmes... well, I hesitate to say definitively that she's not Sherlock Holmes. There have been so many versions of Holmes over the years, even apart from the degree to which a black woman would necessarily not be like Doyle's character. But she did not remind me of Sherlock Holmes. 
She reminded me of a cross between James Bond and a high fae. This sounds like a criticism. It very much is not. I found this high elf spy character far more interesting than I have ever found Sherlock Holmes. But here again, if you came into this book hoping for a Holmes-style master detective, I fear you may be wrong-footed. The James Bond parts will be obvious when you get there and aren't the most interesting (and thankfully the misogyny is entirely absent). The part I found more fascinating is the way O'Dell sets Holmes apart by making her fae rather than insufferable. She projects effortless elegance, appears and disappears on a mysterious schedule of her own, thinks nothing of reading her roommate's diary, leaves meticulously arranged gifts, and even bargains with Watson for answers to precisely three questions. The reader does learn some mundane explanations for some of this behavior, but to be honest I found them somewhat of a letdown. Sara Holmes is at her best as a character when she tacks her own mysterious path through a rather grim world of exhausted war, penny-pinching bureaucracy, and despair, pursuing an unexplained agenda of her own while showing odd but unmistakable signs of friendship and care. This is not a romance, at least in this book. It is instead a slowly-developing friendship between two extremely different people, one that I thoroughly enjoyed. I do have a couple of caveats about this book. The first is that the future US in which it is set is almost pure Twitter doomcasting. Trump's election sparked a long slide into fascism, and when that was arrested by the election of a progressive candidate backed by a fragile coalition, Midwestern red states seceded to form the New Confederacy and start a second civil war that has dragged on for nearly eight years. It's a very specific mainstream liberal dystopian scenario that I've seen so many times it felt like a cliche even though I don't remember seeing it in a book before. This type of future projection of current fears is of course not new for science fiction; Cold War nuclear war novels are probably innumerable. But I had questions, such as how a sparsely-populated, largely non-industrial, and entirely landlocked set of breakaway states could maintain a war footing for eight years. Despite some hand-waving about covert support, those questions are not really answered here. The second problem is that the ending of this book kind of falls apart. The climax of the mystery investigation is unsatisfyingly straightforward, and the resulting revelation is a hoary cliche. Maybe I'm just complaining about the banality of evil, but if I'd been engrossed in this book for the thriller plot, I think I would have been annoyed. I wasn't, though; I was here for the characters, for Watson's PTSD and dogged determination, for Sara's strangeness, and particularly for the growing improbable friendship between two women with extremely different life experiences, emotions, and outlooks. That part was great, regardless of the ending. Do not pick this book up because you want a satisfying deductive mystery with bumbling police and a blizzard of apparently inconsequential clues. That is not at all what's happening here. But this was great on its own terms, and I will be reading the sequel shortly. Recommended, although if you are very online expect to do a bit of eye-rolling at the setting. Followed by The Hound of Justice, but the sequel is not required. This book reaches a satisfying conclusion of its own. Rating: 8 out of 10

21 December 2023

Russell Coker: Links December 2023

David Brin wrote an insightful blog post about the latest round of UFO delusion [1]. There aren't a heap of scientists secretly working on UFOs. David Brin wrote an informative and insightful blog post about rich doomsday preppers who want to destroy democracy [2]. Cory Doctorow wrote an interesting article about how ChatGPT helps people write letters and how that decreases the value of the letter [3]. What can we do to show that letters mean something? Hand deliver them? Pay someone to hand deliver them? Cory concentrates on legal letters and petitions but this can apply to other things too. David Brin wrote an informative blog post about billionaires prepping for disaster and causing the disaster [4]. David Brin wrote an insightful Wired article about ways of dealing with potential rogue AIs [5]. David Brin has an interesting take on government funded science [6]. Bruce Schneier wrote an insightful article about AI Risks which is worth reading [7]. Ximion wrote a great blog post about how to use AppStream metadata to indicate what type of hardware/environment is required to use an app [8]. This is great for the recent use of Debian on phones and can provide real benefits for more traditional uses (like all those servers that accidentally got LibreOffice etc installed). Also for Convergence it will be good to have the app launcher take note of this; when your phone isn't connected to a dock there's no point offering to launch apps that require a full desktop screen. Russ Allbery wrote an interesting summary of the book Going Infinite about the Sam Bankman-Fried FTX fiasco [9]. That summary really makes Sam sound Autistic. Cory Doctorow wrote an insightful article Microincentives and Enshittification explaining why Google search has to suck [10]. Charles Stross posted the text of a lecture he gave titled We're Sorry We Created the Torment Nexus [11] about sci-fi ideas that shouldn't be implemented. The Daily WTF has many stories of corporate computer stupidity, but The White Appliphant is one of the most epic [12]. The Verge has an informative article on new laws in the US and the EU to give a right to repair and how this explains the sudden change to 7 year support for Pixel phones [13].

15 December 2023

Scarlett Gately Moore: KDE: Snaps, KDEneon, Debian and my future.

First I want to thank KDE for this wonderful write up on LinkedIn. https://www.linkedin.com/posts/kde_akademy-2023-over-a-million-reasons-why-activity-7139965489153208320-PNem?utm_source=share&utm_medium=member_desktop It made my heart explode with happiness as I prepped for my interview on Monday. I didn't get the job (just the usual "we were very impressed with your experience, but we went with another candidate"). I think the cosmos is determined for me to hold out for the project; even though it is only part time work, it is work, and if I have learned nothing else this year, I have learned how to live on a very tight budget! It turns out many of the things we think we need, we don't. So with the hard work of Kevin Ottens (thank you!!!!), this should be finalized after the first of the year. This plan also allows me time for KDEneon and Debian work. I am happy with it and look forward to the coming year and the exciting things to come. My holiday plans are to help the Debian KDE team with the KF6 packaging, ongoing KDEneon efforts, and continuing to make sure the snaps' Qt6 transition is as painless as possible. I will also be working on the qt6 kde-neon extension. In closing, despite my terrible luck with job-hunting, I am in an amazing community and I am truly grateful to each and every one of you. It has been a great year and I have added many new things to my skillset and I look forward to many more next year. As usual, it is that time of month where I have not raised enough to pay my internet bill (the phone company taking an extra $200.00 didn't help). If you can spare any change (any amount helps!) please consider a donation https://gofund.me/b74e4c6f Thank you! I hope everyone has a wonderful <insert your holiday here> !!!!! ~ Scarlett

14 December 2023

Russell Coker: Fat Finger Shell

I've been trying out the Fat Finger Shell, which is a terminal emulator for Linux on touch screen devices where the keyboard is overlayed with the terminal output. This means that instead of having a tiny keyboard and a tiny terminal output you have the full screen for both. There is a YouTube video showing how the Fat Finger Shell works [1]. Here is a link to the Github page [2], which hasn't changed much in the last 11 years. Currently the shell is hard-coded to a 80*24 terminal and a 640*480 screen, which doesn't match any modern hardware. Some parts of this are easy to change, but then there's the comment "I ran once XGetGeometry and I am harcoded (bad) values for x, y, etc..", which is followed by some magic numbers that are not easy to change and which are hacked into the source of xvt. The configuration of this is almost great. It has a plain text file where each line has 4 numbers representing the X and Y coordinates of opposite corners of a rectangle and additional information on what the key is, which is relatively easy to edit. But then it has an image which has to match that; the obvious improvement would be to not have an image but to just display rectangles for each pair of corner coordinates and display the glyph of the character in question inside it. I think there is a real need for a terminal like this for use on devices like the PinePhonePro. It won't be to everyone's taste but the people who like it will really like it. The features that such a shell needs for modern use are being based on Wayland, supporting a variety of screen resolutions and particularly the commonly used ones like 720*1440 and 1920*1080 (with terminal resolution matching the combination of screen resolution and font), and having code derived from a newer terminal emulator. As a final note it would be good for such a terminal to also take input from a regular keyboard, so when you plug your Linux phone into a dock you don't need to close your existing terminal sessions. There is a Debian RFP/ITP bug for this [3] which I think should be closed due to nothing happening for 11 years and the fact that so much work is required to make this usable. The current Fat Finger Shell code is a good demonstration of the concept, but I don't think it makes sense to move on with this code base. One of the many possible ways of addressing this with modern graphics technology might be to have a semi transparent window overlaying the screen and generating virtual keyboard events for whichever window happens to be below it, so instead of being limited to one terminal program by the choice of input method, that input would work for any terminal that the user may choose as well as any other text based program (email, IM, etc).

10 December 2023

Scarlett Gately Moore: KDE: KDE Snaps 23.08.4, PIM! KDE neon, Debian

KDE PIM Kaddressbook snap
KDE Snaps: This week's big accomplishment is KDE PIM snaps! I have successfully added akonadi via an akonadi content snap and run it as a service. Kaddressbook is our first PIM snap with this setup and it works flawlessly! It is available in the snap store. I have a pile of MRs awaiting approvals, so keep your eye out for the rest of PIM in the next day. KDE Applications 23.08.4 has been released and is available in the snap store. Krita 5.2.2 has been released. I have created a new kde-qt6 snap, as the qt-framework snap has not been updated and the maintainer is unreachable. It is in edge and I will be rebuilding our kf6 snap with this one. I am debugging an issue with the latest Labplot release. KDE neon: This week I helped with frameworks release 5.113 and KDE applications 23.08.4. I also worked on the on-going effort of turning red Unstable builds green as the porting to qt6 continues. Debian: Continuing my on-going learning of packaging for all the programming languages, this week was Rust packaging: I started on Rustic https://github.com/rustic-rs/rustic. Unfortunately, it was a bit of wasted time, as it depends on a feature of tracing-subscriber that depends on matchers, which has a grave bug, so it remains disabled. Personal: I do have an interview tomorrow! And it looks like the project may go through after the new year. So things are looking up. Unfortunately I must still ask: if you have any spare change, please consider a donation. The phone company decided to take an extra $200.00 I didn't have to spare, and while I resolved it, they refused a refund and instead gave me a credit on next month's bill, which doesn't help me now. Thank you for your consideration. https://gofund.me/b74e4c6f

Freexian Collaborators: Debian Contributions: Python 3.12 preparations, debian-printing, merged-/usr transition updates, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for Python 3.12, by Stefano Rivera Stefano uploaded a few packages in preparation for Python 3.12, including pycxx and cython. Cython has a major new version (Cython 3), adding support for 3.12, but also bringing changes that many packages in Debian aren't ready to build with, yet. Stefano uploaded it to Debian experimental and did an archive rebuild of affected packages, and some analysis of the result. Matthias Klose has since filed bugs for all of these issues.

debian-printing, by Thorsten Alteholz This month Thorsten invested some of the previously obtained money to build his own printlab. At the moment it only consists of a dedicated computer with a USB printer attached. Due to its 64GB RAM and an SSD, building debian-printing packages is much faster now. Over time other printers will be added, and understanding bugs should be a lot easier. Also Thorsten again adopted two packages, namely mink and ink, and moved them to the debian-printing team.

Merged-/usr transition, by Helmut Grohne, et al. The dumat analysis tool has been improved in several respects. Beyond fixing false-negative diagnostics, it now recognizes protective diversions used for mitigating Multi-Arch: same file loss. It was found that the proposed mitigation for ineffective diversions does not work as expected; trying to fix it up resulted in more problems, some of which remain unsolved as of this writing. Initial work on moving shared libraries in the essential set has been done. Meanwhile, the wider Debian community worked on fixing all known Multi-Arch: same file loss scenarios. This work is now being driven by Christian Hofstaedtler, and during the Mini DebConf in Cambridge, Chris Boot, Étienne Mollier, Miguel Landaeta, Samuel Henrique, and Utkarsh Gupta sent the other half of the necessary patches.
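For readers unfamiliar with the underlying mechanism, the sketch below shows plain dpkg-divert usage, which is what diversions (protective or otherwise) are built on; the package and path names are invented, and this is a generic illustration rather than the actual /usr-move mitigation.
  # Generic diversion example (invented names, run as root): tell dpkg that any
  # package shipping /usr/bin/example should install it as /usr/bin/example.orig instead.
  dpkg-divert --package example-wrapper --divert /usr/bin/example.orig --rename --add /usr/bin/example
  # Inspect existing diversions matching the path, then remove the diversion again.
  dpkg-divert --list '*example*'
  dpkg-divert --package example-wrapper --rename --remove /usr/bin/example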

Miscellaneous contributions
  • Stefano merged patches to support loong64 and hurd-amd64 in re2.
  • For the Cambridge mini-conf, Stefano added a web player to the DebConf video streaming frontend, as the Cambridge miniconf didn't have its own website to host the player.
  • Raphaël helped the upstream developers of hamster-time-tracker to prepare a new upstream release (the first in multiple years) and packaged that new release in Debian unstable.
  • Enrico joined Helmut in brainstorming some /usr-merge solutions.
  • Thorsten took care of RM-bugs to remove no longer needed packages from the Debian archive and closed about 50 of them.
  • Helmut ported the feature of mounting a fuse connection via /dev/fd/N from fuse3 to fuse2.
  • Helmut sent a number of patches simplifying unprivileged use of piuparts.
  • Roberto worked with Helmut to prepare the Shorewall package for the ongoing /usr-move transition.
  • Utkarsh also helped with the ongoing /usr-merge work by preparing patches for gitlab, libnfc, and net-tools.
  • Utkarsh, along with Helmut, brainstormed on fixing #961138, as this affects the whole archive and all the suites and not just R packages. Utkarsh intends to follow up on the bug in December.
  • Santiago organized a MiniDebConf in Uruguay. In total, nine people attended, including most of the DDs in the surrounding area. Here's a nicely written blog post by Gunnar Wolf.
  • Santiago also worked on some issues on Salsa CI, fixed with some merge requests: #462, #463, and #466.

7 December 2023

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, December 2023

In December of 2023, I'm moving to a new OpenPGP certificate. You might know my old OpenPGP certificate, which had a fingerprint of C29F8A0C01F35E34D816AA5CE092EB3A5CA10DBA. My new OpenPGP certificate has a fingerprint of D477040C70C2156A5C298549BB7E9101495E6BF7. Both certificates have the same set of User IDs:
  • Daniel Kahn Gillmor
  • <dkg@debian.org>
  • <dkg@fifthhorseman.net>
You can find a version of this transition statement signed by both the old and new certificates at: https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt The new OpenPGP certificate is:
-----BEGIN PGP PUBLIC KEY BLOCK-----
xjMEZXEJyxYJKwYBBAHaRw8BAQdA5BpbW0bpl5qCng/RiqwhQINrplDMSS5JsO/Y
O+5Zi7HCwAsEHxYKAH0FgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0
QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmfUAgfN9tyTSxpxhmHA1r63GiI4v6NQ
mrrWVLOBRJYuhQMVCggCmwECHgEWIQTUdwQMcMIValwphUm7fpEBSV5r9wAAmaEA
/3MvYJMxQdLhIG4UDNMVd2bsovwdcTrReJhLYyFulBrwAQD/j/RS+AXQIVtkcO9b
l6zZTAO9x6yfkOZbv0g3eNyrAs0QPGRrZ0BkZWJpYW4ub3JnPsLACwQTFgoAfQWC
ZXEJywMLCQcJELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVv
aWEtcGdwLm9yZ4l+Z3i19Uwjw3CfTNFCDjRsoufMoPOM7vM8HoOEdn/vAxUKCAKb
AQIeARYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AAALZQEAhJsgouepQVV98BHUH6Sv
WvcKrb8dQEZOvHFbZQQPNWgA/A/DHkjYKnUkCg8Zc+FonqOS/35sHhNA8CwqSQFr
tN4KzRc8ZGtnQGZpZnRoaG9yc2VtYW4ubmV0PsLACgQTFgoAfQWCZXEJywMLCQcJ
ELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVvaWEtcGdwLm9y
ZxLvwkgnslsAuo+IoSa9rv8+nXpbBdab2Ft7n4H9S+d/AxUKCAKbAQIeARYhBNR3
BAxwwhVqXCmFSbt+kQFJXmv3AAAtFgD4wqcUfQl7nGLQOcAEHhx8V0Bg8v9ov8Gs
Y1ei1BEFwAD/cxmxmDSO0/tA+x4pd5yIvzgfGYHSTxKS0Ww3hzjuZA7NE0Rhbmll
bCBLYWhuIEdpbGxtb3LCwA4EExYKAIAFgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAA
AAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmd7X4TgiINwnzh4jar0
Pf/b5hgxFPngCFxJSmtr/f0YiQMVCggCmQECmwECHgEWIQTUdwQMcMIValwphUm7
fpEBSV5r9wAAMuwBAPtMonKbhGOhOy+8miAb/knJ1cIPBjLupJbjM+NUE1WyAQD1
nyGW+XwwMrprMwc320mdJH9B0jdokJZBiN7++0NoBM4zBGVxCcsWCSsGAQQB2kcP
AQEHQI19uRatkPSFBXh8usgciEDwZxTnnRZYrhIgiFMybBDQwsC/BBgWCgExBYJl
cQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBn
cC5vcmfCopazDnq6hZUsgVyztl5wmDCmxI169YLNu+IpDzJEtQKbAr6gBBkWCgBv
BYJlcQnLCRB3LRYeNc1LgUcUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lh
LXBncC5vcmcQglI7G7DbL9QmaDkzcEuk3QliM4NmleIRUW7VvIBHMxYhBHS8BMQ9
hghL6GcsBnctFh41zUuBAACwfwEAqDULksr8PulKRcIP6N9NI/4KoznyIcuOHi8q
Gk4qxMkBAIeV20SPEnWSw9MWAb0eKEcfupzr/C+8vDvsRMynCWsDFiEE1HcEDHDC
FWpcKYVJu36RAUlea/cAAFD1AP0YsE3Eeig1tkWaeyrvvMf5Kl1tt2LekTNWDnB+
FUG9SgD+Ka8vfPR8wuV8D3y5Y9Qq9xGO+QkEBCW0U1qNypg65QHOOARlcQnLEgor
BgEEAZdVAQUBAQdAWTLEa0WmnhUmDBdWXX0ZlYAa4g1CK/fXg0NPOQSteA4DAQgH
wsAABBgWCgByBYJlcQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9u
cy5zZXF1b2lhLXBncC5vcmexrMBZe0QdQ+ZJOZxFkAiwCw2I7yTSF2Ox9GVFWKmA
mAKbDBYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AABcJQD/f4ltpSvLBOBEh/C2dIYa
dgSuqkCqq0B4WOhFRkWJZlcA/AxqLWG4o8UrrmwrmM42FhgxKtEXwCSHE00u8wR4
Up8G
=9Yc8
-----END PGP PUBLIC KEY BLOCK-----
When I have some reasonable number of certifications, I'll update the certificate associated with my e-mail addresses on https://keys.openpgp.org, in DANE, and in WKD. Until then, those lookups should continue to provide the old certificate.
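As a minimal sketch (assuming GnuPG and curl are installed), one way a reader might check the dual-signed transition statement is to import the new certificate from the block above, fetch the old one via the usual lookups, and verify the clearsigned file:
  # Save the ASCII-armored block above as dkg-new.asc, then:
  gpg --import dkg-new.asc
  gpg --locate-keys dkg@fifthhorseman.net    # WKD/keyserver lookup; should still return the old certificate for now
  curl -O https://dkg.fifthhorseman.net/2023-dkg-openpgp-transition.txt
  gpg --verify 2023-dkg-openpgp-transition.txt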

4 December 2023

Ian Jackson: Don't use apt-get source; use dgit

tl;dr: If you are a Debian user who knows git, don't work with Debian source packages. Don't use apt source, or dpkg-source. Instead, use dgit and work in git. Also, don't use: VCS links on official Debian web pages, debcheckout, or Debian's (semi-)official gitlab, Salsa. These are suitable for Debian experts only; for most people they can be beartraps. Instead, use dgit. Struggling with Debian source packages? A friend of mine recently asked for help on IRC. They're an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build, install, and run binary packages from it; and record that change for their future self and their colleagues. They ended up trying to comprehend quilt. quilt is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian's ancient and obsolete tarballs-and-patches source package format (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages. You don't want to deal with any of this nonsense. You don't want to learn quilt, and suffer its misbehaviours. You don't want to learn about Debian source packages and wrestle dpkg-source. Happily, you don't need to. Just use dgit. One of dgit's main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you're running, conveniently and with a minimum of fuss. You can use dgit to get the source code to a Debian package, as a git tree, with dgit clone (and dgit fetch). The git tree can be made into a binary package directly. The only things you really need to know are:
  1. By default dgit fetches from Debian unstable, the main work-in-progress branch. You may want something like dgit clone PACKAGE bookworm,-security (yes, with a comma).
  2. You probably want to edit debian/changelog to make your packages have a different version number.
  3. To build binaries, run dpkg-buildpackage -uc -b.
  4. Debian package builds are often disastrously messy: builds might modify source files; and the official debian/rules clean can be inadequate, or crazy. Always commit before building, and use git clean and git reset --hard instead of running clean rules from the package (a consolidated sketch of these steps follows this list).
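Here is that workflow collected into one runnable sketch; "hello" and the suite are placeholders, so substitute the package and branch you actually want.
  dgit clone hello bookworm,-security     # step 1: fetch the source as a git tree
  cd hello
  $EDITOR debian/changelog                # step 2: give your build its own version number
  git commit -a -m 'my local changes'     # commit before building (per step 4)
  dpkg-buildpackage -uc -b                # step 3: build binary packages
  git clean -xdf && git reset --hard      # step 4: clean with git, not debian/rules clean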
Don't try to make a Debian source package. (Don't read the dpkg-source manual!) Instead, to preserve and share your work, use the git branch. dgit pull or dgit fetch can be used to get updates. There is a more comprehensive tutorial, with example runes, in the dgit-user(7) manpage. (There is of course complete reference documentation, but you don't need to bother reading it.) Objections: "But I don't want to learn yet another tool." One of dgit's main goals is to save people from learning things they don't need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising. So: don't learn dgit. Just run it and it will be fine :-). "Shouldn't I be using official Debian git repos?" Absolutely not. Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program, but without the security patches. Yikes! I discussed this in more detail in 2021 in another blog post plugging dgit. "Gosh, is Debian really this bad?" Yes. On behalf of the Debian Project, I apologise. Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven't always been pretty, either technically or socially.) Sadly this means that sometimes much-needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for. "I'm a Debian maintainer. You tell me dgit is something totally different!" dgit is, in fact, a general bidirectional gateway between the Debian archive and git. So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It's more convenient and more reliable than git-buildpackage and dput runes, and produces better output for users. You too can start to forget how to deal with source packages! A full treatment of this is beyond the scope of this blog post.


1 December 2023

Scarlett Gately Moore: KDE neon and snaps, Debian Weekly report.

While the winter sets in, I have been mostly busy following up on job leads and applying to anything and everything, even out of industry. Something has to stick; I am trying! Debian: This week's main focus was getting involved and familiar with Debian Rust packaging. It is really quite different from other teams! I was successful, and the MR is https://salsa.debian.org/rust-team/debcargo-conf/-/merge_requests/566 if anyone on the Rust team can take a gander; I will upload when merged. I will get a few more under my belt next week now that I understand the process. KDE neon: Unfortunately, I did not have much time for neon, but I did get some red builds fixed and started uploading the new signing key for mauikit* applications. KDE snaps: A big thank you to Josh and Albert for merging all my MRs; I have finished the 23.08.3 releases to stable. I released a new Krita 5.2.1 with some runtime fixes, please update. Still no source of income, so I must ask: if you have any spare change, please consider a donation. Thank you, Scarlett https://gofund.me/b74e4c6f
