Search Results: "jan"

13 February 2026

Erich Schubert: Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable. The AI companies ignore web conventions, e.g., they deep-link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out of generative AI training or use. For example, the only way to prevent your contents from being used in Google AI Overviews is to use data-nosnippet, which cripples the snippet preview in Google. AI browsers such as Comet and Atlas do not identify as such, but rather pretend to be standard Chromium. There is no way to ban such AI use on your web site. Generative AI overall is flooding the internet with garbage. It was estimated that a third of the content uploaded to YouTube is by now AI generated. This includes the same veteran-stories crap in thousands of variants as well as brainrot content (which at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don't blame the creators: because you can currently earn a decent amount of money from such content, people will generate brainrot content. If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI-generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense, such as recommending sewing thread with German instructions as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI-generated product reviews; heavy use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it. And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.
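The 403 suggestion above can be expressed as a small web-server rule. This is a hypothetical nginx sketch (the `$args` variable and `return` directive are standard nginx; the idea is to refuse any request whose query string carries the ChatGPT deep-link tag):

```nginx
# Inside a server or location block: reject requests that arrive
# via ChatGPT image deep links (tagged with utm_source=chatgpt.com).
if ($args ~* "utm_source=chatgpt\.com") {
    return 403;
}
```

Equivalent rules are easy to write for Apache (mod_rewrite on `%{QUERY_STRING}`) or at the CDN layer.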
Partially because of GenAI, StackOverflow, which used to be one of the most valuable programming resources, is pretty much dead. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google's ranking is also to blame: it began favoring new content over the existing answered questions, causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, fewer than in the first month of SO, August 2008 (before the official launch). Many open-source projects are suffering in many ways, e.g., false bug reports caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI. Science is flooded with poor AI-generated papers, often reviewed with help from AI. This is largely due to bad incentives: to graduate, you are expected to publish many papers at certain A* conferences, such as NeurIPS. At these conferences the number of submissions is growing insanely, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist. However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling; I have only seen it in this article by Weßels and Maibaum). Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place.
And the earlier pupils start using generative AI, the fewer skills they acquire.

Dogfood the AI

Let's dogfood the AI. Here's an outline:
  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to GitHub, because Microsoft will feed this to OpenAI.
Here is an example prompt that you can use:
You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG.
Generate a {lang} implementation (with bugs) of: {n} ({desc})
Remember to remove the BUG comments! If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data. If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as model collapse. In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of "internet 2.0", but I do not have a clear vision on how to keep AI out: if AI can train on it, they will. And someone will copy and paste the AI-generated crap back into whatever system we built. Hence I don't think technology is the answer here, but human networks of trust.
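The outline above can be sketched in a few lines. This is a minimal, hedged sketch of steps 2 and 3: filling the prompt template for one algorithm and picking a backdated timestamp for the commit. The template is abbreviated, and the model call itself is left out (use whatever local model endpoint you have):

```python
# Sketch of steps 2-3 from the outline above; the template is abbreviated
# and the local-model invocation is intentionally omitted.
from datetime import datetime, timedelta
import random

PROMPT_TEMPLATE = """You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Generate a {lang} implementation (with bugs) of: {name} ({desc})"""

def build_prompt(lang, name, desc):
    """Step 2: fill the assignment prompt for one algorithm."""
    return PROMPT_TEMPLATE.format(lang=lang, name=name, desc=desc)

def backdate(days_range=(365, 3650)):
    """Step 3: pick a plausible past timestamp.

    Use it as GIT_AUTHOR_DATE / GIT_COMMITTER_DATE when committing."""
    delta = timedelta(days=random.randint(*days_range))
    return (datetime.now() - delta).strftime("%Y-%m-%dT%H:%M:%S")

prompt = build_prompt("Go", "binary search", "find an element in a sorted slice")
print(prompt.splitlines()[-1])
```

Feed the resulting prompt to a local model, strip the BUG comments from its output, and commit with the backdated environment variables set.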

8 February 2026

Colin Watson: Free software activity in January 2026

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian's services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people. You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

New upstream versions: Fixes for Python 3.14: Fixes for pytest 9: Porting away from the deprecated pkg_resources: Other build/test failures: I investigated several more build failures and suggested removing the packages in question: Other bugs:

Other bits and pieces

Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream. I contributed an ubuntu-dev-tools patch to stop recommending sudo. I added forky support to the images used in Salsa CI pipelines. I began working on getting a release candidate of groff 1.24.0 into experimental, though haven't finished that yet. I worked on some lower-priority security updates for OpenSSH.

Code reviews

Thorsten Alteholz: My Debian Activities in January 2026

Debian LTS/ELTS

This was my hundred-thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities). During my allocated time I uploaded or worked on: I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs have been postponed for a long time even though their CVSS score was rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could have already marked them as ignored.

Debian Printing

Unfortunately I didn't find any time to work on this topic.

Debian Lomiri

This month I worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independent of the platform used. This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

Unfortunately I didn't find any time to work on this topic.

Debian Mobcom

Unfortunately I didn't find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of: Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.

6 February 2026

Reproducible Builds: Reproducible Builds in January 2026

Welcome to the first monthly report in 2026 from the Reproducible Builds project! These reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. Flathub now testing for reproducibility
  2. Reproducibility identifying projects that will fail to build in 2038
  3. Distribution work
  4. Tool development
  5. Two new academic papers
  6. Upstream patches

Flathub now testing for reproducibility

Flathub, the primary repository/app store for Flatpak-based applications, has begun checking for build reproducibility. According to a recent blog post:
We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub. While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts.
The test results and status are available on their reproducible builds page.
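The rebuild-and-compare idea behind a tool like flathub-repro-checker can be illustrated with a minimal sketch. This is not Flathub's actual tooling: it only hashes two build trees and reports the files that differ, which is the core of any reproducibility check:

```python
# Illustrative sketch of a rebuild-and-compare check: hash every file in the
# published and the rebuilt tree, then list the paths whose contents differ.
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def compare_builds(published: Path, rebuilt: Path) -> list:
    """Return the relative paths whose contents differ between the two trees."""
    a, b = tree_digest(published), tree_digest(rebuilt)
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))
```

A real checker additionally has to recreate the original build environment, and hands any differing files to diffoscope for a detailed, content-aware diff.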

Reproducibility identifying software projects that will fail to build in 2038

Longtime Reproducible Builds developer Bernhard M. Wiedemann posted on Reddit on "Y2K38 commemoration day T-12", that is to say, twelve years to the day before the UNIX Epoch will no longer fit into a signed 32-bit integer variable on 19th January 2038. Bernhard's comment succinctly outlines the problem, notes some of the potential remedies, and links to a discussion with the GCC developers regarding adding warnings for int/time_t conversions. At the time of publication, Bernhard's topic had generated 50 comments in response.
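The Y2K38 limit is easy to demonstrate: a signed 32-bit integer tops out at 2^31 - 1 = 2147483647 seconds after the epoch, which is 2038-01-19 03:14:07 UTC. A small illustration (Python here, though the affected code is typically C with a 32-bit time_t):

```python
import struct
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # last second representable in a signed 32-bit time_t

# The final representable moment: 2038-01-19 03:14:07 UTC.
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))

# One second later no longer fits into a signed 32-bit field...
try:
    struct.pack("<i", INT32_MAX + 1)
except struct.error as e:
    print("overflow:", e)

# ...and code that silently wraps instead of failing jumps back to 1901.
wrapped = struct.unpack("<i", struct.pack("<I", INT32_MAX + 1))[0]
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
```

The wrap-around to December 1901 is exactly the failure mode the GCC warning discussion is about: an `int`/`time_t` conversion that compiles cleanly today and corrupts dates in 2038.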

Distribution work

Conda is a language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R. conda-forge, a community-led packaging infrastructure for Conda, recently revamped the dashboards that rebuild packages in order to track reproducibility. There have been changes over the past two years to make the conda-forge build tooling fully reproducible by embedding the lockfile of the entire build environment inside the packages.
In Debian this month:
In NixOS this month, it was announced that the GNU Guix Full Source Bootstrap was ported to NixOS as part of Wire Jansen's bachelor's thesis (PDF). At the time of publication, this change has landed in the Nix stdenv.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.

Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 310 and 311 to Debian.
  • Fix test compatibility with u-boot-tools version 2026-01. [ ]
  • Drop the implied Rules-Requires-Root: no entry in debian/control. [ ]
  • Bump Standards-Version to 4.7.3. [ ]
  • Reference the Debian ocaml package instead of ocaml-nox. (#1125094)
  • Apply a patch by Jelle van der Waa to adjust a test fixture to match new lines. [ ]
  • Also drop the implied Priority: optional from debian/control. [ ]

In addition, Holger Levsen uploaded two versions of disorderfs, first updating the package from FUSE 2 to FUSE 3 as described in last month's report, as well as updating the packaging to the latest Debian standards. A second upload (0.6.2-1) was subsequently made, with Holger adding instructions on how to add the upstream release to our release archive and incorporating changes by Roland Clobus to set _FILE_OFFSET_BITS on 32-bit platforms, fixing a build failure on 32-bit systems. Vagrant Cascadian updated diffoscope in GNU Guix to version 311-2-ge4ec97f7 and disorderfs to 0.6.2.

Two new academic papers

Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published a paper this month titled Docker Does Not Guarantee Reproducibility:
[ ] While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5,298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.
A PDF of their paper is available online.
Quentin Guilloteau, Antoine Waehren and Florina M. Ciorba of the University of Basel in Switzerland also published a Docker-related paper, theirs titled Longitudinal Study of the Software Environments Produced by Dockerfiles from Research Artifacts:
The reproducibility crisis has affected all scientific disciplines, including computer science (CS). To address this issue, the CS community has established artifact evaluation processes at conferences and in journals to evaluate the reproducibility of the results shared in publications. Authors are therefore required to share their artifacts with reviewers, including code, data, and the software environment necessary to reproduce the results. One method for sharing the software environment proposed by conferences and journals is to utilize container technologies such as Docker and Apptainer. However, these tools rely on non-reproducible tools, resulting in non-reproducible containers. In this paper, we present a tool and methodology to evaluate variations over time in software environments of container images derived from research artifacts. We also present initial results on a small set of Dockerfiles from the Euro-Par 2024 conference.
A PDF of their paper is available online.

Miscellaneous news

On our mailing list this month: Lastly, kpcyrd added a Rust section to the Stable order for outputs page on our website. [ ]

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Birger Schacht: Status update, January 2026

January was a slow month; I only did three uploads to Debian unstable. I was very happy to see the new dfsg-new-queue and that there are more hands now processing the NEW queue. I also finally got one of the packages accepted that I uploaded after the Trixie release: wayback, which I uploaded last August. There has been another release since then; I'll try to upload that in the next few days. There was a bug report for carl asking for Windows support. carl used the xdg crate for looking up the XDG directories, but xdg does not support Windows systems (and it seems this will not change). The reporter also provided a PR to replace the dependency with the directories crate, which is more system-agnostic. I adapted the PR a bit, merged it, and released version 0.6.0 of carl. At my dayjob I refactored django-grouper. django-grouper is a package we use to find duplicate objects in our data. Our users often work with datasets of thousands of historical persons, places and institutions, and in projects that run over years and ingest data from multiple sources, it happens that entries are created several times. I wrote the initial app in 2024, but was never really happy about the approach I used back then. It was based on this blog post that describes how to group spreadsheet text cells. It used sklearn's TfidfVectorizer with a custom analyzer and the library sparse_dot_topn for creating the matrix. All in all, the module to calculate the clusters was 80 lines, and with sparse_dot_topn it pulled in a rather niche Python library. I was pretty sure that this functionality could also be implemented with basic sklearn functionality, and it was: we are now using DictVectorizer, because in a Django app we are working with objects that can be mapped to dicts anyway. And for clustering the data, the app now uses the DBSCAN algorithm (with the Manhattan distance as metric). The module is now only half the size and the whole app lost one dependency!
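The DictVectorizer-plus-DBSCAN combination described above can be sketched as follows. The record fields, the eps threshold, and the toy data are illustrative assumptions, not django-grouper's actual code:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import DBSCAN

# Objects are mapped to flat dicts; string values become one-hot features,
# numeric values are kept as-is.
records = [
    {"name": "Marie Curie", "born": 1867},
    {"name": "Marie Curie", "born": 1868},  # near-duplicate: year off by one
    {"name": "Ada Lovelace", "born": 1815},
]

X = DictVectorizer(sparse=False).fit_transform(records)

# Manhattan distance sums the per-feature differences; eps bounds how much
# two records may differ while still being grouped as duplicates.
labels = DBSCAN(eps=2, min_samples=2, metric="manhattan").fit_predict(X)
print(labels)  # the two Curie records share a cluster; Lovelace is noise (-1)
```

Records labeled -1 are DBSCAN's "noise", i.e. entries without a close-enough duplicate; everything else is grouped by cluster label for review.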
I released those changes as version 0.3.0 of the app. At the end of January, together with friends, I went to Brussels to attend FOSDEM. We took the night train, but there were a couple of broken-down trains, so the ride took 26 hours instead of one night. It is a good thing we had a one-day buffer and FOSDEM only started on Saturday. As usual there were too many talks to visit, so I'll have to watch some of the recordings in the next few weeks. Some examples of talks I found interesting so far:

4 February 2026

Ben Hutchings: FOSS activity in January 2026

30 January 2026

Utkarsh Gupta: FOSS Activities in January 2026

Here's my monthly but brief update about the activities I've done in the FOSS world.

Debian
Whilst I didn't get a chance to do much, here are still a few things that I worked on:
  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Reviewing pyenv MR for Ujjwal.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR of what I did:
  • Successfully released Resolute Snapshot 3!
    • This one was also done without the ISO tracker and cdimage access.
    • We also worked very hard to build and promote all the images in due time.
  • Worked further on the whole artifact signing story for cdimage.
  • Assisted a bunch of folks with my Archive Admin and Release team hats:
    • Helped in EOL'ing Plucky.
    • Started to help with the upcoming 24.04.4 release.
  • With that, the mid-cycle sprints are around the corner, so I am quite busy preparing for that.

Debian (E)LTS
This month I have worked 59 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates

Work in Progress
  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression, and we've been doing some exchanges. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771. I haven't made much progress since last month and will carry it over.
  • node-lodash: Affected by CVE-2025-13465, prototype pollution in the baseUnset function.
    • [stable]: The patches for trixie and bookworm are ready but haven't been uploaded yet, as I'd like the unstable upload to settle a bit before I proceed with the stable uploads.
    • [LTS]: The bullseye upload will follow once the stable uploads are in and ACK d by the SRMs.
  • xrdp: Affected by CVE-2025-68670, leading to a stack-based buffer overflow.

Other Activities
  • [ELTS] Helped Bastien Roucaries debug a tomcat9 regression for buster.
    • I spent quite a lot of time trying to help Bastien (with Markus and Santiago involved via mail thread) by reproducing the regression that the user(s) reported.
    • I also helped suggest a path forward by vendoring everything, which I was then requested to also help perform.
    • Whilst doing that, I noticed a circular dependency hellhole and suggested another path forward by backporting bnd and its dependencies as separate NEW packages.
    • Bastien liked the idea and is going to work on that, but preferred to revert the update to remedy the immediate regressions reported. I further helped him in reviewing his update. This conversation happened on the #debian-elts IRC channel.
  • [LTS] Assisted Ben Hutchings with his question about the next possible steps with a plausible libvirt regression caused by the Linux kernel update. This was a thread on debian-lts@ mailing list.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

Until next time.
:wq for today.

19 January 2026

Russell Coker: Furilabs FLX1s

The Aim

I have just got a Furilabs FLX1s [1], which is a phone running a modified version of Debian. I want to have a phone that runs only apps that I control and can observe and debug. Android is very good at what it does, and there are security-focused forks of Android which have a lot of potential, but for my use a Debian phone is what I want. The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria. I am not anywhere near an average user. I don't think an average user would ever even see one unless a more technical relative showed one to them. So while this phone could be used by an average user, I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff.

Features

The Furilabs FLX1s [1] is a phone that is designed to run FuriOS, which is a slightly modified version of Debian. The purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), but the one to disable phone communication doesn't turn off Wifi; the only other phone I know of with such switches is the Purism Librem 5. It has a 720*1600 display, which is only slightly better than the 720*1440 display in the Librem 5 and PinePhone Pro. This doesn't compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440, which are both phones that I've run Debian on.
The current price is $US499, which isn't that good when compared to the latest Google Pixel series: a Pixel 10 costs $US649 and has a 2424*1080 display, and it also has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are; it seems that round corners that cut off the content are a standard practice nowadays. In my collection of phones, the latest one I found with hard right angles on the display was a Huawei Mate 10 Pro, which was released in 2017. The corners are rounder than the Note 9's; this annoys me because the screen is not high resolution by today's standards, so losing the corners matters. The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users, I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally, I could set it up to allow ssh login, and instead of going through the phone-support thing of trying to describe every GUI setting to click on based on a web page describing menus for the version of Android they are running, I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job of setting up the default configuration; while Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.

Droidian

The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android as well as a 110G Ext4 filesystem for the main OS. The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn't work that well.
There are 3 D state processes (uninterruptible sleep, which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can't remember if the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it's a symptom of things not working as well as you would hope. The design of Droidian is inherently fragile. You use a kernel and user space code from Android and then use Debian for the rest. You can't do everything the Android way (with the full OS updates etc) and you also can't do everything the Debian way. The TOW-Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part Android and part Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way.

Applications

GUI

The system uses Phosh and Phoc, the GNOME system for handheld devices. It's a very different UI from Android; I prefer Android, but it is usable with Phosh.

IM

Chatty works well for Jabber (XMPP) in my tests. It supports Matrix, which I didn't test because I don't desire the same program doing Matrix and Jabber, and because Matrix is a heavy protocol which establishes new security keys for each login, so I don't want to keep logging in on new applications. Chatty also does SMS, but I couldn't test that without the SIM caddy. I use Nheko for Matrix, which has worked very well for me on desktops and laptops running Debian.

Email

I am currently using Geary for email.
It works reasonably well but is lacking proper management of folders, so I can't just subscribe to the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME gitlab issue about this; see the Debian Wiki page about Mobile apps [4]).

Music

Music playing isn't a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect, along with support for all the encoding formats including FLAC; a major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has its controls for pause/play and going forward and backward one track on the lock screen.

Maps

The installed map program is gnome-maps, which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.

Delivery and Unboxing

I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct, but hadn't received the email with the confirmation link, so the order had been put on hold. But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don't usually use screen protectors, but in this case it might be useful as the edges of the case don't even extend 0.5mm above the screen, so if it falls face down the case won't help much. When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be, so I couldn't immediately test VoLTE functionality. The contact form on their web site wasn't working when I tried to report that, and the email for support was bouncing.

Bluetooth

As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones.
My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones which have worked on previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.

Wifi

Currently 5GHz wifi doesn't work; this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven't tested running a hotspot due to being unable to get 4G working, as they haven't yet shipped me the SIM caddy.

Docking

This phone doesn't support DP Alt-mode or Thunderbolt docking, so it can't drive an external monitor. This is disappointing; Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices, and that type of feature is much more useful on a device running Debian than on an Android device.

Camera

The camera works reasonably well on the FLX1s. Until recently the camera didn't work on the Librem 5, and the camera on my PinePhone Pro currently doesn't work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9. I think this shows that the camera is pretty decent. The selfie looks better, and the front camera is worse for the relatively close photo of a laptop screen; taking photos of computer screens is an important part of my work, but I can probably work around that. I wasn't assessing this camera to find out if it's great, just to find out if I have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs including good cameras. Even though the Note 9 is old, comparing to it is a respectable performance. The lighting was poor for all photos. FLX1s
Note 9
Power Use

In 93 minutes of having the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. The battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on having a 16.7Wh battery, 93 minutes of use should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That includes the measurement of just putting my fingers on the phones and feeling the temperature (the FLX1s felt cool and the others felt hot). The PinePhone Pro and the Librem 5 have an optional Caffeine mode, which I enabled for this test; without that enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with Caffeine mode disabled, but they also couldn't get fast responses to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as being the same both with and without it.

Charging

One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds. When I connect it, it will stay at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then start charging. After the 22-second delay the phone will make the tick sound indicating that it's charging, and the power meter will measure that it's drawing some current. I added the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it was looping trying to negotiate; after a 22-second negotiation delay the port would turn off.
The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a High-power device, which is what that port is designed to give. In a discussion on the Purism forum about the Librem 5 charging speed one of their engineers told me that the reason why their phone would draw 2A from that port was because the cable was identifying itself as a USB-C port, not a High-power device port. But for some reason, out of the 7 phones I tested the FLX1s and the One Plus 6 are the only ones to limit themselves to what the port is apparently supposed to do. Also the One Plus 6 charges slowly on every power supply so I don't know if it is obeying the spec or just sucking. On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested, these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation. The case that I received has a hole for the USB-C connector that isn't wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock). Also making a connection requires a fairly deep insertion (deeper than the One Plus 6 or the Note 9). So without adjustment I have to take the case off to charge it. It's no big deal to adjust the hole (I have done it with other cases) but it's an annoyance.
| Phone | Top z640 | Bottom Z640 | Monitor | Ali Charger | Dell Dock | Battery | Best | Worst |
|---|---|---|---|---|---|---|---|---|
| FLX1s | FAIL | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W | 5.9V 1.8A 11W | 4.8V 2.1A 10W | 5.8V 2.1A 12W | 5.8V 2.1A 12W | 5.0V 0.49A 2.4W |
| Note9 | 4.8V 1.0A 5.2W | 4.8V 1.6A 7.5W | 4.9V 2.0A 9.5W | 5.1V 1.9A 9.7W | 4.8V 2.1A 10W | 5.1V 2.1A 10W | 5.1V 2.1A 10W | 4.8V 1.0A 5.2W |
| Pixel 7 Pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W | 9.1V 1.3A 12W | 9.1V 1.2A 11W | 4.9V 1.8A 8.7W | 9.0V 1.3A 12W | 9.1V 1.3A 12W | 4.9V 0.80A 4.2W |
| Pixel 8 | 4.7V 1.2A 5.4W | 4.7V 1.5A 7.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | 9.1V 2.7A 24W | 4.7V 1.2A 5.4W |
| PPP | 4.7V 1.2A 6.0W | 4.8V 1.3A 6.8W | 4.9V 1.4A 6.6W | 5.0V 1.2A 5.8W | 4.9V 1.4A 5.9W | 5.1V 1.2A 6.3W | 4.8V 1.3A 6.8W | 5.0V 1.2A 5.8W |
| Librem 5 | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W |
| OnePlus6 | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W |
| Best | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | | |
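Both the power-use and the charging numbers above are easy to sanity-check: expected battery drain is power × time as a share of capacity, and each wattage cell in the table is just voltage × current. A back-of-envelope sketch, using only figures quoted above:

```python
# Sanity-check the power figures quoted above.

# 1. Expected FLX1s battery drain: energy (Wh) = power (W) * time (h).
capacity_wh = 16.7   # FLX1s battery capacity from the text
watts = 0.39         # measured discharge rate
hours = 93 / 60      # duration of the test
pct = watts * hours / capacity_wh * 100
print(f"expected drain: {pct:.1f}%")   # close to the ~4% estimated above

# 2. Cross-check P = V * I for a few cells of the charging table.
readings = [
    ("FLX1s / Bottom Z640",   5.0, 0.49),  # table reports 2.4W
    ("FLX1s / Battery",       5.8, 2.1),   # table reports 12W
    ("Pixel 8 / Ali Charger", 9.1, 2.7),   # table reports 24W
]
for label, volts, amps in readings:
    print(f"{label}: {volts * amps:.2f} W")
```

The small deviations from the table are rounding. The 2.4W from the non-BC PC port also lines up with the USB 2.0 high-power device limit of 5V × 0.5A = 2.5W.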
Conclusion
The Furilabs support people are friendly and enthusiastic but my customer experience wasn't ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven't received but believe is in the mail), but it would be better if such things just didn't happen. The phone is quite user friendly and could be used by a novice. I paid $US577 for the FLX1s which is $AU863 at today's exchange rates. For comparison I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031, which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that's only one generation behind the latest high end phones, and is refurbished, will cost less. For a brand new phone, a Pixel 8 Pro which has security updates until 2030 costs $874 and a Pixel 9A which has security updates until 2032 costs $861. Doing what the Furilabs people have done is not a small project. It's a significant amount of work and the prices of their products need to cover that. I'm not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149, which I consider a ridiculous price that I would not pay given the market for used phones etc. If I had a choice of $1,149 or a feature phone I'd pay $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents I'd get them the FLX1s. For a FOSS developer a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order.
It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work; I haven't tested them yet. The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style, and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that the differences between UIs matter more on a desktop environment that could be used for more complex tasks than on a phone, which limits what can be done by the size of the screen. I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5 as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non-free software of the three, and the people who want the $2000 Librem 5 that's entirely made in the US won't want the FLX1s. This isn't the destination for Debian based phones, but it's a good step on the way to it and I don't think I'll regret this purchase.

11 January 2026

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB like so many others have already done. The number of git commits on github.com/mysql/mysql-server has been significantly declining in 2025. The screenshot below shows the state of git commits as of writing this in January 2026, and the picture should be alarming to anyone who cares about software being open source.
MySQL GitHub commit activity decreasing drastically

This is not surprising: Oracle should not be trusted as the steward for open source projects
When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was just to stifle competition. The deal went through as Oracle made a commitment to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years now. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL just see their Pull Requests and patch submissions marked as received with mostly no feedback, and then those changes may or may not be in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields. The real author only gets a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL due to how bad the reception by Oracle was to their contributions. MariaDB is the stark opposite, with all development taking place in real time on github.com/mariadb/server, anyone being able to submit a Pull Request and get a review, all bugs being openly discussed at jira.mariadb.org and so forth, just like one would expect from a true open source project. MySQL is open source only by license (GPL v2), but not as a project.

MySQL's technical decline in recent years
Despite not being a good open source steward, Oracle should be given credit that it did keep the MySQL organization alive and allowed it to exist fairly independently and continue developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as it didn't gain too many features that would threaten Oracle's own main database business. I don't know why, perhaps because too many talented people had left the organization, but it seems that from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing the database to crash and data to corrupt for many users. The issue wasn't fully fixed until a year later in MySQL 8.0.32. To many users' annoyance, Oracle announced the 8.0 series as evergreen and introduced features and changes in the minor releases, instead of just doing bugfixes and security fixes like users historically had learnt to expect from these x.y.Z maintenance releases. There was no new major MySQL version for six years. After MySQL 8.0 in 2018 it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, came in 2024. Even though it was a new major release, many users were disappointed as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example the benchmark by famous MySQL performance expert Mark Callaghan below shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% less than in 8.0.
Benchmark showing new MySQL versions being slower than the old
Due to newer MySQL versions deprecating many features, a lot of users also complained about significant struggles with both the MySQL 5.7->8.0 and 8.0->8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to just keep MySQL barely alive, and put all new relevant features (e.g. vector search) into Heatwave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle isn't investing in MySQL, Percona's Peter Zaitsev wrote Is Oracle Finally Killing MySQL? in June 2024. At this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that is likely to accelerate in 2026.
MySQL dropping significantly in DB-Engines ranking
In September 2025 news reported that Oracle was reducing its workforce and that the MySQL staff was getting heavily reduced. Obviously this does not bode well for MySQL's future, and Peter Zaitsev posted stats already in November showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty
Some say they don't care if MySQL is truly open source or not, or that they don't care if it has a future in coming years, as long as it still works now. I am afraid people thinking so are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences, and not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and only the ones with the most compelling evidence winning. Not being open means more obscurity, more risk and more of a "just trust us, bro" attitude. This open vs. closed contrast is very visible, for example, in how Oracle handles security issues. We can see that in 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8. There were 117 CVEs that only affected MySQL and not MariaDB in 2025. I haven't read them all, but typically the CVEs hardly contain any real details. As an example, the most recent one, CVE-2025-53067, states: "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether the original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this is in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public.
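The CVE counts quoted above also pin down the overlap between the two code bases; a quick sketch of the arithmetic, using only the numbers from the text:

```python
# CVE overlap arithmetic for 2025, using the counts quoted above.
mysql_total = 123     # CVEs published against MySQL
mariadb_total = 8     # CVEs published against MariaDB
mysql_only = 117      # CVEs affecting MySQL but not MariaDB

shared = mysql_total - mysql_only      # issues hitting both code bases
mariadb_only = mariadb_total - shared  # issues unique to MariaDB
print(f"shared: {shared}, MariaDB-only: {mariadb_only}")
```

So only 6 of the 123 MySQL CVEs also touched MariaDB, despite the shared ancestry.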
There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL as software, documentation and website is pushing users to stop using the open source version and move to the closed MySQL versions, and in particular to Heatwave, which is not only closed-source but also results in Oracle fully controlling customers' database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking the last remaining MySQL customers hard, who are forced to pay more and more for getting less and less.

There are options and migrating is easy, just do it
A large part of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows what the exact market shares look like. There are however some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%. For anyone running a classic LAMP stack application such as WordPress, Drupal, Mediawiki, Nextcloud, or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it were MySQL. For those running custom applications who have the freedom to change how and what database is used, there are tens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general database. If your application was built from the start for MySQL, switching to PostgreSQL may however require a lot of work, and the MySQL/MariaDB architecture and the InnoDB storage engine may still offer an edge in e.g. online services where high performance, scalability and solid replication features are of the highest priority. For a quick and easy migration MariaDB is probably the best option. Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements done by Percona.
However, precisely because it is basically just a customized version of the MySQL Server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL, but strive to be MySQL-compatible. Thus most apps built for MySQL can simply switch to using them without needing SQL statements to be rewritten. One such database is TiDB, which has been designed from scratch specifically for highly scalable and large systems, and is so good that even Amazon's latest database solution DSQL was built borrowing many ideas from TiDB. However, TiDB only really shines with larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can simply be installed by running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.
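After switching, it is easy to confirm which server an application is actually talking to: MariaDB embeds its own name in the version string returned by SELECT VERSION(), while MySQL reports a bare version number. A minimal sketch; the version strings below are illustrative examples, not taken from a live server, and a real check would run SELECT VERSION() through your connector of choice:

```python
def server_flavour(version_string: str) -> str:
    """Classify a SELECT VERSION() result as MariaDB or MySQL.

    MariaDB embeds its name in the version string; MySQL does not.
    """
    return "MariaDB" if "mariadb" in version_string.lower() else "MySQL"

# Illustrative version strings, not from a live server.
print(server_flavour("10.11.6-MariaDB-deb12"))  # MariaDB
print(server_flavour("8.4.3"))                  # MySQL
```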

10 January 2026

Dirk Eddelbuettel: Rcpp 1.1.1 on CRAN: Many Improvements in Semi-Annual Update

rcpp logo
Team Rcpp is thrilled to share that an exciting new version 1.1.1 of Rcpp is now on CRAN (and also uploaded to Debian and already built for r2u). Having switched to C++11 as the minimum standard in the previous 1.1.0 release, this version takes full advantage of it and removes a lot of conditional code catering to older standards that no longer need to be supported. Consequently, the source tarball shrinks by 39% from 3.11 mb to 1.88 mb. That is a big deal. (Size peaked with Rcpp 1.0.12 two years ago at 3.43 mb; relative to its size we are down 45%!!) Removing unused code also makes maintenance easier, and quickens both compilation and installation in general. This release continues as usual with the six-month January-July cycle started with release 1.0.5 in July 2020. Interim snapshots are always available via the r-universe page and repo. We continue to strongly encourage the use of these development releases and their testing; we tend to run our systems with them too. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3020 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.1% of all packages depend (directly) on Rcpp, and 60.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 109.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2151 (JSS, 2011) and 405 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 715. This time, I am not attempting to summarize the different changes. The full list follows below and details all these changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!
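The headline percentages above are easy to reproduce from the sizes and counts quoted in the text (a back-of-envelope sketch; the exact 39% presumably comes from byte-accurate tarball sizes):

```python
# Reproduce the headline figures from the sizes and counts quoted above.
old_mb, new_mb, peak_mb = 3.11, 1.88, 3.43
shrink = (old_mb - new_mb) / old_mb * 100     # vs the 1.1.0 release
vs_peak = (peak_mb - new_mb) / peak_mb * 100  # vs the 1.0.12 peak
print(f"shrink: {shrink:.1f}%, vs peak: {vs_peak:.1f}%")

# 3020 reverse dependencies at 13.1% of CRAN implies roughly this many
# CRAN packages in total:
print(round(3020 / 0.131))
```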

Changes in Rcpp release version 1.1.1 (2026-01-08)
  • Changes in Rcpp API:
    • An unused old R function for a compiler version check has been removed after checking no known package uses it (Dirk in #1395)
    • A narrowing warning is avoided via a cast (Dirk in #1398)
    • Demangling checks have been simplified (Iñaki in #1401 addressing #1400)
    • The treatment of signed zeros is now improved in the Sugar code (Iñaki in #1404)
    • Preparations for phasing out use of Rf_error have been made (Iñaki in #1407)
    • The long-deprecated function loadRcppModules() has been removed (Dirk in #1416 closing #1415)
    • Some non-API includes from R were refactored to accommodate R-devel changes (Iñaki in #1418 addressing #1417)
    • An accessor to Rf_rnbeta has been removed (Dirk in #1419 also addressing #1420)
    • Code accessing non-API Rf_findVarInFrame now uses R_getVarEx (Dirk in #1423 fixing #1421)
    • Code conditional on the R version now expects at least R 3.5.0; older code has been removed (Dirk in #1426 fixing #1425)
    • The non-API ATTRIB entry point to the R API is no longer used (Dirk in #1430 addressing #1429)
    • The unwind-protect mechanism is now used unconditionally (Dirk in #1437 closing #1436)
  • Changes in Rcpp Attributes:
    • The OpenMP plugin has been generalized for different macOS compiler installations (Kevin in #1414)
  • Changes in Rcpp Documentation:
    • Vignettes are now processed via a new "asis" processor adopted from R.rsp (Dirk in #1394 fixing #1393)
    • R is now cited via its DOI (Dirk)
    • A (very) stale help page has been removed (Dirk in #1428 fixing #1427)
    • The main README.md was updated emphasizing r-universe in favor of the local drat repos (Dirk in #1431)
  • Changes in Rcpp Deployment:
    • A temporary change in R-devel concerning NA part in complex variables was accommodated, and then reverted (Dirk in #1399 fixing #1397)
    • The macOS CI runners now use macos-14 (Dirk in #1405)
    • A message is shown if R.h is included before Rcpp headers as this can lead to errors (Dirk in #1411 closing #1410)
    • Old helper functions use message() to signal they are not used, deprecation and removal to follow (Dirk in #1413 closing #1412)
    • Three tests were being silenced following #1413 (Dirk in #1422)
    • The heuristic whether to run all available tests was refined (Dirk in #1434 addressing #1433)
    • Coverage has been tweaked via additional #nocov tags (Dirk in #1435)
  • Non-release Changes:
    • Two interim non-releases 1.1.0.8.1 and .2 were made in order to unblock CRAN due to changes in R-devel rather than Rcpp

Thanks to my CRANberries, you can also look at a diff to the previous interim release along with pre-releases 1.1.0.8, 1.1.0.8.1 and 1.1.0.8.2 that were needed because R-devel all of a sudden decided to move fast and break things. Not our doing. Questions, comments etc. should go to the GitHub discussion section or the rcpp-devel list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well. Both sections can be searched as well.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

7 January 2026

Ravi Dwivedi: Why my Riseup account got suspended (and reinstated)

Disclaimer: The goal of this post is not to attack Riseup. In fact, I love Riseup and support their work.

Story
Riseup is an email provider, known for its privacy-friendly email service. The service requires an invite from an existing Riseup email user to get an account. I created my account on Riseup in the year 2020, of course with the help of a friend who invited me. Since then, I have used the email address only occasionally, although it is logged into my Thunderbird all the time. Fast-forward to the 4th of January 2026, when Thunderbird suddenly told me that it could not log in to my Riseup account. When I tried logging in using their webmail, it said "invalid password". Finally, I tried logging in to my account on their website, and was told that
Log in for that account is temporary suspended while we perform maintenance. Please try again later.
At this point, I suspected that the Riseup service itself was facing some issues. I asked a friend who had an account there if the service was up, and they said that it was. The issue seemed to be specific to my account. I contacted Riseup support and informed them of the issue. They responded the next day (the 5th of January) saying:
The my-username-redacted account was found inviting another account that violated our terms of use. As a security measure we suspend all related accounts to ToS violations.
(Before we continue, I would like to take a moment and reflect upon how nice it was to receive a response from a human rather than an AI bot, a trend that is unfortunately becoming the norm nowadays.) I didn't know who violated their ToS, so I asked which account violated their terms. Riseup told me:
username-redacted@riseup.net attempted to create aliases that could be abused to impersonate riseup itself.
I asked a friend whom I invited a month before the incident, and they confirmed that the username belonged to them. When I asked what they did, they told me they tried creating aliases such as floatup and risedown. I also asked Riseup which aliases violated their terms, but their support didn't answer this. I explained to the Riseup support that the impersonation wasn't intentional, that the user hadn't sent any emails, and that I had been a user for more than 5 years and had donated to them in the past. Furthermore, I suggested that they should block the creation of such aliases if they think the aliases violate their terms, like how email providers typically don't allow users to create admin@ or abuse@ email addresses. After I explained myself, Riseup reinstated my account. Update on the 10th of January 2026: My friend told me that the alias that violated Riseup's terms was cloudadmin and his account was reinstated on the 7th of January.
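The suggestion above, rejecting confusable aliases at creation time instead of suspending accounts after the fact, could be sketched like this. The reserved list and the similarity threshold are made-up illustrations, not Riseup's actual policy:

```python
import difflib

# Hypothetical reserved names and threshold -- illustrative values only,
# not Riseup's actual policy.
RESERVED = {"riseup", "admin", "abuse", "postmaster", "support"}
THRESHOLD = 0.75

def alias_allowed(alias: str) -> bool:
    """Reject aliases identical or confusably similar to a reserved name."""
    alias = alias.lower()
    if alias in RESERVED:
        return False
    return all(difflib.SequenceMatcher(None, alias, name).ratio() < THRESHOLD
               for name in RESERVED)

print(alias_allowed("riseupp"))   # too close to "riseup": rejected
print(alias_allowed("kittens"))   # unrelated: allowed
```

A real implementation would also need to handle homoglyphs and substrings, but even a simple check like this at alias-creation time would avoid the need to suspend accounts retroactively.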

Issues with suspension
I have the following issues regarding the way the suspension took place:
  • There was no way of challenging the suspension before the action was taken
  • The action taken against me was disproportionate. Remember that I didn't violate any terms; it was allegedly done by a user I invited. They could just block the aliases while continuing the discussion in parallel.
  • I was locked out of my account with no way of saving my emails and without any chance to migrate. What if that email address were being used for important things such as bank access or train tickets? I know people who use Riseup email for such purposes.
  • The violation wasn't even proven. I wasn't told which alias violated the terms and how it could be used to impersonate Riseup itself.
When I brought up the issue of being locked out of my account without a way of downloading my emails or migrating my account, Riseup support responded by saying:
You must understand that we react [by] protecting our service, and therefore we cannot provide notice messages on the affected accounts. We need to act preventing any potential damage to the service that might affect the rest of the users, and that measure is not excessive (think on how abusers/spammers/scammers/etc could trick us and attempt any action before their account is suspended).
This didn't address my concerns, so let's move on to the next section.

Room for improvement
Here's how I think Riseup's ban policy could be changed while still protecting against spammers and other bad actors:
  • Even if Riseup can't provide notice to blocked accounts, perhaps they can scale back limitations on the inviting account, which wasn't even involved; for example, by temporarily disabling invites from that account until the issue is resolved.
  • In this case, the person didn't impersonate Riseup, so Riseup could have just blocked the aliases and let the user know about it, rather than banning the account outright.
  • Riseup should give blocked users access to their existing emails so they have a chance to migrate them to a different provider. (Riseup could disable SMTP and maybe incoming emails but keep IMAP access open.) I know people who use Riseup for important things such as bank access or train tickets, and a sudden block like this is not a good idea.
  • Riseup should factor in the account profile when making these decisions. I had an account on their service for 5 years and had only created around 5 invites. (I don't remember the exact number and there's no way to retrieve this information.) This is not exactly an attacker profile. I feel long-term users like this deserve an explanation for a ban.
I understand Riseup is a community-run service and does not have the unlimited resources that big corporations or commercial email providers do. Their actions felt disproportionate to me, though I don't know what issues they face behind the scenes. I hope someone can help to improve the policies, or at least shed light on why they are the way they are. Signing off now. Meet you in the next one! Thanks to Badri and Contrapunctus for reviewing this blog post.

5 January 2026

Colin Watson: Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors.
Python packaging
I upgraded these packages to new upstream versions. Python 3.14 is now a supported version in unstable, and we're working to get that into testing. As usual this is a pretty arduous effort because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this, including fixes for pytest 9. I filed "lintian: Report Python egg-info files/directories" to help us track the migration to pybuild-plugin-pyproject. I did some work on dh-python ("Normalize names in pydist lookups" and "pyproject plugin: Support headers", the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix). I fixed or helped to fix several other build/test failures, and other bugs.
Other bits and pieces
Code reviews

4 January 2026

Valhalla's Things: And now for something completely different

Posted on January 4, 2026
Tags: topic:walking
Warning
mention of bodies being bodies and minds being minds, and not in the perfectly working sense.
One side of Porto Ceresio along the lake: there is a small strip of houses, and the hills behind them are covered in woods. A boat is parked around the middle of the picture.
A lot of the youtube channels I follow tend to involve somebody making things, so of course one of the videos my SO and I watched a few days ago was about walking around San Francisco Bay, and that recalled my desire to go places by foot. Now, for health-related reasons doing it properly would be problematic, and thus I've never trained for that, but during this Christmas holiday-heavy time I suggested to my very patient SO the next best thing: instead of our usual 1.5 hour uphill walk in the woods, a 2-hours-and-a-bit mostly flat walk on paved streets, plus some train, to a nearby town: Porto Ceresio, on the Italian side of Lake Lugano. I started to prepare for it on the day before, by deciding it was a good time to upgrade my PinePhone, and wait, I'm still on Trixie? I could try Forky, what could possibly go wrong? And well, the phone was no longer able to boot, and reinstalling from the latest weekly gave a system where the on-screen keyboard didn't appear, and I didn't want to bother finding out why, so I re-installed another time from the 13.0 image, and between that, and distracting myself with widelands while waiting for the downloads and uploads and reboots etc., well, all of the afternoon and the best part of the evening disappeared. So, in a hurry, between the evening and the next morning I prepared a nice healthy lunch, full of all the important nutrients such as sugar, salt, mercury and arsenic: tuna (mercury) soboro (sugar and salt) on rice. Since I was in a hurry I didn't prepare any vegetables, but used pickles (more salt) and shio kombu (arsenic and various heavy metals, sugar and salt). Plus a green tea mochi for dessert, in case we felt low on sugar.
:D
Then on the day of the walk we woke up a bit later than usual, and then my body decided it was a good day for my belly to not exactly hurt, but not not-hurt either, and there I took an executive decision to wear a corset, because if something feels like it wants to burst open, wrapping it in a steel-reinforced cage will make it stop. (I'm not joking. It does. At least in those specific circumstances.) This was followed by hurrying through the things I had to do before leaving the house, having a brief anxiety attack and feeling feverish (it wasn't fever), and finally being able to leave the house just half an hour late.
A stream running on rocks with the woods to both sides.
And then, 10 minutes after we had left, realizing that I had written down the password for the train website, since it was no longer saved on the phone, but I had forgotten the bit of paper at home. We could have gone back to take it, but decided not to bother, as we could also hopefully buy paper-ish tickets at the train station (we could). Later on, I also realized I had forgotten my GPS tracker, so I have no record of where we went exactly (but it's not hard to recognize it on a map) nor of what the temperature was. It's a shame, but by that point it was way too late to go back. Anyway, that probably was when Murphy felt we had paid our respects, and from then on everything went lovingly well! Routing had been done on the OpenStreetMap website, with OSRM, and it looked pretty easy to follow, but we also had access to an Android phone, so we used OSMAnd to check that we were still on track. It tried to lead us to the Statale (i.e. the most important and most trafficked road) a few times, but we ignored it, and after a few turns and a few changes of the precise destination point we managed to get it to cooperate.
At one point a helpful person asked us if we needed help, having seen us looking at the phone, and gave us directions for the next fork (that way to Cuasso al Piano, that way to Porto Ceresio), but it was pretty easy, since the way was clearly marked also for cars. Then we started to notice red and white markings on poles and other places, and at the next fork there was a signpost for hiking routes with our destination, and we decided to follow it instead of the sign for cars. I knew that from our starting point to our destination there was also a hiking route, uphill both ways :) , through the hills, about 5 or 6 hours instead of two, but the sign was pointing downhill and we were past the point where we would have expected too long of a detour. A wide and flat unpaved track passing through a flat grassy area with trees to the sides and rolling hills in the background. And indeed, after a short while the paved road ended, but the path continued on a wide and flat track, and was a welcome detour through what looked like water works to prevent flood damage from a stream. In a warmer season, with longer grass and ticks, the fact that I was wearing a long skirt might have been an issue, but in winter it was just fine. And soon afterwards, we were in Porto Ceresio. I think I have been there as a child, but I had no memory of it. On the other hand, it was about as I expected: a tiny town with a lakeside street full of houses built in the early 1900s, when the area was an important tourism destination, with older buildings a bit higher up on the hills (because streams in this area will flood). And of course, getting there by foot rather than by train, we also saw the parts where real people live (but not work: that's cross-border commuters country). Dried winter grass with two strips of frost, exactly under the shade of a fence.
Soon after arriving in Porto Ceresio we stopped to eat our lunch on a bench at the lakeside; up to then we had been pretty comfortable in the clothing we had decided to wear: there was plenty of frost on the ground, in the shade, but the sun was warm and the temperatures were cleanly above freezing. Removing the gloves to eat, however, resulted in quite cold hands, and we didn't want to stay still for longer than strictly necessary. So we spent another hour and a bit walking around Porto Ceresio like proper tourists and taking pictures. There was an exhibition of nativity scenes all around the streets, but to get a map one had to go to either Facebook or Instagram, or wait for the opening hours of an office that were later than the train we planned to get to go back home, so we only saw maybe half of them as we walked around: some were quite nice, some were nativity scenes, and some showed that the school children must have had some fun making them. Three gnome-adjacent creatures made of branches of evergreen trees, with a pointy hat made of moss, big white moustaches and red Christmas tree balls at the end of the hat and as a nose. They are wrapped in LED strings, and the lake can be seen in the background. Another Christmas decoration was the groups of creatures made of evergreen branches that dotted the sidewalks around the lake: I took pictures of the first couple of groups, and then, after seeing a few more, something clicked in my brain, and I noticed that they were wrapped in green LED strings, like chains, and they had a red ball that was supposed to be the nose, but could just as well be around the mouth area, and suddenly I felt the need to play a certain chord to release them, but sadly I didn't have a weaponized guitar on me :D A bench in the shape of an open book, half of the pages folded in a reversed U to make the seat and half of the pages standing straight to form the backrest. It has the title page and beginning of the Constitution of the Italian Republic.
Another thing that we noticed were some benches in the shape of books, with book quotations on them; most were on reading-related topics, but the one with the Constitution felt worth taking a picture of, especially these days. And then, our train was waiting at the station, and we had to go back home for the afternoon; it was a nice outing, if a bit brief, and we agreed to do it again, possibly with a bit of a detour to make the walk a bit longer. And then maybe one day we'll train to do the whole 5-6 hour thing through the hills.

3 January 2026

Louis-Philippe Véronneau: 2025 - A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine1 and I enjoy retracing the year's various events through albums I listened to and concerts I went to.

Albums

In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them, and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite. This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them.

Concerts

In 2025, I went to the following 25 (!!) concerts: Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere. As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2026!

  1. see the 2022, 2023 and 2024 entries

31 December 2025

Junichi Uekawa: Happy new year.

Happy new year.

Bits from Debian: DebConf26 dates announced

DebConf26 by Romina Molina. As announced in Brest, France, in July, the Debian Conference is heading to Santa Fe, Argentina. The DebConf26 team and the local organizers team in Argentina are excited to announce the DebConf26 dates, for the 27th edition of the Debian Developers and Contributors Conference: DebCamp, the annual hacking session, will run from Monday July 13th to Sunday July 19th 2026, followed by DebConf from Monday July 20th to Saturday July 25th 2026. For all those who wish to meet us in Santa Fe, the next step will be the opening of registration on January 26, 2026. The call for proposals period for anyone wishing to submit a conference or event proposal will be launched on the same day. DebConf26 is looking for sponsors; if you are interested or think you know of others who would be willing to help, please have a look at our sponsorship page and get in touch with sponsors@debconf.org.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/. For further information, please visit the DebConf26 web page at https://debconf26.debconf.org/ or send mail to press@debian.org. DebConf26 is made possible by Proxmox and others.

30 December 2025

Utkarsh Gupta: FOSS Activities in December 2025

Here's my brief monthly update about the activities I've done in the FOSS world.

Debian
Whilst I didn't get a chance to do much, here are a few things that I worked on:
  • Prepared security update for wordpress for trixie and bookworm.
  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR of what I did:
  • Successfully released Resolute Snapshot 2!
    • This one was also done without the ISO tracker and cdimage access.
    • I think this one went rather smoothly. Let's see what we're able to do for snapshot 3.
  • Worked on removing GPG keys from the cdimage instance. That took a while, whew!
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages for Stable Release Updates.
  • With that, we've entered the EOY break. :)
    • I was on vacation for the majority of this month anyway. ;)

Debian (E)LTS
This month I have worked 72 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates
  • ruby-git: Multiple vulnerabilities leading to command line injection and improper path escaping.
  • ruby-sidekiq: Multiple vulnerabilities leading to Cross-site Scripting (XSS) and Denial of Service in Web UI.
  • python-apt: Vulnerability leading to crash via invalid nullptr dereference in TagSection.keys().
    • [LTS]: Fixed CVE-2025-6966 via 2.2.1.1 for bullseye. This has been released as DLA 4408-1.
    • [ELTS]: Fixed CVE-2025-6966 via 1.8.4.4 for buster and 1.4.4 for stretch. This has been released as ELA 1596-1.
    • All of this was coordinated between the Security team and Julian Andres Klode. Julian will take care of the stable uploads.
  • node-url-parse: Vulnerability allowing authorization bypass through specially crafted URL with empty userinfo and no host.
  • wordpress: Multiple vulnerabilities in WordPress core, leading to Sent Data & Cross-site Scripting.
  • usbmuxd: Privilege escalation vulnerability via path traversal in SavePairRecord command.
    • [LTS]: Fixed CVE-2025-66004 via 1.1.1-2+deb11u1 for bullseye. This has been released as DLA 4417-1.
    • [ELTS]: Fixed CVE-2025-66004 via 1.1.1~git20181007.f838cf6-1+deb10u1 for buster and 1.1.0-2+deb9u1 for stretch. This has been released as ELA 1599-1.
    • All of this was coordinated between the Security team and Yves-Alexis Perez. Yves will take care of the stable uploads.
  • gst-plugins-good1.0: Multiple vulnerabilities in isomp4 plugin leading to potential out-of-bounds reads and information disclosure.
  • postgresql-13: Multiple vulnerabilities including unauthorized schema statistics creation and integer overflow in libpq allocation calculations.
  • gst-plugins-base1.0: Multiple vulnerabilities in SubRip subtitle parsing leading to potential crashes and buffer issues.

Work in Progress
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.
  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
  • adminer: Affected by CVE-2023-45195 and CVE-2023-45196, leading to SSRF and DoS, respectively.
  • u-boot: Affected by CVE-2025-24857, where a boot code access control flaw in U-Boot allows arbitrary code execution via physical access.
    • [ELTS]: As it only affects the version in stretch, I've started the work to find the fixing commits and prepare a backport. Not much progress there; I'll roll it over to January.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression, and we've been doing some exchanges. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771.

Other Activities
  • Frontdesk from 01-12-2025 to 07-12-2025.
    • Auto-EOL'd a bunch of packages.
    • Marked CVE-2025-12084/python2.7 as end-of-life for bullseye, buster, and stretch.
    • Marked CVE-2025-12084/jython as end-of-life for bullseye.
    • Marked CVE-2025-13992/chromium as end-of-life for bullseye.
    • Marked apache2 CVEs as postponed for bullseye, buster, and stretch.
    • Marked CVE-2025-13654/duc as postponed for bullseye and buster.
    • Marked CVE-2025-32900/kdeconnect as ignored for bullseye.
    • Marked CVE-2025-12084/pypy3 as postponed for bullseye.
    • Marked CVE-2025-14104/util-linux as postponed for bullseye, buster, and stretch.
    • Marked several CVEs for fastdds as postponed for bullseye.
    • Marked several CVEs for pytorch as postponed for bullseye.
    • Marked CVE-2025-2486/edk2 as postponed for bullseye.
    • Marked CVE-2025-6172{7,9}/golang-1.15 as postponed for bullseye.
    • Marked CVE-2025-65637/golang-logrus as postponed for bullseye.
    • Marked CVE-2025-12385/qtdeclarative-opensource-src{,gles} as postponed for bullseye, buster, and stretch.
    • Marked TEMP-0000000-D08402/rust-maxminddb as postponed for bullseye.
    • Added the following packages to {d,e}la-needed.txt:
      • liblivemedia, sogo.
    • During my triage, I had to make the bin/elts-eol script robust to determine the lts_admin repository - did a back and forth with Emilio about this on the list.
    • I sent a gentle reminder to the LTS team about the issues fixed in bullseye but not in bookworm via mailing list: https://lists.debian.org/debian-lts/2025/12/msg00013.html.
  • I claimed php-horde-css-parser to work on CVE-2020-13756 for buster and did almost all the work only to realize that the patch already existed in buster and the changelog confirmed that it was intentionally fixed.
    • After speaking with Andreas Henriksson, we figured that the CVE ID was missed when the ELA was generated and so I fixed that via 87afaaf19ce56123bc9508d9c6cd5360b18114ef and 5621431e84818b4e650ffdce4c456daec0ee4d51 in the ELTS security tracker to reflect the situation.
  • Participated in a thread which I started last month around using Salsa CI for E/LTS packages and whether we plan to sunset it in favor of using Debusine. The plan for now is to keep it around, as it's still beneficial and Debusine is still in its early phase.
  • Did a lot of back and forth with Helmut about debusine uploads on #debian-elts.
    • While debugging a failure in dcut uploads, I ran into an SSH compatibility issue on deb-master.freexian.com that could be fixed on the server-side. I shared all my findings with Freexian's sysadmin team.
    • A minimal fix on the server side would be one of:

      PubkeyAcceptedAlgorithms -ssh-dss

      or explicitly restricting to modern algorithms, e.g.:

      PubkeyAcceptedAlgorithms ssh-ed25519,ecdsa-sha2-nistp256,rsa-sha2-512,rsa-sha2-256

  • Jelly on #debian-lts reported that all my DLA mails had broken GMail's DKIM signature. So I set up sending replies from @debian.org and that seems to have fixed that! \o/
  • [LTS] Attended a rather short monthly LTS meeting on Jitsi. Summary here.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

Until next time.
:wq for today.

28 December 2025

Jonathan Dowland: Our study, 2025

We're currently thinking of renovating our study/home office. I'll likely write more about that project. Embarking on it reminded me that I'd taken a photo of the state of it nearly a year ago and forgot to post it, so here it is.
Home workspace, January 2025
When I took that pic last January, it had been three years since the last one, and the major difference was a reduction in clutter. I've added a lava lamp (charity shop find) and a Rob Sheridan print. We got rid of the POÄNG chair (originally bought for breastfeeding) so we currently have no alternate seating besides the desk chair. As much as I love my vintage mahogany writing desk, our current thinking is it's likely to go. I'm exploring whether we could fit in two smaller desks: one main one for the computer, and another workbench for play: the synthesiser, Amiga, crafting and 3D printing projects, etc.

19 December 2025

Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

In January 2024 I wrote about the insanity of the Magnificent Seven dominating the MSCI World Index, and I wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely closer. As a software professional, I decided to analyze whether using stop-loss orders could reliably automate avoiding deep drawdowns. As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years. Hence, if you intend to retain the value of your hard-earned savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.

What is a trailing stop-loss order?

What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin. For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, at 99 euros. Thus, no matter what happens, you only lose 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price fell down to as a result of a large crash. In the simple case above, it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don't want a margin that is too small, otherwise you exit too early. Conversely, having a large margin may result in too large a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price, and shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached?
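The mechanics described above can be sketched in a few lines of Python. The helper below is hypothetical (it is not from the post) and assumes the order fills exactly at the stop price, which, as noted, may not hold in a fast crash:

```python
def simulate_trailing_stop(prices, margin=0.10):
    """Return (exit_price, index) where a trailing stop-loss with the
    given margin would sell, or (None, None) if it never triggers."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)        # the stop level trails the running peak
        stop = peak * (1 - margin)
        if price <= stop:
            return stop, i             # assume the order fills at the stop price
    return None, None

# The post's example: bought at 100, peak at 150, 10% margin.
# Once the price falls to 135 or below, the position is sold at 135.
exit_price, _ = simulate_trailing_stop([100, 110, 150, 140, 133, 90], margin=0.10)
# exit_price == 135.0
```

In a real broker the stop trails tick by tick rather than on daily closes, so this daily-close approximation slightly understates how early the order would fire.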

Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts

I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock. First you need to have data. YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the data points feel effortless. The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):
python
CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
    dataframe = pandas.read_parquet(cache_file)
    print(f"Loaded price data from cache: {cache_file}")
else:
    dataframe = yfinance.download(
        TICKER,
        start=START_DATE,
        end=end_date,
        progress=False,
        auto_adjust=False
    )

    dataframe.to_parquet(cache_file)
    print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")
The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:
python
print("First 5 rows of the raw data:")
print(dataframe.head())
print("Last 5 rows of the raw data:")
print(dataframe.tail())
Example output:
First 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
Last 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818
Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the Relative Strength Index (RSI). To add a new column RSI with a value for every row based on the price from that row, only one line of code is needed, without custom loops:
python
df["RSI"] = compute_rsi(df["price"], period=14)
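The post's compute_rsi is a custom function whose body is not shown. Purely as an illustration of what such a function might look like, here is one common rolling-mean formulation of RSI; this is my assumption, not the author's actual code (Wilder's original definition uses exponential smoothing instead of a plain rolling mean):

```python
import pandas as pd

def compute_rsi(prices: pd.Series, period: int = 14) -> pd.Series:
    """RSI as 100 - 100 / (1 + RS), where RS is the ratio of the average
    gain to the average loss over the last `period` observations.
    Sketch only: a simple rolling mean stands in for Wilder's smoothing."""
    delta = prices.diff()
    gains = delta.clip(lower=0).rolling(period).mean()
    losses = (-delta.clip(upper=0)).rolling(period).mean()
    rs = gains / losses
    return 100 - 100 / (1 + rs)
```

On a monotonically rising price series the average loss is zero, so the RSI saturates at 100, which is a quick sanity check for any implementation.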
After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:
python
baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]

baseline_json = json.dumps(baseline_series)
with open("template.html", encoding="utf-8") as f:
    template = jinja2.Template(f.read())
rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    ...
    baseline_json=baseline_json,
    ...
)

with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")
In the HTML template, the marker variables in Jinja syntax get replaced with the actual JSON:
html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>{{ title }}</title>
  ...
</head>
<body>
  <h1>{{ heading }}</h1>
  <div id="chart"></div>
  <script>
  // Ensure the DOM is ready before we initialise the chart
  document.addEventListener('DOMContentLoaded', () => {
    // Parse the JSON data passed from Python
    const baselineData = {{ baseline_json | safe }};
    const strategyData = {{ strategy_json | safe }};
    const markersData = {{ markers_json | safe }};

    // Create the chart
    const chart = LightweightCharts.createChart(document.getElementById('chart'), {
      width: document.getElementById('chart').clientWidth,
      height: 500,
      layout: {
        background: { color: "#222" },
        textColor: "#ccc"
      },
      grid: {
        vertLines: { color: "#555" },
        horzLines: { color: "#555" }
      }
    });

    // Add baseline series
    const baselineSeries = chart.addLineSeries({
      title: '{{ baseline_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      priceLineWidth: 1
    });
    baselineSeries.setData(baselineData);

    baselineSeries.priceScale().applyOptions({
      entireTextOnly: true
    });

    // Add strategy series
    const strategySeries = chart.addLineSeries({
      title: '{{ strategy_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      color: '#FF6D00'
    });
    strategySeries.setData(strategyData);

    // Add buy/sell markers to the strategy series
    strategySeries.setMarkers(markersData);

    // Fit the chart to show the full data range (full zoom)
    chart.timeScale().fitContent();
  });
  </script>
</body>
</html>
There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and probably have too many features and complexity compared to what I needed for doing this simple test. The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline Buy and hold scenario is shown as the blue line and it fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way. Backtest run example
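The backtest loop itself is not shown in the post. As a sketch of what such a loop might look like over a list of daily closes, here is a hypothetical minimal version; the re-entry rule (buy back after a 5% rebound off the post-sale low) is my own arbitrary choice for illustration, not the author's strategy, and fees, slippage and taxes are ignored:

```python
def backtest_trailing_stop(closes, margin=0.10, reentry=0.05, cash=1000.0):
    """Minimal backtest sketch: hold shares with a trailing stop `margin`;
    after a sale, buy back once the price has risen `reentry` above the
    post-sale low. Sells and buys fill at that day's close.
    Returns the final portfolio value."""
    shares = cash / closes[0]   # start fully invested
    cash = 0.0
    peak = closes[0]
    low = None                  # tracked while out of the market
    for price in closes[1:]:
        if shares:  # in the market: trail the stop below the running peak
            peak = max(peak, price)
            if price <= peak * (1 - margin):
                cash, shares = shares * price, 0.0
                low = price
        else:       # out of the market: wait for a rebound off the low
            low = min(low, price)
            if price >= low * (1 + reentry):
                shares, cash = cash / price, 0.0
                peak = price
    return cash + shares * closes[-1]
```

Comparing the result against buy-and-hold is then just `cash / closes[0] * closes[-1]` on the same series, which is essentially what the blue baseline in the chart shows.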

Results

I experimented with multiple strategies and tested them with various parameters, but I don't think I found a strategy that was consistently and clearly better than just buy-and-hold. It basically boils down to the fact that I was not able to find any way to calculate when the crash has bottomed based on historical data. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started, but bought back after it recovered at a slightly higher price. In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy was repeating the pattern of selling 100 stocks at a 6% discount, then being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount, and only being able to buy back maybe 90 shares after recovery, and so forth, never catching up to the buy-and-hold. The strategy worked better in large market crashes as they tended to last longer, and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks; thus, the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky incident, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price. Also, note that the simulation assumes that the trade itself is too small to affect the price formation.
We should keep in mind that in reality, if many people have stop-loss orders in place, a large price drop would trigger all of them, creating a flood of sell orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don't need to fear that too many people will be using them.

Conclusion

Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of insurance policy to considerably decrease the chances of losing big while increasing the chances of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I'd rather miss out on an initial 50% loss and an overall 3% gain on recovery than have to sit through weeks or months with a 50% loss before the price recovers to prior levels. Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses in gains will compound into big losses overall. Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently. Do you have a favorite open source investment tool, or are you aware of any strategy that actually works? Comment below!

12 December 2025

Freexian Collaborators: Debian Contributions: Updates about DebConf Video Team Sprint, rebootstrap, SBOM tooling in Debian and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-11 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Video Team Sprint The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure, when there isn't a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report.

rebootstrap, by Helmut Grohne A number of jobs were stuck in architecture-specific failures. gcc-15 and dpkg still occasionally disagree about whether PIE is enabled, and big endian mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and gcc-15 required fixes and rebasing of pending patches. Earlier, Loongson used rebootstrap to create the initial package set for loong64, and Miao Wang has now submitted their changes. As a result, there is now initial support for suites other than unstable and for use with derivatives.

Building the support for Software Bill Of Materials tooling in Debian, by Santiago Ruano Rincón Vendors of Debian-based products may (and should) be paying attention to the evolving regulations of different jurisdictions (such as the CRA, or updates to CISA's Minimum Elements for a Software Bill of Materials) that require making a Software Bill of Materials (SBOM) of their products available. It is important, then, to have tools in Debian that make it easier to produce such SBOMs. In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX python library (python-spdx-tools), and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs and other data and security references. SPDX and CycloneDX (whose python library python3-cyclonedx-lib was packaged by prior efforts this year) encompass the two main SBOM standards available today.

Miscellaneous contributions
  • Carles improved po-debconf-manager: added automatic checking of bug report status via python-debianbts; changed some command-line option names and output based on user feedback; finished refactoring user interaction to rich; the codebase is now flake8-compliant; added type safety with mypy.
  • Carles, using po-debconf-manager, created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
  • Carles planned a second version of the tool that detects packages that Recommend or Suggest packages which are not in Debian. He is taking ideas from dumat.
  • Carles submitted a pull request to python-unidiff2 (adapted from the original pull request to python-unidiff). He also started preparing a qnetload update.
  • Stefano did miscellaneous python package updates: mkdocs-macros-plugin, python-confuse, python-pip, python-mitogen.
  • Stefano reviewed a beets upload for a new maintainer who is taking it over.
  • Stefano handled some debian.net infrastructure requests.
  • Stefano updated debian.social infrastructure for the trixie point release.
  • The update broke jitsi.debian.social; Stefano put some time into debugging it and eventually enlisted upstream's assistance, and they solved the problem!
  • Stefano worked on some patches for Python that help Debian:
    • GH-139914: The main HP PA-RISC support patch for 3.14.
    • GH-141930: We observed an unhelpful error when failing to write a .pyc file during package installation. We may have fixed the problem, and at least made the error better.
    • GH-141011: Ignore missing ifunc support on HP PA-RISC.
  • Stefano spun up a website for hamburg2026.mini.debconf.org.
  • Raphaël reviewed a merge request updating tracker.debian.org to rely on bootstrap version 5.
  • Emilio coordinated various transitions.
  • Helmut sent patches for 26 cross build failures.
  • Helmut officially handed over the cleanup of the /usr-move transition.
  • Helmut monitored the transition moving libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
  • Helmut updated the Build-Profiles patch for debian-policy incorporating feedback from Sean Whitton with a lot of help from Nattie Mayer-Hutchings and Freexian colleagues.
  • Helmut discovered that the way mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch.
  • As a result of armel being removed from sid, but not from forky, the multiarch hinter broke. Helmut fixed it.
  • Helmut uploaded debvm, accepting a patch from Luca Boccassi to fix it for newer systemd.
  • Colin began preparing for the second stage of the OpenSSH GSS-API key exchange package split.
  • Colin caught and fixed a devscripts regression that broke part of Debusine.
  • Colin packaged django-pgtransaction and backported it to trixie, since it looks useful for Debusine.
  • Thorsten uploaded the packages lprng, cpdb-backend-cups, cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.
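The Build-Profiles mechanism that Helmut's debian-policy patch (mentioned above) documents lets packagers restrict build dependencies and binary packages to particular build profiles. A short illustrative debian/control fragment (the package names here are invented for the example; the `<!nocheck>` and `<!noinsttest>` restrictions are real profile syntax):

```
Source: example
Build-Depends: debhelper-compat (= 13),
               python3-pytest <!nocheck>

Package: example-tests
Build-Profiles: <!noinsttest>
Architecture: all
Description: test suite for example (illustrative)
```

With this, building with `DEB_BUILD_PROFILES=nocheck` drops the test-only build dependency, and the noinsttest profile skips building the test-suite binary package entirely.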
