Search Results: "phil"

11 February 2025

Freexian Collaborators: Debian Contributions: Python 3.13 as the default Python 3 version, Fixing qtpaths6 for cross compilation, sbuild support for Salsa CI, Rails 7 transition, DebConf preparations and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-01. Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Python 3.13 is now the default Python 3 version in Debian, by Stefano Rivera and Colin Watson The Python 3.13 as default transition has now completed. The next step is to remove Python 3.12 from the archive, which should be very straightforward; it just requires rebuilding C extension packages in no particular order. Stefano fixed some miscellaneous bugs blocking the completion of the 3.13 as default transition.

Fixing qtpaths6 for cross compilation, by Helmut Grohne While Qt5 used qmake to query installation properties, Qt6 is moving more and more to CMake, and to ease that transition it relies increasingly on qtpaths. Since this tool is not inherently aware of the architecture it is called for, it tends to produce results for the build architecture. Therefore, more than 100 packages were picking up a multiarch directory for the build architecture during cross builds. In collaboration with the Qt/KDE team, and Sandro Knauß in particular (none affiliated with Freexian), we added an architecture-specific wrapper script in the same way qmake already has one for Qt5 and Qt6. The relevant CMake module has been updated to prefer the triplet-prefixed wrapper. As a result, most of the KDE packages now cross build on unstable, ready in time for the trixie release.
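The wrapper concept can be sketched briefly. The following is a minimal Python rendering of the idea, not the actual Debian wrapper (which is a shell script), and the qtpaths install path shown is a hypothetical example:

#!/usr/bin/python3
# Hypothetical sketch of a triplet-prefixed qtpaths wrapper: derive the
# target triplet from the program's own name and exec the qtpaths binary
# for that architecture. The path below is illustrative, not Debian's.
import os
import sys

name = os.path.basename(sys.argv[0])          # e.g. aarch64-linux-gnu-qtpaths6
triplet = name.rsplit("-qtpaths6", 1)[0]      # e.g. aarch64-linux-gnu
real = f"/usr/lib/{triplet}/qt6/bin/qtpaths"  # hypothetical install location
os.execv(real, [real] + sys.argv[1:])         # replace this process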

/usr-move, by Helmut Grohne In December, Emil Södergren reported that live-build was not working for him, and in January, Colin Watson reported that the proposed mitigation for debian-installer-utils would practically fail. Both failures were attributable to a wrong understanding of implementation-defined behavior in dpkg-divert. As a result, all M18 mitigations had to be reviewed and many of them replaced. Many have been uploaded already and all instances have received updated patches. Even though dumat has been in operation for more than a year, it has seen recent changes. For one thing, analysis of architectures other than amd64 was requested. Chris Hofstaedtler (not affiliated with Freexian) kindly provided computing resources for repeatedly running it on the larger set. Doing so revealed various cross-architecture undeclared file conflicts in gcc, glibc, and binutils-z80, but it also revealed a previously unknown /usr-move issue in rpi.rpi-common. On top of that, dumat produced false-positive diagnostics and wrongly associated Debian bugs in some cases, both of which have now been fixed. As a result, a supposedly fixed python3-sepolicy issue had to be reopened.

rebootstrap, by Helmut Grohne As much as we think of our base system as stable, it changes a lot, and the architecture cross bootstrap tooling is very sensitive to such changes, requiring continual maintenance. A problem that recently surfaced was that building a binutils cross toolchain would result in a binutils-for-host package that was not practically installable, as it would depend on a binutils-common package that was not built. This led to an examination of binutils-common, which revealed that it actually differed across architectures even though it should not. Johannes Schauer Marin Rodrigues (not affiliated with Freexian) and Colin Watson kindly helped brainstorm possible solutions. Eventually, Helmut provided a patch to move the gprofng bits out of binutils-common. Independently, Matthias Klose (not affiliated with Freexian) split binutils-gold out into a separate source package. As a result, binutils-common is now identical across architectures and can be marked Multi-Arch: foreign, resolving the initial problem.

Salsa CI, by Santiago Ruano Rincón Santiago continued the work on sbuild support for Salsa CI that was mentioned in the previous month's report. The !568 merge request that created the new build image was merged, making it easier to test !569 with external projects. Santiago used a fork of the debusine repo to try the draft !569; some issues were spotted, and part of them fixed. This is the last debusine pipeline run with the current !569: https://salsa.debian.org/santiago/debusine/-/pipelines/794233. One of the last improvements relates to how to let projects customize the pipeline, in a way equivalent to what they currently do in the extract-source and build jobs. While this is work in progress, the results are rather promising. Next steps include deciding on introducing schroot support for bookworm, bookworm-security, and older releases, as is done on the official Debian buildds.

DebConf preparations, by Stefano Rivera and Santiago Ruano Rincón DebConf will be happening in Brest, France, in July. Santiago continued the DebConf 25 organization work, looking for catering providers. Both Stefano and Santiago have been reaching out to some potential sponsors. DebConf depends on sponsors to cover the organization cost; if your company depends on Debian, please consider sponsoring DebConf. Stefano has been winding up some of the finances from previous DebConfs: finalizing reimbursements to team members from DebConf 23, and handling some outstanding issues from DebConf 24. Stefano and the rest of the DebConf committee have been reviewing bids for DebConf 26, to select the next venue.

Ruby 3.3 is now the default Ruby interpreter, by Lucas Kanashiro Ruby 3.3 is about to become the default Ruby interpreter for Trixie. Many bugs were fixed by Lucas and the Debian Ruby team during the sprint held in Paris during Jan 27-31. The next step is to remove support for Ruby 3.1, which is the alternative Ruby interpreter for now. Thanks to the Debian Release team for all the support, especially Emilio Pozuelo Monfort.

Rails 7 transition, by Lucas Kanashiro Rails 6 has been shipped by Debian since Bullseye, and as a web framework, many issues (especially security-related ones) have been encountered, and maintaining it has become harder and harder. With that in mind, during the Debian Ruby team sprint last month, the transition to Rack 3 (an important dependency of Rails containing many breaking changes) was started in Debian unstable and is ongoing. Once it is done, the Rails 7 transition will take place, and Rails 7 should be shipped in Debian Trixie.

Miscellaneous contributions
  • Stefano improved a poor ImportError message for users of the turtle module on Python 3 who haven't installed the python3-tk package.
  • Stefano updated several packages to new upstream releases.
  • Stefano added the Python extension to the re2 package, allowing for the use of the Google RE2 regular expression library as a direct replacement for the standard library re module (a short usage sketch follows this list).
  • Stefano started provisioning a new physical server for the debian.social infrastructure.
  • Carles improved simplemonitor (documented systemd integration, worked with upstream to fix a bug).
  • Carles upgraded packages to new upstream versions: python-ring-doorbell and python-asyncclick.
  • Carles did po-debconf translations to Catalan: reviewed 44 packages and submitted translations to 90 packages (via salsa merge requests or bugtracker bugs).
  • Carles maintained po-debconf-manager with small fixes.
  • Raphaël worked on some outstanding DEP-14 merge requests and participated in the associated discussion. The discussions have been more contentious than anticipated, somewhat exacerbated by Otto's desire to conclude fast while the required tool support is not yet there.
  • Raphaël, with the help of Philipp Kern from the DSA team, upgraded tracker.debian.org to use Django 4.2 (from bookworm-backports), which in turn enabled him to configure authentication via salsa.debian.org. It's now possible to log in to tracker.debian.org with your salsa credentials!
  • Raphaël updated zim (a nice desktop wiki that is very handy for organizing your day-to-day digital life) to the latest upstream version (0.76).
  • Helmut sent patches for 10 cross build failures.
  • Helmut continued working on a tool for memory-based concurrency limiting of builds.
  • Helmut NMUed libtool, opensysusers and virtualbox.
  • Enrico tried to support Helmut in working out tricky usrmerge situations.
  • Thorsten Alteholz uploaded a new upstream version of brlaser.
  • Colin Watson upgraded 33 Python packages to new upstream versions, including fixes for CVE-2024-42353, CVE-2024-47532, and CVE-2025-22153.
  • Emilio Pozuelo managed various transitions, and fixed various RC bugs (telepathy-glib, xorg, xserver-xorg-video-vesa, apitrace, mesa).
  • Anupa attended the monthly meeting of the Debian publicity team and shared the social media stats.
  • Anupa assisted Jean-Pierre Giraud in the point release announcement for Debian 12.9 and published the Micronews.
  • Anupa took part in multiple Debian publicity team discussions regarding our presence on social media platforms.
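As a footnote to the re2 item above, here is a minimal usage sketch. It assumes the Debian binding is importable as the re2 module (the name used by Google's upstream Python wrapper) and that it exposes that wrapper's re-compatible interface:

# Minimal sketch, assuming a re-compatible `re2` module as in Google's
# upstream Python wrapper. Note that RE2 intentionally supports no
# backreferences or lookaround assertions.
import re2

pattern = re2.compile(r"([a-z0-9.-]+)@([a-z0-9.-]+)")
match = pattern.search("Contact: listmaster@lists.debian.org")
if match:
    print(match.group(1), match.group(2))  # listmaster lists.debian.org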

9 February 2025

Philipp Kern: 20 years

20 years ago, I got my Debian Developer account. I was 18 at the time; it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless, now feels like a good time for a personal reflection on my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot about code review. I have been an Application Manager on the side.
Going to my first DebConf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite an excitement, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries, as I focused all of my time on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.

I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in favor of a separate volatile archive, and a change of the update policy to allow for more common-sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.

In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it, and significant contributions were done by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work, and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility - but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.

Last year I finally made it to Debian events again: MiniDebConf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years have I felt like a (more) mature old-timer.
I have a new gig at work lined up to start soon, and alongside that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.

The future
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.

Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven and that suck up a lot of energy.

A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.

Hopefully I can contribute a bit or two to these efforts in the future.

31 January 2025

Gunnar Wolf: ChatGPT is bullshit

This post is an unpublished review of ChatGPT is bullshit.
As people around the world come to understand how LLMs behave, more and more wonder why these models hallucinate and what can be done to reduce it. This provocatively named article by Michael Townsen Hicks, James Humphries and Joe Slater brings us an excellent primer to better understand how LLMs work and what to expect from them.

As humans who carry out our relations using language as the main tool, we are easily in awe of the apparent ease with which ChatGPT (the first widely available, and to this day probably the best known, LLM-based automated chatbot) simulates human-like understanding and helps us easily carry out even daunting data aggregation tasks. It is common for people to ask ChatGPT for an answer and, if it gets part of the answer wrong, to justify it by saying it is just a "hallucination". Townsen et al. invite us to switch from that characterization to a more correct one: LLMs are bullshitting. This term is formally presented by Frankfurt [1]. To bullshit is not the same as to lie, because lying requires knowing (and wanting to cover up) the truth. A bullshitter does not necessarily know the truth; they just have to provide a compelling description, regardless of whether it is aligned with the truth.

After introducing Frankfurt's ideas, the authors explain the fundamental ideas behind LLM-based chatbots such as ChatGPT. A Generative Pre-trained Transformer (GPT) has as its only goal to produce human-like text, which it achieves mainly by matching the input's high-dimensional abstract vector representation and probabilistically outputting the next token (word), iterating over the text produced so far. Clearly, a GPT's task is not to seek truth or to convey useful information: it is built to provide a normal-seeming response to the prompts provided by its user. Core data are not queried to find optimal solutions for the user's requests; text is generated on the requested topic, attempting to mimic the style of the document set the model was trained on.

Erroneous data emitted by an LLM is thus not comparable with what a person hallucinates, but appears because the model has no understanding of truth; in a way, this is very fitting with the current state of the world, a time often termed the age of post-truth [2]. Requesting an LLM to provide truth in its answers is basically impossible, given the difference between intelligence and consciousness: following Harari's definitions [3], LLM systems, or any AI-based system, can be seen as intelligent, as they have the ability to attain goals in various, flexible ways, but they cannot be seen as conscious, as they have no ability to experience subjectivity. That is, the LLM is, by definition, bullshitting its way towards an answer: its goal is to provide an answer, not to interpret the world in a trustworthy way.

The authors close their article with a plea for the literature on the topic to adopt the more correct term "bullshit" instead of the vacuous, anthropomorphizing "hallucination". Of course, with the word already loaded with a negative meaning, it is an unlikely request.

This is a great article that mixes together Computer Science and Philosophy, and can shed some light on a topic that is hard to grasp for many users.

[1] Frankfurt, Harry (2005). On Bullshit. Princeton University Press.
[2] Zoglauer, Thomas (2023). Constructed Truths: Truth and Knowledge in a Post-Truth World. Springer.
[3] Harari, Yuval Noah (2023). Nexus: A Brief History of Information Networks from the Stone Age to AI. Random House.

Russell Coker: Links January 2025

Aaron Quigley's Everything Open lecture about Intelligent Interfaces is one of the most interesting research reports I've seen in a long time [1]. This one can be understood and appreciated by people who don't have a strong background in computer science.
Statites (satellites that don't orbit the sun but use solar sails to hover in place) could be used to catch up to interstellar objects [2].
Slashgear has an interesting article about an AI-piloted F16 beating a human-piloted F16 [3]. Given the serious handicaps of flying a plane designed for humans and flying to minimise risk to itself and other crewed aircraft, this is a serious victory. Hopefully crewed military aircraft will be obsolete soon.
Amusing video about the performance of cats with MMORPG-style descriptions [4].
John Goerzen wrote an interesting blog post about censorship and the changes to Facebook [5].
Ron Garret wrote an interesting blog post 15 years ago when going through what he now describes as an existential crisis [6].
A comment on Ron's post references Alan Crowe's blog post about whether the self exists, which is an interesting philosophical post [7]. But I'm still going to think of myself as a person.
Another comment on Ron's post references Aaron Swartz's blog post about Noam Chomsky etc [8]. I have to watch Manufacturing Consent: Noam Chomsky and the Media.
Ron Garret wrote an interesting blog post about his failed attempts to start a company and how it all worked out well for him anyway [9].
Amusing video about a failed crowdfunded e-bike [10].
Cory Doctorow wrote an insightful article about how Enshittification is not caused by VCs but by lack of controls [11].

29 January 2025

Russ Allbery: Review: The Sky Road

Review: The Sky Road, by Ken MacLeod
Series: Fall Revolution #4
Publisher: Tor
Copyright: 1999
Printing: August 2001
ISBN: 0-8125-7759-0
Format: Mass market
Pages: 406
The Sky Road is the fourth book in the Fall Revolution series, but it represents an alternate future that diverges after (or during?) the events of The Star Fraction. You probably want to read that book first, but I'm not sure reading The Stone Canal or The Cassini Division adds anything to this book other than frustration. Much more on that in a moment.

Clovis colha Gree is an aspiring doctoral student in history with a summer job as a welder. He works on the platform for the project, which the reader either slowly discovers from the book or quickly discovers from the cover is a rocket to get to orbit. As the story opens, he meets (or, as he describes it, is targeted by) a woman named Merrial, a tinker who works on the guidance system. The early chapters provide only a few hints about Clovis's world: a statue of the Deliverer on a horse that forms the backdrop of their meeting, the casual carrying of weapons, hints that tinkers are socially unacceptable, and some division between the white logic and the black logic in programming. Also, because this is a Ken MacLeod novel, everyone is obsessed with smoking and tobacco the way that the protagonists of erotica are obsessed with sex.

Clovis's story is one thread of this novel. The other, told in the alternating chapters, is the story of Myra Godwin-Davidova, chair of the governing Council of People's Commissars of the International Scientific and Technical Workers' Republic, a micronation embedded in post-Soviet Kazakhstan. Series readers will remember Myra's former lover, David Reid, as the villain of The Stone Canal and the head of the corporation Mutual Protection, which is using slave labor (sort of) to support a resurgent space movement and its attempt to take control of a balkanized Earth. The ISTWR is in decline and a minor power by all standards except one: They still have nuclear weapons.

So, first, we need to talk about the series divergence. I know from reading about this book on-line that The Sky Road is an alternate future that does not follow the events of The Stone Canal and The Cassini Division. I do not know this from the text of the book, which is completely silent about even being part of a series. More annoyingly, while the divergence in the Earth's future compared to The Cassini Division is obvious, I don't know what the Jonbar hinge is. Everything I can find on-line about this book is maddeningly coy. Wikipedia claims the divergence happens at the end of The Star Fraction. Other reviews and the Wikipedia talk page claim it happens in the middle of The Stone Canal. I do have a guess, but it's an unsatisfying one and I'm not sure how to test its correctness. I suppose I shouldn't care and instead take each of the books on their own terms, but this is the type of thing that my brain obsesses over, and I find it intensely irritating that MacLeod didn't explain it in the books themselves. It's the sort of authorial trick that makes me feel dumb, and books that gratuitously make me feel dumb are less enjoyable to read.

The second annoyance I have with this book is also only partly its fault. This series, and this book in particular, is frequently mentioned as good political science fiction that explores different ways of structuring human society. This was true of some of the earlier books in a surprisingly superficial way. Here, I would call it hogwash.
This book, or at least the Myra portion of it, is full of people doing politics in a tactical sense, but like the previous books of this series, that politics is mostly embedded in personal grudges and prior romantic relationships. Everyone involved is essentially an authoritarian whose ability to act as they wish is only contested by other authoritarians and is largely unconstrained by such things as persuasion, discussions, elections, or even theory. Myra and most of the people she meets are profoundly cynical and almost contemptuous of any true discussion of political systems. This is the trappings and mechanisms of politics without the intellectual debate or attempt at consensus, turning it into a zero-sum game won by whoever can threaten the others more effectively.

Given the glowing reviews I've seen in relatively political SF circles, presumably I am missing something that other people see in MacLeod's approach. Perhaps this level of pettiness and cynicism is an accurate depiction of what it's like inside left-wing political movements. (What an appalling condemnation of left-wing political movements, if so.) But many of the on-line reviews lead me to instead conclude that people's understanding of "political fiction" is stunted and superficial. For example, there is almost nothing Marxist about this book (it contains essentially no economic or class analysis whatsoever), but MacLeod uses a lot of Marxist terminology and sets half the book in an explicitly communist state, and this seems to be enough for large portions of the on-line commentariat to conclude that it's full of dangerous, radical ideas. I find this sadly hilarious given that MacLeod's societies tend, if anything, towards a low-grade libertarianism that would be at home in a Robert Heinlein novel. Apparently political labels are all that's needed to make political fiction; substance is optional.

So much for the politics. What's left in Clovis's sections is a classic science fiction adventure in which the protagonist has a radically different perspective from the reader and the fun lies in figuring out the world-building through the skewed perspective of the characters. This was somewhat enjoyable, but would have been more fun if Clovis had any discernible personality. Sadly he instead seems to be an empty receptacle for the prejudices and perspective of his society, which involve a lot of quasi-religious taboos and an essentially magical view of the world. Merrial is a more interesting character, although as always in this series the romance made absolutely no sense to me and seemed to be conjured by authorial fiat and weirdly instant sexual attraction.

Myra's portion of the story was the part I cared more about and was more invested in, aided by the fact that she's attempting to do something more interesting than launch a crewed space vehicle for no obvious reason. She at least faces some true moral challenges with no obviously correct response. It's all a bit depressing, though, and I found Myra's unwillingness to ground her decisions in a more comprehensive moral framework disappointing. If you're going to make a protagonist the ruler of a communist state, even an ironic one, I'd like to hear some real political philosophy, some theory of sociology and economics that she used to justify her decisions. The bits that rise above personal animosity and vibes were, I think, said better in The Cassini Division.

This series was disappointing, and I can't say I'm glad to have read it.
There is some small pleasure in finishing a set of award-winning genre books so that I can have a meaningful conversation about them, but the awards failed to find me better books to read than I would have found on my own. These aren't bad books, but the amount of enjoyment I got out of them didn't feel worth the frustration. Not recommended, I'm afraid. Rating: 6 out of 10

23 January 2025

Sergio Talens-Oliag: Ghostty Terminal Emulator

For a long time I've been using the Terminator terminal emulator on Linux machines, but last week I read a LWN article about a new emulator called Ghostty that looked interesting and I decided to give it a try. The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the zero-configuration philosophy.

Installation and configuration

I installed the Debian package for Ubuntu 24.04 from the ghostty-ubuntu project and started playing with it. The first thing I noticed is that the zero configuration part is true; I was able to use the terminal without a configuration file, although I created one to change the theme and the font size. Other than that it worked OK for me; my $HOME/.config/ghostty/config file is as simple as:
font-size=14
theme=/usr/share/ghostty/themes/iTerm2 Solarized Light

Starting the terminal maximized

After playing a little bit with the terminal I was turned off by the fact that there was no option to start it maximized, but it seemed to me that someone should have asked for the feature or, if not, I could ask for it. I did a quick search on the project and found out that there was a merged PR that added the option, so I downloaded the source code, installed Zig and built the program on my machine. As the change is going to be included in the next version of the package, I replaced the binary with my version and started playing with the terminal.

Accessing remote machines

The first thing I noticed was that when logging into remote machines using ssh the terminal type was not known, but the help section of the project documentation has an entry about how to fix it by copying the terminfo configuration to remote machines; it is as simple as running the following:
infocmp -x | ssh YOUR-SERVER -- tic -x -

Dead keys on Ubuntu

With that sorted out, everything looked good to me until I tried to add an accented character when editing a file and the terminal stopped working. Again, I looked at the project issues and found one that matched what was happening to me, and it reminded me of one of the best things about actively maintained open source software. It turns out that the issue is related to a bug in ibus, but as other terminals were working right, the ghostty developer was already working on a fix for the way the terminal handles keyboard input on GTK, so I subscribed to the issue and stopped using ghostty until there was something new to try again (I use a Spanish keyboard map and I can't use a terminal that does not support dead keys). Yesterday I saw some messages about things being almost fixed, so I pulled the latest changes in my cloned repository and compiled it, and writing accented characters works now. There is a small issue with the cursor (the dead key pressed is left on the block cursor unless you change the window focus), but that is manageable for me.

Conclusion

I think that ghostty is a good terminal emulator and I'm going to keep using it on my laptop unless I find something annoying that I can't work with (I hope that the cursor issue will be fixed soon; I can live with it for now, as the only thing I need to do to recover from it is change the window focus, and that can be done really quickly using keyboard shortcuts). As it is actively maintained and the developer seems to be quite active, I don't expect problems, and it is nice to play with new things from time to time.

14 January 2025

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2024 edition

Another year of data from the Société de Transport de Montréal, Montreal's transit agency! A few highlights this year:
  1. The closure of the Saint-Michel station had a drastic impact on D'Iberville, the station closest to it.
  2. The opening of the Royalmount shopping center nearly doubled the traffic of the De La Savane station.
  3. The Montreal subway continues to grow, but has not yet recovered from the pandemic. Berri-UQAM station (the largest one) is still below 1 million entries per quarter compared to its pre-pandemic record.
By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

9 January 2025

Freexian Collaborators: Debian Contributions: Tracker.debian.org updates, Salsa CI improvements, Coinstallable build-essential, Python 3.13 transition, Ruby 3.3 transition and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-12. Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Tracker.debian.org updates, by Raphaël Hertzog Taking advantage of end-of-year vacations, Raphaël prepared for tracker.debian.org to be upgraded to Debian 12 bookworm by getting rid of the remnants of python3-django-jsonfield in the code (it was superseded by a Django-native field). Thanks to Philipp Kern from the Debian System Administrators team, the upgrade happened on December 23rd. Raphaël also improved distro-tracker to better deal with invalid Maintainer fields, which recently caused multiple issues in the regular data updates (#1089985, MR 105). While working on this, he filed #1089648 asking dpkg tools to error out early when maintainers make such mistakes. Finally, he provided feedback on multiple issues and merge requests (MR 106, issues #21, #76, #77); there seems to be a surge of interest in distro-tracker lately. It would be nice if those new contributors could stick around and help out with the significant backlog of issues (in the Debian BTS and in Salsa).

Salsa CI improvements, by Santiago Ruano Rincón Given that the Debian buildd network now relies on sbuild using the unshare backend, and that Salsa CI's reproducibility testing needs to be reworked (#399), Santiago resumed the work on moving the build job to use sbuild. There was some related work a few months ago focused on sbuild with the schroot and sudo backends, but those attempts were stalled for different reasons, including discussions around the convenience of the move (#296). However, using sbuild with unshare avoids all of the drawbacks identified so far. Santiago is preparing two merge requests: !568 to introduce a new build image, and !569 to move all the extract-source related tasks to the build job. As mentioned in previous reports, this change will make it possible for more projects to use the pipeline to build their packages (see #195). Additional advantages of this change include a more optimal way to test whether a package builds twice in a row: instead of actually building it twice, the Salsa CI pipeline will configure sbuild to check whether the clean target of debian/rules correctly restores the source tree, saving some CPU cycles by avoiding one build. Also, the images related to Ubuntu won't be needed anymore, since the build job will create chroots for different distributions and vendors from a single common build image. This will save space in the container registry. More changes are to come, especially those related to handling projects that customize the pipeline and make use of the extract-source job.
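The build-twice check is easy to picture. Here is a hedged sketch of the underlying idea in Python, not Salsa CI's actual implementation: digest the unpacked source tree, build once, run the clean target, and verify the tree was restored:

# Hedged sketch, not the Salsa CI implementation: if the clean target
# restores the source tree, a second build would start from the same
# state as the first, so the second build itself can be skipped.
import hashlib
import pathlib
import subprocess

def tree_digest(root):
    # Hash every file path and its contents under root, in sorted order.
    digest = hashlib.sha256()
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

before = tree_digest(".")
subprocess.run(["dpkg-buildpackage", "--no-sign"], check=True)
subprocess.run(["fakeroot", "debian/rules", "clean"], check=True)
assert tree_digest(".") == before, "clean did not restore the source tree"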

Coinstallable build-essential, by Helmut Grohne Building on the gcc-for-host work of last December, a notable patch turning build-essential Multi-Arch: same became feasible. Whilst the change is small, its implications and foundations are not. We still install crossbuild-essential-$ARCH for cross building and, due to a britney2 limitation, we cannot have it depend on the host's C library. As a result, there are workarounds in place for sbuild and pbuilder. With build-essential turned Multi-Arch: same, we may actually express these dependencies directly as we install build-essential:$ARCH instead. The crossbuild-essential-$ARCH packages will continue to be available as transitional dummy packages.
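As a hedged illustration of what such a change could look like in debian/control, here is a sketch of the stanza; the field values are illustrative, not the actual build-essential metadata:

Package: build-essential
Architecture: any
Multi-Arch: same
Depends: gcc, g++, make, dpkg-dev, libc6-dev

With a stanza along these lines, a cross-build environment can install build-essential:$ARCH and let the regular multiarch dependency resolver express the host-architecture dependencies directly, as described above.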

Python 3.13 transition, by Colin Watson and Stefano Rivera Building on last month's work, Colin, Stefano, and other members of the Debian Python team fixed 3.13 compatibility bugs in many more packages, allowing 3.13 to now be a supported but non-default version in testing. The next stage will be to switch to it as the default version, which will start soon. Stefano did some test-rebuilds of packages that only build for the default Python 3 version, to find issues that will block the transition. The default version transition typically shakes out some more issues in applications that (unlike libraries) only test with the default Python version. Colin also fixed Sphinx 8.0 compatibility issues in many packages, which otherwise threatened to get in the way of this transition.

Ruby 3.3 transition, by Lucas Kanashiro The Debian Ruby team decided to ship Ruby 3.3 in the next Debian release, and Lucas took the lead of the interpreter transition with the assistance of the rest of the team. In order to understand the impact of the new interpreter on the Ruby ecosystem, ruby-defaults was uploaded to experimental adding ruby3.3 as an alternative interpreter, and a mass rebuild of reverse dependencies was done here. Initially, a couple of hundred packages were failing to build; after many rounds of rebuilds, adjustments, and many uploads, we are down to 30 package build failures. Of those, 21 packages were asked to be removed from testing, and for the other 9, bugs were filed. All the information to track this transition can be found here. Now, we are waiting for the PHP 8.4 transition to finish to avoid any collision. Once it is done, the Ruby 3.3 transition will start in unstable.

Miscellaneous contributions
  • Enrico Zini redesigned the way nm.debian.org stores historical audit logs and personal data backups.
  • Carles Pina submitted a new package (python-firebase-messaging) and prepared updates for python3-ring-doorbell.
  • Carles Pina developed po-debconf-manager further: better state transitions, automated assigning of translators and reviewers on edit, automatic updating of po header files, assorted bug fixes, etc.
  • Carles Pina reviewed, submitted and followed up on debconf template translations (more than 20 packages) and translated some packages (about 5).
  • Santiago continued to work on DebConf 25 organization-related tasks, including handling the logo survey and results. Stefano spent time on DebConf 25 too.
  • Santiago continued the exploratory work about Linux livepatching with Emmanuel Arias. Santiago and Emmanuel found a challenge, since kpatch won't fully support Linux in trixie and newer, so they are exploring alternatives such as klp-build.
  • Helmut maintained the /usr-move transition filing bugs in e.g. bubblewrap, e2fsprogs, libvpd-2.2-3, and pam-tmpdir and corresponding on related issues such as kexec-tools and live-build. The removal of the usrmerge package unfortunately broke debootstrap and was quickly reverted. Continued fallout is expected and will continue until trixie is released.
  • Helmut sent patches for 10 cross build failures and worked with Sandro Knauß on stuck Qt/KDE patches related to cross building.
  • Helmut continued to maintain rebootstrap removing the need to build gnu-efi in the process.
  • Helmut collaborated with Emanuele Rocca and Jochen Sprickerhof on an interesting adventure in diagnosing why gcc would FTBFS in recent sbuild.
  • Helmut proposed supporting build concurrency limits in coreutils s nproc. As it turns out nproc is not a good place for this functionality.
  • Colin worked with Sandro Tosi and Andrej Shadura to finish resolving the multipart vs. python-multipart name conflict, as mentioned last month.
  • Colin upgraded 48 Python packages to new upstream versions, fixing four CVEs and a number of compatibility bugs with recent Python versions.
  • Colin issued an openssh bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which had been quite broken in bookworm.
  • Stefano fixed a minor bug in debian-reimbursements that was disallowing combination PDFs containing JAL tickets, encoded in UTF-16.
  • Stefano uploaded a stable update to PyPy3 in bookworm, catching up with security issues resolved in cPython.
  • Stefano fixed a regression in eventlet from his Python 3.13 porting patch.
  • Stefano continued discussing a forwarded patch (renaming the sysconfigdata module) with cPython upstream, ending in a decision to drop the patch from Debian. This will need some continued work.
  • Anupa participated in the Debian Publicity team meeting in December, which discussed the team activities done in 2024 and projects for 2025.

4 January 2025

Louis-Philippe Véronneau: Montreal's Debian & Stuff - December 2024

Our Debian User Group met on December 22nd for our last meeting of 2024. I wasn't sure at first it was a good idea, but many people showed up and it was great! Attendees included pollo, anarcat, lelutin, lavamind, tvaz, mjeanson and joeDoe. Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue.

Pictures

This time around, we were hosted by l'Espace des possibles, at their new location (they moved since our last visit). It was great! People liked the space so much we actually discussed going back there more often :)

Group photo at l'Espace des possibles

1 January 2025

Tim Retout: Strauss as Pop Music

While watching the Vienna New Year's Concert today, reading about its perhaps somewhat problematic origins, I was struck by the observation that the Strauss family's polkas were seen as pop music during their lifetime, not as serious as the work of proper classical composers, and so it took some time before the Vienna Philharmonic would actually play their work. (Perhaps the space-themed interval today and the ballet dancers pretending to be a steam train were a continuation of the true spirit of this? It felt very Eurovision.) I can't decide if it's remarkable that this year was the first time a female composer (Constanze Geiger) was represented at this concert, or if that is what you get when you set up a tradition of playing mainly Strauss?

Louis-Philippe V ronneau: 2024 A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one.

Albums

In 2024, I added 88 new albums to my collection; that's a lot! This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it stays a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union. Money continues to ruin the world, I guess.

Concerts

I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". A mean of a concert every two weeks is quite a lot :) If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year. Another good piece of advice is to bring a book or something else [1] to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait. Anyway, here are the concerts I went to in 2024:

Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2025!

  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics.

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite as stellar a year for books as previous years: few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I've almost finished my second read of War and Peace.

Books

Elif Batuman: Either/Or (2022)
Stella Gibbons: Cold Comfort Farm (1932)
Michel Faber: Under The Skin (2000)
Wallace Stegner: Crossing to Safety (1987)
Gustave Flaubert: Madame Bovary (1857)
Rachel Cusk: Outline (2014)
Sara Gran: The Book of the Most Precious Substance (2022)
Anonymous: The Railway Traveller's Handy Book (1862)
Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films

Recent releases

Seen at a 2023 festival. Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases

Films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

24 December 2024

Russ Allbery: Review: Number Go Up

Review: Number Go Up, by Zeke Faux
Publisher: Crown Currency
Copyright: 2023
Printing: 2024
ISBN: 0-593-44382-9
Format: Kindle
Pages: 373
Number Go Up is a cross between a history and a first-person account of investigative journalism around the cryptocurrency bubble and subsequent collapse in 2022. The edition I read has an afterword from June 2024 that brings the story up to date with Sam Bankman-Fried's trial and a few other events.

Zeke Faux is a reporter for Bloomberg News and a fellow of New America. Last year, I read Michael Lewis's Going Infinite, a somewhat-sympathetic book-length profile of Sam Bankman-Fried that made a lot of people angry. One of the common refrains at the time was that people should read Number Go Up instead, and since I'm happy to read more about the absurdities of the cryptocurrency world, I finally got around to reading the other big crypto book of 2023. This is a good book, with some caveats that I am about to explain at absurd length. If you want a skeptical history of the cryptocurrency bubble, you should read it. People who think that it's somehow in competition with Michael Lewis's book or who think the two books disagree (including Faux himself) have profoundly missed the point of Going Infinite. I agree with Matt Levine: Both of these books are worth your time if this is the sort of thing you like reading about. But (much) more on Faux's disagreements with Lewis later.

The frame of Number Go Up is Faux's quixotic quest to prove that Tether is a fraud. To review this book, I therefore need to briefly explain what Tether is. This is only the first of many extended digressions.

One natural way to buy cryptocurrency would be to follow the same pattern as a stock brokerage account. You would deposit some amount of money into the account (or connect the brokerage account to your bank account), and then exchange money for cryptocurrency or vice versa, using bank transfers to put money in or take it out. However, there are several problems with this. One is that swapping cryptocurrency for money is awkward and sometimes expensive. Another is that holding people's investment money for them is usually highly regulated, partly for customer safety but also to prevent money laundering. These are often called KYC laws (Know Your Customer), and the regulation-hostile world of cryptocurrency didn't want to comply with them.

Tether is a stablecoin, which means that the company behind Tether attempts to guarantee that one Tether is always worth exactly one US dollar. It is not a speculative investment like Bitcoin; it's a cryptocurrency substitute for dollars. People exchange dollars for Tether to get their money into the system and then settle all of their subsequent trades in Tether, only converting the Tether back to dollars when they want to take their money out of cryptocurrency entirely. In essence, Tether functions like the cash reserve in a brokerage account: Your Tether holdings are supposedly guaranteed to be equivalent to US dollars, you can withdraw them at any time, and because you can do so, you don't bother, instead leaving your money in the reserve account while you contemplate what new coin you want to buy. As with a bank, this system rests on the assurance that one can always exchange one Tether for one US dollar. The instant people stop believing this is true, people will scramble to get their money out of Tether, creating the equivalent of a bank run.
Since Tether is not a regulated bank or broker and has no deposit insurance or strong legal protections, the primary defense against a run on Tether is Tether's promise that they hold enough liquid assets to be able to hand out dollars to everyone who wants to redeem Tether. (A secondary defense that I wish Faux had mentioned is that Tether limits redemptions to registered accounts redeeming more than $100,000, which is a tiny fraction of the people who hold Tether, but for most purposes this doesn't matter because that promise is sufficient to maintain the peg with the dollar.)

Faux's firmly-held belief throughout this book is that Tether is lying. He believes they do not have enough money to redeem all existing Tether coins, and that rather than backing every coin with very safe liquid assets, they are using the dollars deposited in the system to make illiquid and risky investments. Faux never finds the evidence that he's looking for, which makes this narrative choice feel strange. His theory was tested when there was a run on Tether following the collapse of the Terra stablecoin. Tether passed without apparent difficulty, redeeming $16B or about 20% of the outstanding Tether coins. This doesn't mean Faux is wrong; being able to redeem 20% of the outstanding tokens is very different from being able to redeem 100%, and Tether has been fined for lying about its reserves. But Tether is clearly more stable than Faux thought it was, which makes the main narrative of the book weirdly unsatisfying. If he admitted he might be wrong, I would give him credit for showing his work even if it didn't lead where he expected, but instead he pivots to focusing on Tether's role in money laundering without acknowledging that his original theory took a serious blow.

In Faux's pursuit of Tether, he wanders through most of the other elements of the cryptocurrency bubble, and that's the strength of this book. Rather than write Number Go Up as a traditional history, Faux chooses to closely follow his own thought processes and curiosity. This has the advantage of giving Faux an easy and natural narrative, something that non-fiction books of this type can struggle with, and it lets Faux show how confusing and off-putting the cryptocurrency world is to an outsider.

The best parts of this book were the parts unrelated to Tether. Faux provides an excellent summary of the Axie Infinity speculative bubble and even traveled to the Philippines to interview people who were directly affected. He then wandered through the bizarre world of NFTs, and his first-hand account of purchasing one (specifically a Mutant Ape) to get entrance to a party (which sounded like a miserable experience I would pay money to get out of) really drives home how sketchy and weird cryptocurrency-related software and markets can be. He also went to El Salvador to talk to people directly about the country's supposed embrace of Bitcoin, and there's no substitute for that type of reporting to show how exaggerated and dishonest the claims of cryptocurrency adoption are.

The disadvantage of this personal focus on Faux himself is that it sometimes feels tedious or sensationalized. I was much less interested in his unsuccessful attempts to interview the founder of Tether than Faux was, and while the digression into forced labor compounds in Cambodia devoted to pig butchering scams was informative (and horrific), I think Faux leaned too heavily on an indirect link to Tether.
His argument is that cryptocurrency enables a type of money laundering that is particularly well-suited to supporting scams, but both scams and this type of economic slavery existed before cryptocurrency and will exist afterwards. He did not make a very strong case that Tether was uniquely valuable as a money laundering service, as opposed to a currently useful tool that would be replaced with some other tool should it go away. This part of the book is essentially an argument that money laundering is bad because it enables crime, and sure, to an extent I agree. But if you're going to put this much emphasis on the evils of money laundering, I think you need to at least acknowledge that many people outside the United States do not want to give the US government, which is often openly hostile to them, veto power over their financial transactions. Faux does not.

The other big complaint I have with this book, and with a lot of other reporting on cryptocurrency, is that Faux is sloppy with the term "Ponzi scheme." This is going to sound like nit-picking, but I think this sloppiness matters because it may obscure an ongoing shift in cryptocurrency markets.

A Ponzi scheme is not any speculative bubble. It is a very specific type of fraud in which investors are promised improbably high returns at very low risk and with safe principal. These returns are paid out, not via investment in some underlying enterprise, but by taking the money from new investments and paying it to earlier investors. Ponzi schemes are doomed because satisfying their promises requires a constantly increasing flow of new investors. Since the population of the world is finite, all Ponzi schemes are mathematically guaranteed to eventually fail, often in a sudden death spiral of ever-increasing promises to lure new investors when the investment stream starts to dry up.

There are some Ponzi schemes in cryptocurrency, but most practices that are called Ponzi schemes are not. For example, Faux calls Axie Infinity a Ponzi scheme, but it was missing the critical elements of promised safe returns and fraudulently paying returns from the investments of later investors. It was simply a speculative bubble that people bought into on the assumption that its price would increase, and like any speculative bubble those who sold before the peak made money at the expense of those who bought at the peak.

The reason why this matters is that Ponzi schemes are a self-correcting problem. One can decry the damage caused when they collapse, but one can also feel the reassuring certainty that they will inevitably collapse and prove the skeptics correct. The same is not true of speculative assets in general. You may think that the lack of an underlying economic justification for prices means that a speculative bubble is guaranteed to collapse eventually, but in the famous words of Gary Schilling, "markets can remain irrational a lot longer than you and I can remain solvent." One of the people Faux interviews explains this distinction to him directly:
Rong explained that in a true Ponzi scheme, the organizer would have to handle the "fraud money." Instead, he gave the sneakers away and then only took a small cut of each trade. "The users are trading between each other. They are not going through me, right?" Rong said. Essentially, he was arguing that by downloading the Stepn app and walking to earn tokens, crypto bros were Ponzi'ing themselves.
Faux is openly contemptuous of this response, but it is technically correct. Stepn is not a Ponzi scheme; it's a speculative bubble. There are no guaranteed returns being paid out of later investments and no promise that your principal is safe. People are buying in at a price that you may consider irrational, but Stepn never promised you would get your money back, let alone make a profit, and therefore it doesn't have the exponential progression of a Ponzi scheme.

One can argue that this is a distinction without a moral difference, and personally I would agree, but it matters immensely if one is trying to analyze the future of cryptocurrencies. Schemes as transparently unstable as Stepn (which gives you coins for exercise and then tries to claim those coins have value through some vigorous hand-waving) are nearly as certain as Ponzi schemes to eventually collapse. But it's also possible to create a stable business around allowing large numbers of people to regularly lose money to small numbers of sophisticated players who are collecting all of the winnings. It's called a poker room at a casino, and no one thinks poker rooms are Ponzi schemes or are doomed to collapse, even though nearly everyone who plays poker will lose money.

This is the part of the story that I think Faux largely missed, and which Michael Lewis highlights in Going Infinite. FTX was a legitimate business that made money (a lot of money) off of trading fees, in much the same way that a casino makes money off of poker rooms. Lots of people want to bet on cryptocurrencies, similar to how lots of people want to play poker. Some of those people will win; most of those people will lose. The casino doesn't care. Its profit comes from taking a little bit of each pot, regardless of who wins. Bankman-Fried also speculated with customer funds, and therefore FTX collapsed, but there is no inherent reason why the core exchange business cannot be stable if people continue to want to speculate in cryptocurrencies. Perhaps people will get tired of this method of gambling, but poker has been going strong for 200 years.

It's also important to note that although trading fees are the most obvious way to be a profitable cryptocurrency casino, they're not the only way. Wall Street firms specialize in finding creative ways to take a cut of every financial transaction, and many of those methods are more sophisticated than fees. They are so good at this that buying and selling stock through trading apps like Robinhood is free. The money to run the brokerage platform comes from companies that are delighted to pay for the opportunity to handle stock trades by day traders with a phone app. This is not, as some conspiracy theories would have you believe, due to some sort of fraudulent price manipulation. It is because the average person with a Robinhood phone app is sufficiently unsophisticated that companies that have invested in complex financial modeling will make a steady profit taking the other side of their trades, mostly because of the spread (the difference between offered buy and sell prices).

Faux is so caught up in looking for Ponzi schemes and fraud that I think he misses this aspect of cryptocurrency's transformation. Wall Street trading firms aren't piling into cryptocurrency because they want to do securities fraud. They're entering this market because there seems to be persistent demand for this form of gambling, cryptocurrency markets reward complex financial engineering, and running a legal casino is a profitable business model.
Michael Lewis appears as a character in this book, and Faux portrays him quite negatively. The root of this animosity appears to stem from a cryptocurrency conference in the Bahamas that Faux attended. Lewis interviewed Bankman-Fried on stage, and, from Faux's account, his questions were fawning and he praised cryptocurrencies in ways that Faux is certain he knew were untrue. From that point on, Faux treats Lewis as an apologist for the cryptocurrency industry and for Sam Bankman-Fried specifically. I think this is a legitimate criticism of Lewis's methods of getting close to the people he wants to write about, but I think Faux also makes the common mistake of assuming Lewis is a muckraking reporter like himself. This has never been what Lewis is interested in. He writes about people he finds interesting and that he thinks a reader will also find interesting. One can legitimately accuse him of being credulous, but that's partly because he's not even trying to do the same thing Faux is doing. He's not trying to judge; he's trying to understand. This shows when it comes to the parts of this book about Sam Bankman-Fried. Faux's default assumption is that everyone involved in cryptocurrency is knowingly doing fraud, and a lot of his research is looking for evidence to support the conclusion he had already reached. I don't think there's anything inherently wrong with that approach: Faux is largely, although not entirely, correct, and this type of hostile journalism is incredibly valuable for society at large. Upton Sinclair didn't start writing The Jungle with an open mind about the meat-packing industry. But where Faux and Lewis disagree on Bankman-Fried's motivations and intentions, I think Lewis has the much stronger argument. Faux's position is that Bankman-Fried always intended to steal people's money through fraud, perhaps to fund his effective altruism donations, and his protestations that he made mistakes and misplaced funds are obvious lies. This is an appealing narrative if one is looking for a simple villain, but Faux's evidence in support of this is weak. He mostly argues through stereotype: Bankman-Fried was a physics major and a Jane Street trader and therefore could not possibly be the type of person to misplace large amounts of money or miscalculate risk. If he wants to understand how that could be possible, he could read Going Infinite? I find it completely credible that someone with what appears to be uncontrolled, severe ADHD could be adept at trading and calculating probabilities and yet also misplace millions of dollars of assets because he wasn't thinking about them and therefore they stopped existing. Lewis made a lot of people angry by being somewhat sympathetic to someone few people wanted to be sympathetic towards, but Faux (and many others) are also misrepresenting his position. Lewis agrees that Bankman-Fried intentionally intermingled customer funds with his hedge fund and agrees that he lied about doing this. His only contention is that Bankman-Fried didn't do this to steal the money; instead, he invested customer money in risky bets that he thought would pay off. In support of this, Lewis made a prediction that was widely scoffed at, namely that much less of FTX's money was missing than was claimed, and that likely most or all of it would be found. And, well, Lewis was basically correct? The FTX bankruptcy is now expected to recover considerably more than the amount of money owed to creditors. 
Faux argues that this is only because the bankruptcy clawed back assets and cryptocurrencies have gone up considerably since the FTX bankruptcy, and therefore that the lost money was just replaced by unexpected windfall profits on other investments, but I don't think this point is as strong as he thinks it is. Bankman-Fried lost money on some of what he did with customer funds, made money on other things, and if he'd been able to freeze withdrawals for the year that the bankruptcy froze them, it does appear most of the money would have been recoverable. This does not make what he did legal or morally right, but no one is arguing that, only that he didn't intentionally steal money for his own personal gain or for effective altruism donations. And on that point, I don't think Faux is giving Lewis's argument enough credit. I have a lot of complaints about this book because I know far more about this topic than anyone probably should. I think Faux missed the plot in a couple of places, and I wish someone would write a book about where cryptocurrency markets are currently going. (Matt Levine's Money Stuff newsletter is quite good, but it's about all sorts of things other than cryptocurrency and isn't designed to tell a coherent story.) But if you know less about cryptocurrency and just want to hear the details of the run-up to the 2022 bubble, this is a great book for that. Faux is writing for people who are already skeptical and is not going to convince people who are cryptocurrency true believers, but that's fine. The details are largely correct (and extensively footnoted) and will satisfy most people's curiosity. Lewis's Going Infinite is a better book, though. It's not the same type of book at all, and it will not give you the broader overview of the cryptocurrency world. But if you're curious about what was going through the head of someone at the center of all of this chaos, I think Lewis's analysis is much stronger than Faux's. I'm happy I read both books. Rating: 8 out of 10

23 December 2024

Sahil Dhiman: Debian Mirrors Hierarchy

After finding that AlmaLinux mirror sync capacity at Tier 0 (or Tier 1, however you look at it) is around 140 Gbps, I wanted to find the source and hierarchy of the Debian mirroring system. There are two main types of mirrors in Debian - Debian package mirrors (for package installs and updates) and Debian CD mirrors (for ISOs and other media). Let's talk about package mirrors (and their hierarchy) first.

Package mirror hierarchy The trace file is a good starting point for checking the upstream of a package mirror in Debian. It resides at <URL>/debian/project/trace/_traces and shows the flow of data (a small sketch for inspecting trace files follows the diagram below); see the sample trace file from jing.rocks's mirror. It showed that the canonical source for packages is ftp-master.debian.org. Checking via https://db.debian.org/machines.cgi showed it's fasolo.d.o, hosted at Brown University, US. This serves as the "Master Archive Server", making it a Tier 0 mirror. Its entry mentions that it has 1 Gbps shared LAN connectivity (dated information?) but it only has to push to 3 other machines/sites. Side note - .d.o is .debian.org. As shown on https://mirror-master.debian.org/status/mirror-hierarchy.html, the three sites are:
  • syncproxy2.eu.debian.org, i.e. smit.d.o, hosted by the University of Twente, Netherlands, with 2x10 Gbps connectivity.
  • syncproxy4.eu.debian.org, i.e. schmelzer.d.o, hosted by Conova in Austria with 2x10 Gbps connectivity.
  • syncproxy2.wna.debian.org - the https://db.debian.org/machines.cgi entry mentions it being hosted at UBC here, but the IP seems to be pointing to an OSUOSL IP range as of now. IIRC, a few months ago, syncproxy2.wna.d.o was made to point to another host due to some issue (?). mirror-osuosl.d.o seems to be serving as syncproxy2.wna.d.o now. Bandwidth isn't explicitly mentioned, but from my experience with the bandwidths that other free software projects hosted at OSUOSL have, it would be at least 10 Gbps and maybe more for Debian.

                     syncproxy2.eu.d.o (NL) ---> to the world
                    /
ftp-master.d.o (US) -- syncproxy4.eu.d.o (AT)  --> to the world 
                    \
                     syncproxy2.wna.d.o (US) --> to the world
A visualisation of the flow of packages from ftp-master.d.o
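To see this flow for yourself, here is a minimal Python sketch of how one might inspect a mirror's trace files (my illustration, not from the post; it assumes the mirror serves project/trace/ as a plain HTML directory index, as most do, and uses deb.debian.org only as an example):

    #!/usr/bin/env python3
    # List the trace files a Debian mirror advertises; each file is named
    # after a host the archive content passed through, and its first line
    # is the timestamp of that host's last sync.
    import re
    import urllib.request

    MIRROR = "https://deb.debian.org/debian"  # any package mirror works

    with urllib.request.urlopen(f"{MIRROR}/project/trace/") as resp:
        index = resp.read().decode("utf-8", errors="replace")

    # Pull hostname-looking hrefs out of the directory listing.
    for name in sorted(set(re.findall(r'href="([a-z0-9.-]+\.[a-z]+)"', index))):
        with urllib.request.urlopen(f"{MIRROR}/project/trace/{name}") as resp:
            first_line = resp.read().decode("utf-8", errors="replace").splitlines()[0]
        print(f"{name}: {first_line}")

Running something like this against your favourite mirror shows the chain of hosts (ftp-master.d.o, a syncproxy, possibly intermediate tiers) and how fresh each hop is.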
These form the Debian Tier 1 mirror network, as all the other mirrors sync from them. So Debian has at least 50 Gbps of capacity at Tier 1. A normal Debian user might never directly interact with any of these 3 machines, but every Debian package they run/download/install flows through these machines. Though, I'm unsure what wna stands for in syncproxy2.wna.d.o. NA probably is North America and W is west (coast)? If you know, do let me know. After Tier 1, there are a few more syncproxies (detailed below). There are at least 45 mirrors at Tier 2, updates for which are directly pushed from the three Tier 1 sync proxies. Most country mirrors, i.e. ftp.<country code>.debian.org, are at Tier 2 too (barring a few like ftp.au.d.o, ftp.nz.d.o etc.). Coming back to sync proxies at Tier 2:
  • syncproxy3.wna.debian.org - gretchaninov.d.o, which is marked as syncproxy2 on db.d.o (dated information). It's hosted at the University of British Columbia, Canada, where a lot of Debian infrastructure, including Salsa, is hosted.
  • syncproxy.eu.debian.org - a machine managed by the Croatian Academic and Research Network. A CNAME directs to debian.carnet.hr.
  • syncproxy.au.debian.org - mirror-anu.d.o, hosted by the Australian National University with 100 Mbps connectivity. The closest sync proxy for all Australian mirrors.
  • syncproxy4.wna.debian.org - syncproxy-aws-wna-01.d.o, hosted in AWS, in the US (according to GeoIP). IPv6 only (a CNAME to syncproxy-aws-wna-01.debian.org., which only has an AAAA record, no A record). An m6g.2xlarge instance, which has speeds up to 10 Gbps.
Coming back to https://mirror-master.debian.org/status/mirror-hierarchy.html, one can see the chain extend to Tier 6, as in the case of this mirror in AU, which should add some latency before updates pushed from ftp-master.d.o reach them. Ideally this shouldn't be a problem, as https://www.debian.org/mirror/ftpmirror#when mentions "The main archive gets updated four times a day". In my case, I get my updates from the NITC mirror, so my updates flow from US > US > TW > IN > me in IN. CDNs have to internally manage cache purging too, unlike normal mirrors, which directly serve static files. Both deb.debian.org (sponsored by Fastly) and cdn-aws.deb.debian.org (sponsored by Amazon CloudFront) sync from the following CDN backends: see the deb.d.o trace file and the cdn-aws.deb.d.o trace file. (Thanks to Philipp Kern for the heads up here.)

CD image mirrors hierarchy Till now, I have only talked about Debian package mirrors. When you see a /debian directory on various mirrors, it's usually for package installs and updates. If you want to grab the latest (and greatest) Debian ISO, you go to a Debian CD mirror site (as they're still called). casulana.d.o is mentioned as the CD builder site, hosted by Bytemark, while pettersson-ng.d.o is mentioned as the CD publishing server, hosted at the Academic Computer Club in Umeå, Sweden. The primary download site for Debian CDs, https://cdimage.debian.org/debian-cd/ (what you reach when you click download on the debian.org homepage), is hosted there as well. This essentially becomes the Tier 0 mirror for Debian CDs. All Debian CD mirrors are downstream of it.
pettersson-ng.d.o / cdimage.d.o (SE) ---> to the world
A visualisation of the flow of Debian CDs from cdimage.d.o
The Academic Computer Club's mirror setup uses a combination of multiple machines (called frontends and offloading servers) to load balance requests. Their documented setup is a highly recommended read. Also, in that document, they mention: "All machines are reachable via both IPv4 and IPv6 and connected with 10 or 25 gigabit Ethernet, external bandwidth available is 200 gigabit/s." For completeness' sake, the following mirrors (or mirror systems) exist too for Debian: Debian relies heavily on various organizations donating resources (hosting and hardware) to distribute and update Debian. Compiling the above information made me thankful to all these organizations. Many thanks to the DSA and mirror teams as well for managing all of this. I relied heavily on https://db.debian.org/machines.cgi, which seems to be manually updated, so things might have changed along the way. If anything looks amiss, feel free to ping.

15 December 2024

Russell Coker: Hisense 65U80G 65 Inch 8K ULED Android TV (2021)

The Aim I just bought a Hisense 65U80G 65 Inch 8K ULED Android TV (2021 model) for $1,568 including delivery. I got that deal by googling refurbished 8K TVs and finding the cheapest one I could buy. Amazon and eBay didn't have any good prices on second hand 8K TVs and new ones start at $3,000 on special. I didn't assess how Hisense compares to other TVs; as far as I could determine there was only one model of 8K TV on sale in Australia in the price range I was prepared to pay. So I won't review how this TV compares to other models but how refurbished TVs compare to other display options. I bought this because the highest resolution monitor in my price range is 5120*2160 [1]. While I could get a 5120*2880 monitor for around $1,500, paying 3* the money for 33% more pixels is bad value for money. Getting 4* the pixels for under 3* the price is good value even when it's a TV with the lower display quality that involves. Before buying this TV I read this blog post by Daniel Lawrence about using an 8K TV as a primary monitor [2]. While he has an interesting setup with a 65" TV on a large desk, it's not what I plan to do at this time.

My Plans for Use I don't plan to make it a main monitor. While 5120*2160 isn't as good as I'd like on my desk, it's bearable and the quality of the display is high. High resolution isn't needed for all tasks; for example, I'm writing this blog post on my laptop while watching a movie on the 8K TV. One thing I'd like to do with the 8K TV when I get it working as a monitor is to share the screen for team programming projects. I don't have any specific plans other than team coding projects at the moment. But it will be interesting to experiment with it when I get it working.

Technical Issues with High Resolution Monitors Hardware Needed A lot of the graphics hardware out there doesn't support resolutions higher than 5120*2880. It seems that most laptops don't support resolutions higher than that, and resolutions higher than 4K are difficult. Only quite recent and high end video cards will do 8K. Apparently the RTX 2080 is one of the oldest ones that does, and that's $400 on eBay. Strangely the GPU chipset spec pages don't list the maximum resolution, and there's the additional complication that the other chips might not support the resolutions that the GPU itself can support. As an aside, I don't use NVidia cards for regular workstations due to reliability problems. But they are good for ML work and for special purpose systems.

Interface Versions To do 8K video it seems that you need HDMI 2.1 (or maybe 2.0 with 4:2:0 chroma subsampling) or DisplayPort 1.3 for 30Hz with 24bit color and 2.0 for higher refresh rates (a rough bandwidth calculation follows below). But using a particular version of the interface doesn't require supporting all the resolutions that it might support. This TV has HDMI 2.1 inputs, and I've bought an adaptor cable that does DisplayPort 1.4 to HDMI 2.1 at 8K resolution. So I need a video card that does DisplayPort 1.4 or HDMI 2.1 output. That doesn't mean that the card will work, but it could work. It's a pity that no-one has made a USB-C video controller that has a basic frame-buffer supporting 8K and minimal GPU capabilities. The consensus of opinion is that no games will run well at 8K at this time, so anyone using 8K resolution doesn't need GPU power unless it's for ML stuff. I'm thinking of making a system that can be used as an ML server and X/Wayland server, so a GPU with a decent amount of RAM and compute power would be good.
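To put rough numbers on those interface versions, here is a back-of-the-envelope calculation (my figures, not Russell's; the link rates are nominal payload rates and ignore blanking, DSC and other encoding details, so treat the comparison as approximate):

    # Raw bit rate of an 8K stream vs. approximate usable link rates.
    def raw_gbps(w, h, hz, bpp):
        """Uncompressed pixel data rate in Gbit/s (ignores blanking)."""
        return w * h * hz * bpp / 1e9

    links = {                              # nominal payload rates, approximate
        "HDMI 2.0": 14.4,
        "DisplayPort 1.3/1.4 (HBR3)": 25.9,
        "HDMI 2.1 (FRL)": 42.6,
    }

    for hz in (30, 60):
        need = raw_gbps(7680, 4320, hz, 24)    # 8K, 24-bit colour
        print(f"8K@{hz}Hz 24bpp: ~{need:.1f} Gbps")
        for link, cap in links.items():
            verdict = "fits" if cap >= need else "needs DSC or chroma subsampling"
            print(f"  {link} ({cap} Gbps): {verdict}")

8K30 at 24bpp comes to roughly 24 Gbps, which just squeaks under HBR3, while 8K60 needs about 48 Gbps, which is why even HDMI 2.1 relies on compression or subsampling for it.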
I'm not particularly interested in spending $1,500+ to get a GPU that can drive a $1,568 TV. I'm looking into getting an RTX A2000 with 12G of RAM, which should be adequate for ML experiments and can handle 8K@60Hz output. I've ordered a DisplayPort to HDMI converter cable so if I get a DisplayPort card it will work.

Software Support When I first got started with 4K monitors I had significant problems in adjusting the UI to be usable. The support for scaling software is much better now than it was then, and 8K at 65" has a lower DPI than 4K at 32" (a quick arithmetic check appears below). So I hope this won't be an issue.

Progress So Far My first Hisense 8K TV stopped working properly. It would change to a mostly white screen after being used for some time. The screen would change in ways that correlated to changes in what should appear, but not in a way that was usable; it was just a different pattern of white blobs when I changed to a menu view, not anything that allowed using it. I presume that this was the problem that drove the need for refurbishment, as when I first got the TV it was still signed in to Google accounts for YouTube and to Netflix. Best Buy Electrical was good about providing a quick replacement; they took away the old TV and delivered a new one on the same visit, and it's now working well. I've obtained an NVidia card that can allegedly do 8K output and a combination of cables that might be able to carry an 8K signal. Now I just need to get the NVidia drivers to not cause a kernel panic to get things to work.
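As a quick check of the DPI comparison in the Software Support section above (my arithmetic, assuming 16:9 panels and measuring density along the diagonal):

    # Pixels per inch = diagonal resolution / diagonal size.
    from math import hypot

    def ppi(w, h, diagonal_inches):
        return hypot(w, h) / diagonal_inches

    print(f'8K at 65": {ppi(7680, 4320, 65):.0f} PPI')   # ~136 PPI
    print(f'4K at 32": {ppi(3840, 2160, 32):.0f} PPI')   # ~138 PPI

So a 65" 8K panel is indeed slightly less dense than a 32" 4K monitor, which supports the hope that existing scaling settings will carry over.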

12 December 2024

Matthew Garrett: When should we require that firmware be free?

The distinction between hardware and software has historically been relatively easy to understand - hardware is the physical object that software runs on. This is made more complicated by the existence of programmable logic like FPGAs, but by and large things tend to fall into fairly neat categories if we're drawing that distinction.

Conversations usually become more complicated when we introduce firmware, but should they? According to Wikipedia, firmware is "software that provides low-level control of computing device hardware", and basically anything that's generally described as firmware certainly fits into the "software" side of the above hardware/software binary. From a software freedom perspective, this seems like something where the obvious answer to "Should this be free" is "yes", but it's worth thinking about why the answer is yes - the goal of free software isn't freedom for freedom's sake, but because the freedoms embodied in the Free Software Definition (and by proxy the DFSG) are grounded in real world practicalities.

How do these line up for firmware? Firmware can fit into two main classes - it can be something that's responsible for initialisation of the hardware (such as, historically, BIOS, which is involved in initialisation and boot and then largely irrelevant for runtime[1]) or it can be something that makes the hardware work at runtime (wifi card firmware being an obvious example). The role of free software in the latter case feels fairly intuitive, since the interface and functionality the hardware offers to the operating system is frequently largely defined by the firmware running on it. Your wifi chipset is, these days, largely a software defined radio, and what you can do with it is determined by what the firmware it's running allows you to do. Sometimes those restrictions may be required by law, but other times they're simply because the people writing the firmware aren't interested in supporting a feature - they may see no reason to allow raw radio packets to be provided to the OS, for instance. We also shouldn't ignore the fact that sufficiently complicated firmware exposed to untrusted input (as is the case in most wifi scenarios) may contain exploitable vulnerabilities allowing attackers to gain arbitrary code execution on the wifi chipset - and potentially use that as a way to gain control of the host OS (see this writeup for an example). Vendors being in a unique position to update that firmware means users may never receive security updates, leaving them with a choice between discarding hardware that otherwise works perfectly or leaving themselves vulnerable to known security issues.

But even the cases where firmware does nothing other than initialise the hardware cause problems. A lot of hardware has functionality controlled by registers that can be locked during the boot process. Vendor firmware may choose to disable (or, rather, never to enable) functionality that may be beneficial to a user, and then lock out the ability to reconfigure the hardware later. Without any ability to modify that firmware, the user lacks the freedom to choose what functionality their hardware makes available to them. Again, the ability to inspect this firmware and modify it has a distinct benefit to the user.

So, from a practical perspective, I think there's a strong argument that users would benefit from most (if not all) firmware being free software, and I don't think that's an especially controversial argument. So I think this is less of a philosophical discussion, and more of a strategic one - is spending time focused on ensuring firmware is free worthwhile, and if so what's an appropriate way of achieving this?

I think there are two consistent ways to view this. One is to view free firmware as desirable but not necessary. This approach basically argues that code that's running on hardware that isn't the main CPU would benefit from being free, in the same way that code running on a remote network service would benefit from being free, but that this is much less important than ensuring that all the code running in the context of the OS on the primary CPU is free. The maximalist position is not to compromise at all - all software on a system, whether it's running at boot or during runtime, and whether it's running on the primary CPU or any other component on the board, should be free.

Personally, I lean towards the former and think there's a reasonably coherent argument here. I think users would benefit from the ability to modify the code running on hardware that their OS talks to, in the same way that I think users would benefit from the ability to modify the code running on hardware the other side of a network link that their browser talks to. I also think that there's enough that remains to be done in terms of what's running on the host CPU that it's not worth having that fight yet. But I think the latter is absolutely intellectually consistent, and while I don't agree with it from a pragmatic perspective I think things would undeniably be better if we lived in that world.

This feels like a thing you'd expect the Free Software Foundation to have opinions on, and it does! There are two primarily relevant things - the Respects Your Freedom campaign, focused on ensuring that certified hardware meets certain requirements (including around firmware), and the Free System Distribution Guidelines, which define a baseline for an OS to be considered free by the FSF (including requirements around firmware).

RYF requires that all software on a piece of hardware be free other than under one specific set of circumstances. If software runs on (a) a secondary processor and (b) within which software installation is not intended after the user obtains the product, then the software does not need to be free. (b) effectively means that the firmware has to be in ROM, since any runtime interface that allows the firmware to be loaded or updated is intended to allow software installation after the user obtains the product.

The Free System Distribution Guidelines require that all non-free firmware be removed from the OS before it can be considered free. The recommended mechanism to achieve this is via linux-libre, a project that produces tooling to remove anything that looks plausibly like a non-free firmware blob from the Linux source code, along with any incitement to the user to load firmware - including even removing suggestions to update CPU microcode in order to mitigate CPU vulnerabilities.

For hardware that requires non-free firmware to be loaded at runtime in order to work, linux-libre doesn't do anything to work around this - the hardware will simply not work. In this respect, linux-libre reduces the amount of non-free firmware running on a system in the same way that removing the hardware would. This presumably encourages users to purchase RYF compliant hardware.

But does that actually improve things? RYF doesn't require that a piece of hardware have no non-free firmware, it simply requires that any non-free firmware be hidden from the user. CPU microcode is an instructive example here. At the time of writing, every laptop listed here has an Intel CPU. Every Intel CPU has microcode in ROM, typically an early revision that is known to have many bugs. The expectation is that this microcode is updated in the field by either the firmware or the OS at boot time - the updated version is loaded into RAM on the CPU, and vanishes if power is cut. The combination of RYF and linux-libre doesn't reduce the amount of non-free code running inside the CPU, it just means that the user (a) is more likely to hit since-fixed bugs (including security ones!), and (b) has less guidance on how to avoid them.

As long as RYF permits hardware that makes use of non-free firmware I think it hurts more than it helps. In many cases users aren't guided away from non-free firmware - instead it's hidden away from them, leaving them less aware that their freedom is constrained. Linux-libre goes further, refusing to even inform the user that the non-free firmware that their hardware depends on can be upgraded to improve their security.

Out of sight shouldn't mean out of mind. If non-free firmware is a threat to user freedom then allowing it to exist in ROM doesn't do anything to solve that problem. And if it isn't a threat to user freedom, then what's the point of requiring linux-libre for a Linux distribution to be considered free by the FSF? We seem to have ended up in the worst case scenario, where nothing is being done to actually replace any of the non-free firmware running on people's systems and where users may even end up with a reduced awareness that the non-free firmware even exists.

[1] Yes yes SMM


7 December 2024

Louis-Philippe Véronneau: lintian.debian.org: Episode IV A New Hope

After weeks, dare I say months, of work, it is finally done. lintian.debian.org is back online! [Screenshot of the new lintian.debian.org website] Many, many thanks to everyone who worked hard to make this possible: All in all, I did very little (mostly coordinating these fine folks) and they should get the credit for this very useful service being back.

5 December 2024

Reproducible Builds: Reproducible Builds in November 2024

Welcome to the November 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds mourns the passing of Lunar
  2. Introducing reproduce.debian.net
  3. New landing page design
  4. SBOMs for Python packages
  5. Debian updates
  6. Reproducible builds by default in Maven 4
  7. PyPI now supports digital attestations
  8. Dependency Challenges in OSS Package Registries
  9. Zig programming language demonstrated reproducible
  10. Website updates
  11. Upstream patches
  12. Misc development news
  13. Reproducibility testing framework

Reproducible Builds mourns the passing of Lunar The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio aka Lunar passed away on Friday November 8th in palliative care in Rennes, France. Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports and many of our key tools in use today are based on his design. Lunar's creativity, insight and kindness were often noted. You can view our full tribute elsewhere on our website. He will be greatly missed.

Introducing reproduce.debian.net In happier news, this month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, and by the end of the MiniDebConf it had attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian, that is, using the .buildinfo files hosted by Debian itself. reproduce.debian.net also contains instructions on how to set up one's own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and to share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.

New landing page design As part of a very productive partnership with the Sovereign Tech Fund and Neighbourhoodie, we are pleased to unveil our new homepage/landing page. We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.

SBOMs for Python packages The Python Software Foundation has announced a new cross-functional project for "SBOMs and Python packages". Seth Michael Larson writes that the project is "specifically looking to solve these issues":
  • Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
  • Solve the phantom dependency problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
  • Make the adoption work by relevant projects such as build backends, auditwheel-esque tools, as minimal as possible. Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.
A GitHub repository for the initiative is available, and there are a number of queries, comments and remarks on Seth's Discourse forum post.

Debian updates There was significant development within Debian this month. Firstly, at the recent MiniDebConf in Toulouse, France, Holger Levsen gave a Debian-specific talk on rebuilding packages distributed from ftp.debian.org, that is to say, how to reproduce the results from the official Debian build servers. Holger described the talk as follows:
For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have build Debian packages twice with maximal variations applied to see if they can be build reproducible still. Since about a month, we've also been rebuilding trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, and why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.
The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:
It might be unfair to single out a specific talk from Toulouse, but I'd like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar's contributions and legacy. Personally, I've encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.
Holger's slides and video in .webm format are available.
Next, rebuilderd is the server used to monitor package repositories of Linux distributions and attempt to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen. Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:
  • rebuildctl should be more verbose when encountering issues. [ ]
  • Please add an option to use randomised queues. [ ]
  • Scheduling and re-scheduling multiple packages at once. [ ]
and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file [ ], and kpcyrd also submitted and fixed an issue surrounding dependencies and clarified the license [ ]
Separate to this, back in 2018, Chris Lamb filed a bug report against the sphinx-gallery package as it generates unreproducible content in various ways. This month, however, Dmitry Shachnev finally closed the bug, listing the multiple sub-issues that were part of the problem and how they were resolved.
Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian's ca-certificates-java package. The issue is that the Java key management tools embed timestamps in their output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland's post suggesting some short- and medium-term solutions to the problem.
Holger Levsen uploaded some packages with reproducibility-related changes:
Lastly, 12 reviews of Debian packages were added, 5 were updated and 21 were removed this month, adding to our knowledge about identified issues in Debian.

Reproducible builds by default in Maven 4 On our mailing list this month, Hervé Boutemy reported that the latest release of Maven (4.0.0-beta-5) has reproducible builds enabled by default. In his mailing list post, Hervé mentions that this story started during our Reproducible Builds summit in Hamburg, where he created the upstream issue that builds on a multi-year effort to have Maven builds configured for reproducibility.

PyPI now supports digital attestations Elsewhere in the Python ecosystem and as reported on LWN and elsewhere, the Python Package Index (PyPI) has announced that it has finalised support for PEP 740 ("Index support for digital attestations"). Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:
One thing is notably missing from all of this work: downstream verification. [ ] This isn't an acceptable end state (cryptographic attestations have defensive properties only insofar as they're actually verified), so we're looking into ways to bring verification to individual installing clients. In particular, we're currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.
There was an in-depth discussion on LWN's announcement page, as well as on Hacker News.

Dependency Challenges in OSS Package Registries At BENEVOL, the Belgium-Netherlands Software Evolution workshop in Namur, Belgium, Tom Mens and Alexandre Decan presented their paper, "An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries". The abstract of their paper is as follows:
While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. [ ]
A PDF of the paper is available online.

Zig programming language demonstrated reproducible Motiejus Jakštys posted an interesting and practical blog post on his successful attempt to reproduce the Zig programming language without using the pre-compiled binaries checked into the repository, and despite the circular dependency inherent in its bootstrapping process. As a summary, Motiejus concludes that:
I can now confidently say (and you can also check, you don't need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn't been checked-in as a source file.
The full post is full of practical details, and includes a few open questions.

Website updates Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:
  • Alex Feyerke and Mariano Giménez:
    • Dramatically overhaul the website's landing page with new benefit cards tailored to the expected visitors to our website and a reworking of the visual hierarchy and design. [ ][ ][ ][ ][ ][ ][ ][ ][ ][ ]
  • Bernhard M. Wiedemann:
    • Update the System images page to document the e2fsprogs approach. [ ]
  • Chris Lamb:
  • FC (Fay) Stegerman:
    • Replace more inline markdown with HTML on the Success stories page. [ ]
    • Add some links, fix some other links and correct some spelling errors on the Tools page. [ ]
  • Holger Levsen:
    • Add a historical presentation ("Reproducible builds everywhere eg. in Debian, OpenWrt and LEDE") from October 2016. [ ]
    • Add jochensp and Oejet to the list of known contributors. [ ][ ]
  • Julia Krüger:
  • Ninette Adhikari & hulkoba:
    • Add/rework the list of success stories into a new page that clearly shows milestones in Reproducible Builds. [ ][ ][ ][ ][ ][ ]
  • Philip Rinn:
    • Import 47 historical weekly reports. [ ]
  • hulkoba:
    • Add alt text to almost all images (!). [ ][ ]
    • Fix a number of links on the Talks page. [ ][ ]
    • Avoid so-called ghost buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. [ ][ ]
    • Center the sponsor logos on the homepage. [ ]
    • Move publications and generate them instead from a data.yml file with an improved layout. [ ][ ]
    • Make a large number of small but impactful styling changes. [ ][ ][ ][ ]
    • Expand the Tools page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. [ ][ ][ ][ ][ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Misc development news

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related changes:
    • Create and introduce a new reproduce.debian.net service and subdomain [ ]
    • Make a large number of documentation changes relevant to rebuilderd. [ ][ ][ ][ ][ ]
    • Explain a temporary workaround for a specific issue in rebuilderd. [ ]
    • Set up another rebuilderd instance on the o4 node and update the installation documentation to match. [ ][ ]
    • Make a number of helpful/cosmetic changes to the interface, such as clarifying terms and adding links. [ ][ ][ ][ ][ ]
    • Deploy configuration to the /opt and /var directories. [ ][ ]
    • Add an "infancy" (or "alpha") disclaimer. [ ][ ]
    • Add more notes to the temporary rebuilderd documentation. [ ]
    • Commit an nginx configuration file for reproduce.debian.net's Stats page. [ ]
    • Commit a rebuilder-worker.conf configuration for the o5 node. [ ]
  • Debian-related changes:
    • Grant jspricke and jochensp access to the o5 node. [ ][ ]
    • Build the qemu package with the nocheck build flag. [ ]
  • Misc changes:
    • Adapt the update_jdn.sh script for new Debian trixie systems. [ ]
    • Stop installing the PostgreSQL database engine on the o4 and o5 nodes. [ ]
    • Prevent accidental reboots of the o4 node because of a long-running job owned by josch. [ ][ ]
In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net [ ][ ][ ][ ]. And lastly, both Holger Levsen [ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ] performed node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

28 November 2024

Bits from Debian: New Debian Developers and Maintainers (September and October 2024)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

18 November 2024

Philipp Kern: debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts. This was done in support of hardening our infrastructure: hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.
As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to log in via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine, or to a lot of machines at once, that should be a nice compromise of usability vs. security. The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA. Someone™ should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work, an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.
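For anyone wanting to try this, a minimal sketch of generating such keys (my example, not part of the post; it assumes OpenSSH 8.2 or newer and a FIDO token plugged in, and uses Python purely as a convenience wrapper around standard ssh-keygen flags):

    #!/usr/bin/env python3
    # Generate Security Key-backed SSH keys of the two types the mail
    # gateway accepts. ed25519-sk needs a FIDO2 token; ecdsa-sk also
    # works with older U2F-only tokens, as noted above.
    import subprocess

    def keygen(key_type, outfile, resident=False):
        cmd = ["ssh-keygen", "-t", key_type, "-f", outfile]
        if resident:
            cmd += ["-O", "resident"]   # store the key on the token itself
        subprocess.run(cmd, check=True)

    keygen("ed25519-sk", "id_ed25519_sk")            # preferred, FIDO2 tokens
    # keygen("ecdsa-sk", "id_ecdsa_sk")              # fallback for U2F tokens
    # keygen("ed25519-sk", "id_sk_res", resident=True)  # resident variant

As the post advises, keep a backup copy of the on-disk key file: for non-resident keys both halves are needed, the token and the key handle stored in the file.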
