Search Results: "cyb"

11 April 2024

Russell Coker: ML Training License

Last year a Debian Developer blogged about writing Haskell code to give a bad result for LLMs that were trained on it. I forgot who wrote the post and I'd appreciate the URL if anyone has it. I respect such technical work to enforce one's legal rights when they aren't respected by corporations, but I have a different approach. As an aside, the FOSDEM lecture "Fortify AI against regulation, litigation and lobotomies" is interesting on this topic [1]; it's what inspired me to write about this. For what I write I am at this time happy to allow it to be used as part of a large training data set (consider this blog post a licence grant that applies until such time as I edit this post to change it). But only if aggregated with so much other data that my content is only a tiny portion of the data set by any metric. So I don't want someone to make a programming LLM that has my code as the only C code or a political data set that has my blog posts as the only left-wing content. If someone wants to train an LLM on only my content to make a Russell-simulator then I don't license my work for that purpose, but as it's small enough that anyone with a bit of skill could do it on a weekend I can't stop it. I would be really interested in seeing the results if someone from the FOSS community wanted to make a Russell-simulator and would probably issue them a license for such work if asked. If my work comprises more than 0.1% of the content in a particular measure (theme, programming language, political position, etc) in a training data set then I don't permit that without prior discussion. Finally, if someone wants to make a FOSS training data set to be used for FOSS LLM systems (maybe under the AGPL or some similar license) then I'll allow my writing to be used as part of that.

6 April 2024

John Goerzen: Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta. The Kansas Reflector published an article about Meta censoring environmental articles about climate change, deeming them "too controversial". Facebook then censored the article about Facebook censorship, and then, after an independent site published a copy of the climate change article, Facebook censored it too. The CNN story says Facebook apologized and said it was a mistake and was fixing it. Color me skeptical, because today I saw this: Yes, that's right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today. I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm. Riiiiiight. Cybersecurity. This isn't even the first time they've done this to me. On September 11, 2021, they removed my post about the social network Mastodon (click that link for a screenshot). A post that, incidentally, had been made 10 months prior to being removed. While they ultimately reversed themselves, I subsequently wrote Facebook's Blocking Decisions Are Deliberate Including Their Censorship of Mastodon. That this same pattern has played out a second time, again with something that is a very slight challenge to Facebook, seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change or their own censorship gets removed, and this pattern persists for years. There's a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social. So. I've written this blog post. And then I'm going to post it to Facebook. Let's see if they try to censor me for a third time. Bring it, Facebook.

11 February 2024

Freexian Collaborators: Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upcoming Improvements to Salsa CI, by Santiago Ruano Rincón Santiago started picking up the work made by Outreachy Intern Enock Kashada (a big thanks to him!) to solve some long-standing issues in Salsa CI. Currently, the first job in a Salsa CI pipeline is the extract-source job, used to produce a debianized source tree of the project. This job was introduced to make it possible to build the project on different architectures in the subsequent build jobs. However, the extract-source approach is sub-optimal: not only does it increase the execution time of the pipeline by several minutes, but projects whose source tree is too large cannot use the pipeline at all. The debianized source tree is passed as an artifact to the build jobs, and for those large projects the size of the source tree exceeds Salsa's limits. This specific issue is documented as issue #195, and the proposed solution is to get rid of the extract-source job, relying on sbuild in the build job itself (see issue #296). Switching to sbuild would also help to improve the build source job, solving issues such as #187 and #298. The current work-in-progress is very preliminary, but it has already been possible to run the build (amd64), build-i386 and build-source jobs using sbuild with the unshare mode. The image on the right shows a pipeline that builds grep. All the test jobs use the artifacts of the new build job. There is a lot of remaining work, mainly making the integration with ccache work. This change could break some things, and it will also be important to test how the new pipeline works with complex projects. Also, thanks to Emmanuel Arias, we are proposing a Google Summer of Code 2024 project to improve Salsa CI. As part of the ongoing work in preparation for the GSoC 2024 project, Santiago has proposed a merge request to make it more efficient for contributors to test their changes on the Salsa CI pipeline.

/usr-move, by Helmut Grohne In January, we sent most of the moving patches for the set of packages involved with debootstrap. Notably missing is glibc, which turns out harder than anticipated via dumat, because it has Conflicts between different architectures, which dumat does not analyze. Patches for diversion mitigations have been updated in a way to not exhibit any loss anymore. The main change here is that packages which are being diverted now support the diverting packages in transitioning their diversions. We also supported a few packages with non-trivial changes such as netplan.io. dumat has been enhanced to better support derivatives such as Ubuntu.

Miscellaneous contributions
  • Python 3.12 migration trundles on. Stefano Rivera helped port several new packages to support 3.12.
  • Stefano updated the Sphinx configuration of the DebConf Video Team's documentation, which was broken by Sphinx 7.
  • Stefano published the videos from the Cambridge MiniDebConf to YouTube and PeerTube.
  • DebConf 24 planning has begun, and Stefano & Utkarsh have started work on this.
  • Utkarsh re-sponsored the upload of golang-github-prometheus-community-pgbouncer-exporter for Lena.
  • Colin Watson added Incus support to autopkgtest.
  • Colin discovered Perl::Critic and used it to tidy up some poor practices in several of his packages, including debconf.
  • Colin did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • Colin figured out how to update the mirror size documentation in debmirror, last updated in 2010. It should now be much easier to keep it up to date regularly.
  • Colin issued a man-db buster update to clean up some irritations due to strict sandboxing.
  • Thorsten Alteholz adopted two more packages, magicfilter and ifhp, for the debian-printing team. Those packages are the last ones of the latest round of adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don't hesitate to contact Thorsten.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Helmut sent patches for 16 cross build failures.
  • Helmut supported Matthias Klose (not affiliated with Freexian) with adding -for-host support to gcc-defaults.
  • Helmut uploaded dput-ng enabling dcut migrate and merging two MRs of Ben Hutchings.
  • Santiago took part in the discussions relating to the EU Cyber Resilience Act (CRA) and the Debian public statement that was published last year. He participated in a meeting with Members of the European Parliament (MEPs), Marcel Kolaja and Karen Melchior, and their teams to clarify some points about the impact of the CRA on Debian and downstream projects, and the improvements in the last version of the proposed regulation.

27 December 2023

Bits from Debian: Statement about the EU Cyber Resilience Act

Debian Public Statement about the EU Cyber Resilience Act and the Product Liability Directive. The European Union is currently preparing a regulation "on horizontal cybersecurity requirements for products with digital elements" known as the Cyber Resilience Act (CRA). It is currently in the final "trilogue" phase of the legislative process. The act includes a set of essential cybersecurity and vulnerability handling requirements for manufacturers. It will require products to be accompanied by information and instructions to the user. Manufacturers will need to perform risk assessments and produce technical documentation and, for critical components, have third-party audits conducted. Discovered security issues will have to be reported to European authorities within 24 hours (1). The CRA will be followed up by the Product Liability Directive (PLD), which will introduce compulsory liability for software. While a lot of these regulations seem reasonable, the Debian project believes that there are grave problems for Free Software projects attached to them. Therefore, the Debian project issues the following statement:
  1. Free Software has always been a gift, freely given to society, to take and to use as seen fit, for whatever purpose. Free Software has proven to be an asset in our digital age and the proposed EU Cyber Resilience Act is going to be detrimental to it. a. As the Debian Social Contract states, our goal is "make the best system we can, so that free works will be widely distributed and used." Imposing requirements such as those proposed in the act makes it legally perilous for others to redistribute our work and endangers our commitment to "provide an integrated system of high-quality materials with no legal restrictions that would prevent such uses of the system". (2) b. Knowing whether software is commercial or not isn't feasible, neither in Debian nor in most free software projects - we don't track people's employment status or history, nor do we check who finances upstream projects (the original projects that we integrate in our operating system). c. If upstream projects stop making available their code for fear of being in the scope of CRA and its financial consequences, system security will actually get worse rather than better. d. Having to get legal advice before giving a gift to society will discourage many developers, especially those without a company or other organisation supporting them.
  2. Debian is well known for its security track record through practices of responsible disclosure and coordination with upstream developers and other Free Software projects. We aim to live up to the commitment made in the Debian Social Contract: "We will not hide problems." (3) a. The Free Software community has developed a fine-tuned, tried-and-tested system of responsible disclosure in case of security issues which will be overturned by the mandatory reporting to European authorities within 24 hours (Art. 11 CRA). b. Debian spends a lot of volunteer time on security issues, provides quick security updates and works closely together with upstream projects and in coordination with other vendors. To protect its users, Debian regularly participates in limited embargoes to coordinate fixes to security issues so that all other major Linux distributions can also have a complete fix when the vulnerability is disclosed. c. Security issue tracking and remediation is intentionally decentralized and distributed. The reporting of security issues to ENISA and the intended propagation to other authorities and national administrations would collect all software vulnerabilities in one place. This greatly increases the risk of leaking information about vulnerabilities to threat actors, representing a threat for all users around the world, including European citizens. d. Activists use Debian (e.g. through derivatives such as Tails), among other reasons, to protect themselves from authoritarian governments; handing threat actors exploits they can use for oppression is against what Debian stands for. e. Developers and companies will downplay security issues because a "security" issue now comes with legal implications. Less clarity on what is truly a security issue will hurt users by leaving them vulnerable.
  3. While proprietary software is developed behind closed doors, Free Software development is done in the open, transparent for everyone. To retain parity with proprietary software the open development process needs to be entirely exempt from CRA requirements, just as the development of software in private is. A "making available on the market" can only be considered after development is finished and the software is released.
  4. Even if only "commercial activities" are in the scope of CRA, the Free Software community - and as a consequence, everybody - will lose a lot of small projects. CRA will force many small enterprises and most probably all self-employed developers out of business because they simply cannot fulfill the requirements imposed by CRA. Debian and other Linux distributions depend on their work. If accepted as it is, CRA will undermine not only an established community but also a thriving market. CRA needs an exemption for small businesses and, at the very least, solo-entrepreneurs.

Information about the voting process: Debian uses the Condorcet method for voting. Simplistically, plain Condorcet's method can be stated like so: "Consider all possible two-way races between candidates. The Condorcet winner, if there is one, is the one candidate who can beat each other candidate in a two-way race with that candidate." The problem is that in complex elections, there may well be a circular relationship in which A beats B, B beats C, and C beats A. Most of the variations on Condorcet use various means of resolving the tie. Debian's variation is spelled out in the constitution, specifically A.5(3). Sources: (1) CRA proposals and links & PLD proposals and links (2) Debian Social Contract No. 2, 3, and 4 (3) Debian Constitution
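
To make the pairwise-race idea above concrete, here is a minimal, illustrative sketch in Python of plain Condorcet counting. It is not Debian's actual vote-counting code (devotee), and it deliberately omits the tie-breaking rules from the constitution referenced above; the ballots shown are made up.

```python
# Minimal sketch of plain Condorcet counting (not Debian's devotee implementation,
# which adds the constitution's tie-breaking rules): each ballot ranks candidates,
# and candidate X beats Y if more ballots rank X above Y.
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """ballots: list of dicts mapping candidate -> rank (lower is better)."""
    wins = {c: set() for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for r in ballots if r.get(a, float("inf")) < r.get(b, float("inf")))
        b_over_a = sum(1 for r in ballots if r.get(b, float("inf")) < r.get(a, float("inf")))
        if a_over_b > b_over_a:
            wins[a].add(b)
        elif b_over_a > a_over_b:
            wins[b].add(a)
    for c in candidates:
        if len(wins[c]) == len(candidates) - 1:
            return c  # beats every other candidate in a two-way race
    return None       # circular result (A beats B, B beats C, C beats A): no Condorcet winner

ballots = [{"A": 1, "B": 2, "C": 3}, {"B": 1, "A": 2, "C": 3}, {"A": 1, "C": 2, "B": 3}]
print(condorcet_winner(ballots, ["A", "B", "C"]))  # -> "A"
```

If no candidate wins every head-to-head race, the circular situation described above has occurred and one of the tie-resolving variants is needed.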

11 November 2023

Reproducible Builds: Reproducible Builds in October 2023

Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different. During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere. Amazingly, the agenda and all notes from all sessions are already online. The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Reflections on Reflections on Trusting Trust Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson's award-winning paper, Reflections on Trusting Trust:
[ ] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically "can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?"
Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code which Russ requests and subsequently dissects in great but accessible detail.

Ecosystem factors of reproducible builds Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen's University in Ontario, Canada have published a paper on the Time to fix, causes and correlation with external ecosystem factors of unreproducible builds. The authors compare various response times within the Debian and Arch Linux distributions including, for example:
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.

NixOS installation image reproducible On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS have created an independent, bit-for-bit identical rebuilding of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.

CPython source tarballs now reproducible Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible, which was merged by Łukasz Langa.
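
For readers curious what making a source tarball reproducible typically involves, the usual culprits are unstable file ordering, timestamps and ownership metadata. The sketch below is not Seth's actual CPython change; it is a generic, hedged example of producing a deterministic .tgz with Python's tarfile and gzip modules, honouring SOURCE_DATE_EPOCH. The paths are illustrative.

```python
# Generic sketch of deterministic tarball creation (not the real CPython release
# tooling): sort members, clamp mtimes to SOURCE_DATE_EPOCH, normalise ownership,
# and keep the gzip header timestamp fixed so two runs give byte-identical output.
import gzip, os, tarfile

def normalise(info: tarfile.TarInfo) -> tarfile.TarInfo:
    info.uid = info.gid = 0
    info.uname = info.gname = "root"
    info.mtime = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))
    return info

def deterministic_tarball(source_dir: str, output: str) -> None:
    with gzip.GzipFile(output, "wb", mtime=0) as gz:        # mtime=0 fixes the gzip header
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for root, dirs, files in os.walk(source_dir):
                dirs.sort()                                  # walk directories in a stable order
                for name in sorted(files):
                    path = os.path.join(root, name)
                    tar.add(path, arcname=os.path.relpath(path, source_dir),
                            recursive=False, filter=normalise)

deterministic_tarball("Python-3.12.0", "Python-3.12.0.tgz")  # illustrative paths only
```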

New arm64 hardware from Codethink Long-time sponsor of the project, Codethink, have generously replaced our old "Moonshot-Slides", which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.

Community updates On our mailing list during October 2023 there were a number of threads, including:
  • Vagrant Cascadian continued a thread about the implementation details of a snapshot archive server required for reproducing previous builds. [ ]
  • Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to an interesting talk they recently gave at DockerCon titled Reproducible builds with BuildKit for software supply-chain security.
  • Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. [ ]
Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links [ ][ ], adding a NixOS Flake file [ ] and sorting our publications page by date [ ]. Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.

Distribution work distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions, such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues. Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
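
SOURCE_DATE_EPOCH is the Reproducible Builds convention for pinning "the current time" during a build. As a hedged illustration (not the actual distro-info implementation), a tool can honour it like this:

```python
# Sketch of the SOURCE_DATE_EPOCH convention (not distro-info's real code):
# when the variable is set, use it instead of "now" so that date-dependent
# output is identical across rebuilds.
import datetime
import os

def build_date() -> datetime.date:
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return datetime.datetime.fromtimestamp(int(epoch), tz=datetime.timezone.utc).date()
    return datetime.date.today()

# e.g. "which release is current as of the (reproducible) build date?"
print(build_date())
```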

Software development The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: In addition, Chris Lamb fixed an issue in diffoscope so that if the equivalent of file -i returns text/plain, it falls back to comparing the inputs as text files. This was originally filed as Debian bug #1053668 by Niels Thykier. [ ] This was then uploaded to Debian (and elsewhere) as version 251.
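
As a rough illustration of the diffoscope change described above (a simplified sketch, not diffoscope's real comparator code), the idea is to detect text/plain inputs and fall back to an ordinary text diff:

```python
# Simplified sketch of the idea behind the diffoscope fix (not its actual code):
# if the `file` utility reports text/plain for both inputs, fall back to a plain
# text diff instead of treating them as opaque binaries.
import difflib
import subprocess

def mime_type(path: str) -> str:
    # Roughly the equivalent of `file -i`; assumes the `file` utility is installed.
    out = subprocess.run(["file", "--brief", "--mime-type", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def compare(path_a: str, path_b: str) -> str:
    if mime_type(path_a) == "text/plain" and mime_type(path_b) == "text/plain":
        with open(path_a) as a, open(path_b) as b:
            return "".join(difflib.unified_diff(a.readlines(), b.readlines(),
                                                fromfile=path_a, tofile=path_b))
    return "(binary comparison would go here)"
```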

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. [ ][ ][ ]
    • Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
    • Detect more cases of failures to delete schroots. [ ]
    • Document various bugs in bookworm which are (currently) being manually worked around. [ ]
  • Node-related changes:
    • Integrate the new arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]
    • Improve various node cleanup routines. [ ][ ][ ][ ]
    • General node maintenance. [ ][ ][ ][ ]
  • Monitoring-related changes:
    • Remove unused Munin monitoring plugins. [ ]
    • Complain less visibly about too many installed kernels. [ ]
  • Misc:
    • Enhance the firewall handling on Jenkins nodes. [ ][ ][ ][ ]
    • Install the fish shell everywhere. [ ]
In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

7 November 2023

Matthew Palmer: PostgreSQL Encryption: The Available Options

On an episode of Postgres FM, the hosts had a (very brief) discussion of data encryption in PostgreSQL. While Postgres FM is a podcast well worth a subscribe, the hosts aren't data security experts, and so as someone who builds a queryable database encryption system, I found the coverage to be somewhat lacking. I figured I'd provide a more complete survey of the available options for PostgreSQL-related data encryption.

The Status Quo By default, when you install PostgreSQL, there is no data encryption at all. That means that anyone who gets access to any part of the system can read all the data they have access to. This is, of course, not peculiar to PostgreSQL: basically everything works much the same way. What's stopping an attacker from nicking off with all your data is the fact that they can't access the database at all. The things that are acting as protection are perimeter defences, like putting the physical equipment running the server in a secure datacenter, firewalls to prevent internet randos connecting to the database, and strong passwords. This is referred to as tortoise security: it's tough on the outside, but soft on the inside. Once that outer shell is cracked, the delicious, delicious data is ripe for the picking, and there's absolutely nothing to stop a miscreant from going to town and making off with everything. It's a good idea to plan your defences on the assumption you're going to get breached sooner or later. Having good defence-in-depth includes denying the attacker access to your data even if they compromise the database. This is where encryption comes in.

Storage-Layer Defences: Disk / Volume Encryption To protect against the compromise of the storage that your database uses (physical disks, EBS volumes, and the like), it's common to employ encryption-at-rest, such as full-disk encryption, or volume encryption. These mechanisms protect against offline attacks, but provide no protection while the system is actually running. And therein lies the rub: your database is always running, so encryption at rest typically doesn't provide much value. If you're running physical systems, disk encryption is essential, but more to prevent accidental data loss, due to things like failing to wipe drives before disposing of them, rather than physical theft. In systems where volume encryption is only a tickbox away, it's also worth enabling, if only to prevent inane questions from your security auditors. Relying solely on storage-layer defences, though, is very unlikely to provide any appreciable value in preventing data loss.

Database-Layer Defences: Transparent Database Encryption If you've used proprietary database systems in high-security environments, you might have come across Transparent Database Encryption (TDE). There are also a couple of proprietary extensions for PostgreSQL that provide this functionality. TDE is essentially encryption-at-rest implemented inside the database server. As such, it has much the same drawbacks as disk encryption: few real-world attacks are thwarted by it. There is a very small amount of additional protection, in that physical level backups (as produced by pg_basebackup) are protected, but the vast majority of attacks aren't stopped by TDE. Any attacker who can access the database while it's running can just ask for an SQL-level dump of the stored data, and they'll get the unencrypted data quick as you like.

Application-Layer Defences: Field Encryption If you want to take the database out of the threat landscape, you really need to encrypt sensitive data before it even gets near the database. This is the realm of field encryption, more commonly known as application-level encryption. This technique involves encrypting each field of data before it is sent to be stored in the database, and then decrypting it again after it's retrieved from the database. Anyone who gets the data from the database directly, whether via a backup or a direct connection, is out of luck: they can't decrypt the data, and therefore it's worthless. There are, of course, some limitations to this technique. For starters, every ORM and data mapper out there has rolled their own encryption format, meaning that there's basically zero interoperability. This isn't a problem if you build everything that accesses the database using a single framework, but if you ever feel the need to migrate, or use the database from multiple codebases, you're likely in for a rough time. The other big problem of traditional application-level encryption is that, when the database can't understand what data it's storing, it can't run queries against that data. So if you want to encrypt, say, your users' dates of birth, but you also need to be able to query on that field, you need to choose between one or the other: you can't have both at the same time. You may think to yourself, "but this isn't any good, an attacker that breaks into my application can still steal all my data!". That is true, but security is never binary. The name of the game is reducing the attack surface, making it harder for an attacker to succeed. If you leave all the data unencrypted in the database, an attacker can steal all your data by breaking into the database or by breaking into the application. Encrypting the data reduces the attacker's options, and allows you to focus your resources on hardening the application against attack, safe in the knowledge that an attacker who gets into the database directly isn't going to get anything valuable.
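
As a minimal sketch of field encryption (using the widely available Python cryptography package's Fernet recipe, not any particular ORM's scheme), the application encrypts the value before it is written and decrypts it after it is read, so the database only ever sees ciphertext. As discussed above, a column encrypted this way can no longer be queried on directly.

```python
# Minimal sketch of application-level field encryption using the "cryptography"
# package's Fernet recipe: the plaintext date of birth never reaches the database,
# but (as discussed above) the column can no longer be queried on.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a KMS / secret store, not in code
f = Fernet(key)

def encrypt_field(value: str) -> bytes:
    return f.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    return f.decrypt(token).decode("utf-8")

ciphertext = encrypt_field("1990-04-01")   # store this in the dob column
print(decrypt_field(ciphertext))           # -> "1990-04-01", but only with the key
```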

Sidenote: The Curious Case of pg_crypto PostgreSQL ships a contrib module called pg_crypto, which provides encryption and decryption functions. This sounds ideal to use for encrypting data within our applications, as it's available no matter what we're using to write our application. It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you're using, which produces the same output. However, I don't recommend ever using pg_crypto's data encryption functions, and I doubt you will find many other cryptographic engineers who will, either. First up, and most horrifyingly, it requires you to pass the long-term keys to the database server. If there's an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed. Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result. Less scary, but still very concerning, is that pg_crypto's available cryptography is, to put it mildly, antiquated. We have a lot of newer, safer, and faster techniques for data encryption that aren't available in pg_crypto. This means that if you do use it, you're leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls. In short: friends don't let friends use pg_crypto.
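
To see why the key handling matters, compare the two approaches below. This is a hedged sketch using psycopg2 and pgcrypto's pgp_sym_encrypt() for illustration; the table and column names are made up. In the first statement the long-term key travels to the server as a query parameter; in the second, only ciphertext produced by the application (for example with the Fernet sketch above) ever crosses the wire.

```python
# Sketch of the key-handling problem described above (table and column names are
# made up for illustration; requires psycopg2 and the pgcrypto extension).
import psycopg2

conn = psycopg2.connect("dbname=example")
cur = conn.cursor()

# Server-side encryption with pgcrypto: the long-term key is sent to the server
# as a parameter, so a compromised server or verbose query logging can capture it.
cur.execute(
    "INSERT INTO users (dob) VALUES (pgp_sym_encrypt(%s, %s))",
    ("1990-04-01", "my-long-term-secret-key"),
)

# Application-side encryption: only ciphertext is sent, so the database never
# sees the key. encrypt_field() is the helper from the Fernet sketch above.
ciphertext = encrypt_field("1990-04-01")
cur.execute("INSERT INTO users (dob_enc) VALUES (%s)", (ciphertext,))
conn.commit()
```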

The Future: Enquo All this brings us to the project I run: Enquo. It takes application-layer encryption to a new level, by providing a language- and framework-agnostic cryptosystem that also enables encrypted data to be efficiently queried by the database. So, you can encrypt your users' dates of birth, in such a way that anyone with the appropriate keys can query the database to return, say, all users over the age of 18, but an attacker just sees unintelligible gibberish. This should greatly increase the amount of data that can be encrypted, and as the Enquo project expands its available data types and supported languages, the coverage of encrypted data will grow and grow. My eventual goal is to encrypt all data, all the time. If this appeals to you, visit enquo.org to use or contribute to the open source project, or EnquoDB.com for commercial support and hosted database options.

21 September 2023

Jonathan McDowell: DebConf23 Writeup

DebConf2023 Logo (I wrote this up for an internal work post, but I figure it's worth sharing more publicly too.) I spent last week at DebConf23, this year's instance of the annual Debian conference, which was held in Kochi, India. As usual, DebConf provides a good reason to see a new part of the world; I've been going since 2004 (Porto Alegre, Brazil), and while I've missed a few (Mexico, Bosnia, and Switzerland) I've still managed to make it to instances on 5 continents. This has absolutely nothing to do with work, so I went on my own time + dime, but I figured a brief write-up might prove of interest. I first installed Debian back in 1999 as a machine that was being co-located to operate as a web server / email host. I was attracted by the promise of easy online upgrades (or, at least, upgrades that could be performed without the need to be physically present at the machine, even if they naturally required a reboot at some point). It has mostly delivered on this over the years, and I've never found a compelling reason to move away. I became a Debian Developer in 2000. As a massively distributed volunteer project DebConf provides an opportunity to find out what's happening in other areas of the project, catch up with team mates, and generally feel more involved and energised to work on Debian stuff. Also, by this point in time, a lot of Debian folk are good friends and it's always nice to catch up with them. On that point, I felt that this year the hallway track was not quite the same as usual. For a number of reasons (COVID, climate change, travel time, we're all getting older) I think fewer core teams are achieving critical mass at DebConf - I was the only member physically present from 2 teams I'm involved in, and I'd have appreciated the opportunity to sit down with both of them for some in-person discussions. It also means it's harder to use DebConf as a venue for advancing major changes; previously, having all the decision makers in the same space for a week has meant it's possible to iron out the major discussion points, smoothing remote implementation after the conference. I'm told the mini DebConfs are where it's at for these sorts of meetings now, so perhaps I'll try to attend at least one of those next year. Of course, I also went to a bunch of talks. I have differing levels of comment about each of them, but I've written up some brief notes below about the ones I remember something about. The comment was made that we perhaps had a lower level of deep technical talks, which is perhaps true, but I still think there were a number of high level technical talks that served to pique one's interest about the topic. Finally, this DebConf was the first I'm aware of that was accompanied by tragedy; as part of the day trip Abraham Raji, a project member and member of the local team, was involved in a fatal accident.

Talks (videos not yet up for all, but should appear for most)
  • Opening Ceremony
    Not much to say here; welcome to DebConf!
  • Continuous Key-Signing Party introduction
    I ended up running this, as Gunnar couldn t make it. Debian makes heavy use of the OpenPGP web of trust (no mass ability to send out Yubikeys + perform appropriate levels of identity verification), so making sure we re appropriately cross-signed, and linked to local conference organisers, is a dull but important part of the conference. We use a modified keysigning approach where identity verification + fingerprint confirmation happens over the course of the conference, so this session was just to explain how that works and confirm we were all working from the same fingerprint list.
  • State of Stateless - A Talk about Immutability and Reproducibility in Debian
    Stateless OSes seem to be gaining popularity, so I went along to this to see if there was anything of note. It was interesting, but nothing earth shattering - very high level.
  • What s missing so that Debian is finally reproducible?
    Reproducible builds are something I ve been keeping an eye on for a long time, and I continue to be impressed by the work folks are putting into this - both for Debian, and other projects. From a security standpoint reproducible builds provide confidence against trojaned builds, and from a developer standpoint knowing you can build reproducibly helps with not having to keep a whole bunch of binary artefacts around.
  • Hello from keyring-maint
    In the distant past the process of getting your OpenPGP key into the Debian keyring (which is used to authenticate uploads + votes, amongst other things) was a clunky process that was often stalled. This hasn t been the case for at least the past 10 years, but there s still a residual piece of project memory that thinks keyring is a blocker. So as a team we say hi and talk about the fact we do monthly updates and generally are fairly responsive these days.
  • A declarative approach to Linux networking with Netplan
    Debian s /etc/network/interfaces is a fairly basic (if powerful) mechanism for configuring network interfaces. NetworkManager is a better bet for dynamic hosts (i.e. clients), and systemd-network seems to be a good choice for servers (I m gradually moving machines over to it). Netplan tries to provide a unified mechanism for configuring both with a single configuration language. A noble aim, but I don t see a lot of benefit for anything I use - my NetworkManager hosts are highly dynamic (so no need to push shared config) and systemd-network (or /etc/network/interfaces) works just fine on the other hosts. I m told Netplan has more use with more complicated setups, e.g. when OpenVSwitch is involved.
  • Quick peek at ZFS, A too good to be true file system and volume manager.
    People who use ZFS rave about it. I m naturally suspicious of any file system that doesn t come as part of my mainline kernel. But, as a longtime cautious mdraid+lvm+ext4 user I appreciate that there have been advances in the file system space that maybe I should look at, and I ve been trying out btrfs on more machines over the past couple of years. I can t deny ZFS has a bunch of interesting features, but nothing I need/want that I can t get from an mdraid+lvm+btrfs stack (in particular data checksumming + reflinks for dedupe were strong reasons to move to btrfs over ext4).
  • Bits from the DPL
    Exactly what it says on the tin; some bits from the DPL.
  • Adulting
    Enrico is always worth hearing talk; Adulting was no exception. Main takeaway is that we need to avoid trying to run the project on martyrs and instead make sure we build a sustainable project. I ve been trying really hard to accept I just don t have time to take on additional responsibilities, no matter how interesting or relevant they might seem, so this resonated.
  • My life in git, after subversion, after CVS.
    Putting all of your home directory in revision control. I ve never made this leap; I ve got some Ansible playbooks that push out my core pieces of configuration, which is held in git, but I don t actually check this out directly on hosts I have accounts on. Interesting, but not for me.
  • EU Legislation BoF - Cyber Resilience Act, Product Liability Directive and CSAM Regulation
    The CRA seems to be a piece of ill informed legislation that I m going to have to find time to read properly. Discussion was a bit more alarmist than I personally feel is warranted, but it was a short session, had a bunch of folk in it, and even when I removed my mask it was hard to make myself understood.
  • What s new in the Linux kernel (and what s missing in Debian)
    An update from Ben about new kernel features. I m paying less attention to such things these days, so nice to get a quick overview of it all.
  • Intro to SecureDrop, a sort-of Linux distro
    Actually based on Ubuntu, but lots of overlap with Debian as a result, and highly customised anyway. Notable, to me, for using OpenPGP as some of the backend crypto support. I managed to talk to Kunal separately about some of the pain points around that, which was an interesting discussion - they re trying to move from GnuPG to Sequoia, primarily because of the much easier integration and lack of requirement for the more complicated GnuPG features that sometimes get in the way.
  • The Docker(.io) ecosystem in Debian
    I hate Docker. I m sure it s fine if you accept it wants to take over the host machine entirely, but when I ve played around with it that s not been the case. This talk was more about the difficulty of trying to keep a fast moving upstream with lots of external dependencies properly up to date in a stable release. Vendoring the deps and trying to get a stable release exception seems like the least bad solution, but it s a problem that affects a growing number of projects.
  • Chiselled containers
    This was kind of interesting, but I think I missed the piece about why more granular packaging wasn't an option. The premise is you can take an existing .deb and chisel it into smaller components, which then helps separate out dependencies rather than pulling in as much as the original .deb would. This was touted as being useful, in particular, for building targeted containers. Definitely appealing over custom-built userspaces for containers, but in an ideal world I think we'd want the information in the main packaging, and it becomes a lot of work.
  • Debian Contributors shake-up
    Debian Contributors is a great site for massaging your ego around contributions to Debian; it's also a useful point of reference from a data protection viewpoint in terms of information the project holds about contributors - everything is already public, but the Contributors website provides folk with an easy way to find their own information (with various configurable options about whether that's made public or not). Tássia is working on improving the various data feeds into the site, but realistically this is the responsibility of every Debian service owner.
  • New Member BOF
    I m part of the teams that help get new folk into Debian - primarily as a member of the New Member Front Desk, but also as a mostly inactive Application Manager. It s been a while since we did one of these sessions so the Front Desk/Debian Account Managers that were present did a panel session. Nothing earth shattering came out of it; like keyring-maint this is a team that has historically had problems, but is currently running smoothly.

11 September 2023

Debian Brasil: Debian Day 30 years in Maceió

Debian Day Maceió 2023 was held at the Senai auditorium in Maceió, with the support and organization of the Oxe Hacker Club. Around 90 people registered, and 40 were present on Saturday to take part in the event, which featured the following 6 talks. Debian Day also included an install fest and an unconference (random chat, food and drinks).

Debian Brasil: Debian Day 30 years in Maceió - Brazil

The Debian Day in Maceió 2023 took place at the Senai auditorium in Maceió with the support and organization of the Oxe Hacker Club. There were around 90 people registered, and 40 attendees present on Saturday to participate in the event, which featured the following 6 talks. Debian Day also had an install fest and unconference (random chat, food and drinks).

31 August 2023

Russell Coker: Links August 2023

This is an interesting idea from Bruce Schneier, an AI Dividend paid to every person for their contributions to the input of ML systems [1]. We can't determine whose input was most used, so sharing the money equally seems fair. It could end up as yet another justification for a Universal Basic Income. The Long Now foundation has an insightful article about preserving digital data [2]. It covers the history of lost data and the new challenges archivists face with proprietary file formats. Tesla gets fined for having a special Elon mode [3]; turns out that being a billionaire isn't an exemption from road safety legislation. Wired has an interesting article about the Olympics Destroyer malware that Russia used to attack the 2018 Olympics [4]. Wired has an interesting article about Marcus Hutchins, how he prevented a serious bot attack and how he had a history in crime when he was a teenager [5]. It's good to see that some people can reform. The IEEE has a long and informative article about what needs to be done to transition to electric cars [6]. It's a lot of work and we should try and do it as fast as possible. Linus Tech Tips has an interesting video about a new cooling system for laptops (and similar use cases for moving tens of watts from a thin space) [7]. This isn't going to be useful for servers or desktops as big heavy heatsinks work well for them. But for something to put on top of a laptop CPU, or to have several of them connected to a laptop CPU by heat pipes, it could be very useful. The technology of piezoelectric cooling devices is interesting on its own; I expect we will see more of that in future.

30 July 2023

Russell Coker: Links July 2023

Phys.org has an interesting article about finding evidence for nanohertz gravity waves [1]. 1 nanohertz is a wavelength of 31.7 light years! Wired has an interesting story about OpenAI saying that no further advances will be made with larger training models [2]. Bruce Schneier and Nathan Sanders wrote an insightful article about the need for government-run GPT type systems [3]. He focuses on the US, but having other countries/groups of countries do it would be good too. We could have a Chinese one, an EU one, etc. I don't think it would necessarily make sense for a small country like Australia to have one, but it would make a lot more sense than having nuclear submarines (which are much more expensive). The Roadmap project is a guide for learning new technologies [4]. The content seems quite good. Bigthink has an informative and darkly amusing article "Horror stories of cryonics: The gruesome fates of futurists hoping for immortality" [5]. From this month in Australia psilocybin (the active ingredient in Magic Mushrooms) can be prescribed for depression and MDMA (known as Ecstasy on the streets) can be prescribed for PTSD [6]. That's great news! Slate has an interesting article about the Operation Underground Railroad organisation that purports to help sex-trafficked children [7]. This is noteworthy now with the controversy over the recent movie about that. Apparently they didn't provide much help for kids after they had been rescued, and at least some of the kids were trafficked specifically to fulfill the demand that they created by offering to pay for it. Vigilantes aren't as effective as law enforcement. The ACCC is going to prevent Apple and Google from forcing app developers to give them a share of in-app purchases in Australia [8]. We need this in every country! This site has links to open source versions of proprietary games [9]. Vice has an interesting article about the Hungarian neuroscientist Viktor Tóth, who taught rats to play Doom 2 [10]. The next logical step is to have mini tanks that they can use in real battlefields. Like the Mason's Rats episode of Love Death and Robots on Netflix. Brian Krebs wrote a mind-boggling pair of blog posts about the Ashley Madison hack [11]. A Jewish disgruntled ex-employee sending anti-semitic harassment to the Jewish CEO and maybe cooperating with anti-semitic organisations to harass him is one of the people involved, but he killed himself (due to mental health problems) before the hack took place. Long Now has an insightful blog post about digital avatars being used after the death of the people they are based on [12]. Tavis Ormandy's description of the Zenbleed bug is interesting [13]. The technique for finding the bug is interesting, as well as the information on how the internals of the CPUs in question work. I don't think this means AMD is bad; trying to deliver increasing performance while limited by the laws of physics is difficult and mistakes are sometimes made. Let's hope the microcode updates are well distributed. The Hacktivist documentary about Andrew "Bunnie" Huang is really good [14]. Bunnie's lecture about supply chain attacks is worth watching [15]. Most descriptions of this issue don't give nearly as much information. However bad you thought this problem was, after you watch this lecture you will realise it's worse than that!

29 July 2023

Shirish Agarwal: Manipur, Data Leakage, Aadhar, and IRCv3

Manipur Lot of news from Manipur. Seems the killings haven t stopped. In fact, there was a huge public rally in support of the rapists and murderers as reported by Imphal Free Press. The Ruling Govt. both at the Center and the State being BJP continuing to remain mum. Both the Internet shutdowns have been criticized and seems no effect on the Government. Their own MLA was attacked but they have chosen to also be silent about that. The opposition demanded that the PM come in both the houses and speak but he has chosen to remain silent. In that quite a few bills were passed without any discussions. If it was not for the viral videos nobody would have come to know of anything  . Internet shutdowns impact women disproportionately as more videos of assaults show  Of course, as shared before that gentleman has been arrested under Section 66A as I shared in the earlier blog post. In any case, in the last few years, this Government has chosen to pass most of its bills without any discussions. Some of the bills I will share below. The attitude of this Govt. can be seen through this cartoon
The above picture shows the disqualified M.P. Rahul Gandhi because he had asked what is the relationship between Adani and Modi. The other is the Mr. Modi, the Prime Minister who refuses to enter and address the Parliament. Prem Panicker shares how we chillingly have come to this stage when even after rapes we are silent

Data Leakage According to most BJP followers this is not a bug but a feature of this Government. Sucheta Dalal of Moneylife shared how the data leakage has been happening at the highest levels in the Government. The leakage is happening at the ministerial level because unless the minister or his subordinate passes a certain startup others cannot come to know. As shared in the article, while the official approval may take 3-4 days, within hours other entities start congratulating. That means they know that the person/s have been approved.While reading this story, the first thought that immediately crossed my mind was data theft and how easily that would have been done. There was a time when people would be shocked by articles such as above and demand action but sadly even if people know and want to do something they feel powerless to do anything

PAN Linking and Aadhar Last month GOI made PAN Linking to Aadhar a thing. This goes against the judgement given by the honored Supreme Court in September 2018. Around the same time, Moneylife had reported on the issue on how the info. on Aadhar cards is available and that has its consequences. But to date nothing has happened except GOI shrugging. In the last month, 13 crore+ users of PAN including me affected by it  I had tried to actually delink the two but none of the banks co-operated in the same  Aadhar has actually number of downsides, most people know about the AEPS fraud that has been committed time and time again. I have shared in previous blog posts the issue with biometric data as well as master biometric data that can and is being used for fraud. GOI either ignorant or doesn t give a fig as to what happens to you, citizen of India. I could go on and on but it would result in nothing constructive so will stop now

IRCv3 I had been enthused when I heard about IRCV3. While it was founded in 2016, it sorta came on in its own in around 2020. I did try matrix or rather riot-web and went through number of names while finally setting on element. While I do have the latest build 1.11.36 element just hasn t been workable for me. It is too outsized, and occupies much more real estate than other IM s (Instant Messengers and I cannot correct size it like I do say for qbittorrent or any other app. I had filed couple of bugs on it but because it apparently only affects me, nothing happened afterwards  But that is not the whole story at all. Because of Debconf happening in India, and that too Kochi, I decided to try out other tools to see how IRC is doing. While the Debian wiki page shares a lot about IRC clients and is also helpful in sharing stats by popcounter ( popularity-contest, thanks to whoever did that), it did help me in trying two of the most popular clients. Pidgin and Hexchat, both of which have shared higher numbers. This might be simply due to the fact that both get downloaded when you install the desktop version or they might be popular in themselves, have no idea one way or the other. But still I wanted to see what sort of experience I could expect from both of them in 2023. One of the other things I noticed is that Pidgin is not a participating organization in ircv3 while hexchat is. Before venturing in, I also decided to take a look at oftc.net. Came to know that for sometime now, oftc has started using web verify. I didn t see much of a difference between hcaptcha and gcaptcha other than that the fact that they looked more like oil paintings rather than anything else. While I could easily figure the odd man out or odd men out to be more accurate, I wonder how a person with low or no vision would pass that ??? Also much of our world is pretty much contextual based, figuring who the odd one is or are could be tricky. I do not have answers to the above other than to say more work needs to be done by oftc in that area. I did get a link that I verified. But am getting ahead of the story. Another thing I understood that for some reason oftc is also not particpating in ircv3, have no clue why not :(I

Account Registration in Pidgin and Hexchat This is the biggest pain point in both. I failed to register via either Pidgin or Hexchat. I couldn't find a way in either client to register my handle. I have had on/off relationships with IRC over the years, the biggest issue being, IIRC, that if you stop using your handle for a month or two others can use it. IIRC, every couple of months or so, irc/oftc releases the dormant ones. Matrix/Vector has done quite a lot in that regard, but that's a different thing altogether so for the moment I will keep that aside. So, how to register for the network? This is where webchat.oftc.net comes in. You get a quaint 1970s IRC window (probably emulated) where you call NickServ to help you. As can be seen, it is one of the half a dozen bots that help on IRC. So the first thing you need to do is /msg nickserv help - what you are doing is asking NickServ what services it has, and NickServ shares the number of services it offers. After looking through them, the one you are looking for is register: /msg nickserv register. Both commands tell you what you need to do, as can be seen by this
Let's say you are XYZ and your e-mail address is xyz@xyz.com. This is just a throwaway id I am using for the purpose of showing how the process is done. For this, also assume your password is 1234xyz;0x or something like this. I have shared about APG (Advanced Password Generator) before, so you could use that to generate all sorts of passwords for yourself. So the next step would be /msg nickserv register 1234xyz;0x xyz@xyz.com. Now the thing to remember is you need to be sure that the email is valid and in your control, as it will generate a link with an hCaptcha. Interestingly, their accessibility signup fails or errors out. I just entered my email and it errors out. Anyway, back to it. Even after completing the puzzle, even with the valid username and password, neither Pidgin nor Hexchat would let me in. Neither of the clients were helpful in figuring out what was going wrong. At this stage, I decided to see whether the specs of IRCv3 would help out in any way, and came across this. One would have thought that this is one of the more urgent things that need to be fixed, but for reasons unknown it's still in draft mode. Maybe they (the participants) are not in consensus, no idea. Unfortunately, it seems that the participants of IRCv3 have chosen a sort of closed working model, as the channel is restricted. The only notes of any consequence are being shared by Ilmari Lauhakangas from Finland. Apparently, Mr/Ms/they Ilmari is also a LibreOffice hacker. It is possible that there is or has been a lot of drama before or something, and that's why things are the way they are. Either way, it doesn't tell me when this will be fixed, if ever. For people who are on mobiles and whatnot, without Element, it would be 10x harder. Update: Saw this discussion on GitHub. I don't see a way out. It seems I would be unable to be part of DebConf Kochi 2023. Best of luck to all the participants, and please share as much as possible of what happens during the event.

26 July 2023

Shirish Agarwal: Manipur Violence, Drugs, Binging on Northshore, Alaska Daily, Doogie Kamealoha and the EU Digital Resilience Act.

Manipur Videos Warning: The text might be mature and will have references to violence so if there are kids or you are sensitive, please excuse. Few days back, saw the videos and I cannot share the rage, shame and many conflicting emotions that were going through me. I almost didn t want to share but couldn t stop myself. The woman in the video were being palmed, fingered, nude, later reportedly raped and murdered. And there have been more than a few cases. The next day saw another video that showed beheaded heads, and Kukis being killed just next to their houses. I couldn t imagine what those people must be feeling as the CM has been making partisan statements against them. One of the husbands of the Kuki women who had been paraded, fondled is an Army Officer in the Indian Army. The Meiteis even tried to burn his home but the Army intervened and didn t let it get burnt. The CM s own statement as shared before tells his inability to bring the situation out of crisis. In fact, his statement was dumb stating that the Internet shutdown was because there were more than 100 such cases. And it s spreading to the nearby Northeast regions. Now Mizoram, the nearest neighbor is going through similar things where the Meitis are not dominant. The Mizos have told the Meitis to get out. To date, the PM has chosen not to visit Manipur. He just made a small 1 minute statement about it saying how the women have shamed India, an approximation of what he said.While it s actually not the women but the men who have shamed India. The Wire has been talking to both the Meitis, the Kukis, the Nagas. A Kuki women sort of bared all. She is right on many counts. The GOI while wanting to paint the Kukis in a negative light have forgotten what has been happening in its own state, especially its own youth as well as in other states while also ignoring the larger geopolitics and business around it. Taliban has been cracking as even they couldn t see young boys, women becoming drug users. I had read somewhere that 1 in 4 or 1 in 5 young person in Afghanistan is now in its grip. So no wonder,the Taliban is trying to eradicate and shutdown drug use among it s own youth. Circling back to Manipur, I was under the wrong impression that the Internet shutdown is now over. After those videos became viral as well as the others I mentioned, again the orders have been given and there is shutdown. It is not fully shut but now only Govt. offices have it. so nobody can share a video that goes against any State or Central Govt. narrative  A real sad state of affairs  Update: There is conditional reopening whatever that means  When I saw the videos, the first thing is I felt was being powerless, powerless to do anything about it. The second was if I do not write about it, amplify it and don t let others know about it then what s the use of being able to blog

Mental Health, Binging on various Webseries Both videos shocked me and I couldn't sleep that night or the night after. Even after doing work and all, they would come back unobtrusively in my nightmares. While I felt a bit foolish, I thought it would be nice to binge on some webseries. Little did I know that both Northshore and Alaska Daily would have stories similar to what is happening here. While the story in Alaska Daily is fictional, it very closely resembles a real newspaper, the Anchorage Daily News. Even there, the victims are Inuit women, one of the marginalized communities in Alaska. The only difference I can see between the GOI and the Alaskan government is that the Alaskan government was much more subtle about doing the same things. There are some other differences too. First, the State is and was responsive to the local press, and apart from one close call for one of its reporters, most reporters do not have to fear for their lives. Here, the press cannot look after either their livelihood or their life. It was a juvenile who actually shot the video, uploaded it and made it viral. One only needs to remember the case of Siddique Kappan: just for sharing the news and the video he was arrested, and bail was denied to him time and time again on the grounds that the Police were "investigating". Only after 2 years and 3 months did he get bail, and that too because the Police were unable to show prima facie evidence for any of the charges they had. One of the better interviews, though, was of Vrinda Grover. For those who don't know her, her Wikipedia page tells a bit about her, although it is woefully incomplete; most recently, for example, she relentlessly pursued the unconstitutional Internet shutdown that happened in Kashmir for 5 months. Just like in Manipur, the shutdown was there to bury crimes either committed or facilitated by the State. For the issue of livelihood, one can take the cases of Bipin Yadav and Rashid Hussain. Both were fired by their employer Dainik Bhaskar because they asked the BJP MP Smriti Irani what she had done for the state. The problem for Dainik Bhaskar, and for any other mainstream media outlet, is that most of them rely on Government advertisements. Private investment in India has fallen to record lows, mostly due to the policies made by the Centre. If any entity or sector grows a bit, then either Adani or Ambani will one way or another take it over. So for most first- and second-generation entrepreneurs it doesn't make sense to grow and then finally sell to one of these corporates at a loss, with the GOI on Adani's and Ambani's side of any deal. The MSME sector, which is and used to be the second-highest employer, hasn't been able to recover from the shocks of demonetization, GST and then the pandemic, each resulting in more and more closures and shutdowns. Joblessness has gone up tremendously, especially in North India, which the Government tries to deny. The most telling point in all the above examples is that within a month or less, whatever the media reports gets scrubbed. Even the firing of the journalists, which was covered by some of the mainstream media, isn't there anymore; I have to use secondary sources instead of primary sources. One can imagine the chilling effect this has on reportage. The sad fact is that even with all the money in the world, the PM is unable to come to Parliament to face questions.
Why is the PM not answering in Parliament, even Rahul Gandhi is not there - Surya Pratap Singh, prev. IAS Officer.
The above poster/question is by Surya Pratap Singh, a retired IAS officer. He asks why the PM is unable to answer in either of the houses. As shared before, the Govt. wants very limited discussion. Even yesterday, Lok Sabha TV showed only the BJP MPs making statements, and stayed silent or cut the mic during whatever questions or statements the opposition made. If this isn't mockery of Indian democracy then I don't know what is. Even the media landscape has been altered substantially within the last few years. Adani and Ambani have divided the media pie between themselves. One of the last bastions of the free press, NDTV, was bought by Adani in a hostile takeover. Both Ambani and Adani are close to this Government; in fact, there is no sector in which one or the other is not present. Media houses like Newsclick, The Wire etc., which are a fraction of the size of the mainstream press, are where most of the youth have been going to get their news, as they are not partisan. Although even there, the GOI has interfered time and again. The Wire has had too many 504 Gateway Timeouts in recent months, and they have been forced to move most of their journalism from online articles to video, or rather YouTube, in order to escape both the censoring and the timeouts shared above. In such a hostile environment, it is a miracle that both organizations are somehow able to survive. Most local reportage is also going to YouTube, as that's the best way for them to stay clear of Govt. censors. Not an ideal situation, but that's the way it is. The difference between the Indian and Israeli media can be seen through this:
The above is a screenshot of how the Israeli media reacted to the Israeli Government and Knesset over the judicial overhaul. Here, by contrast, the press erodes its own standing by giving in to the Government day and night.

Binging on Webseries Saw Northshore, Three Pines, Alaska Daily and Doogie Kamealoha M.D., which is based on Doogie Howser M.D. Of the four, I enjoyed Doogie Kamealoha M.D. the most, but then it might be because it's a copy of Doogie Howser just updated for the new millennium, and there are some good childhood memories associated with that series. The others are also good. I tried not to watch European stuff, as most of it is twisted and I didn't want to be in that headspace.

EU Digital Operational Resilience Act and impact on FOSS A few days ago the EU apparently shared the above Act; one can read more about it here. This would have a particular impact on FOSS, as most development of the various FOSS distributions happens in the EU. A fair bit of Debian's own development happens in Germany and France. There have been calls to make things clearer, especially for FOSS, given that most developers do FOSS development on the side or as a hobby while their day job is, and will remain, something different. The part about consumer electronics and FOSS is a tricky one, as updates can screw up your systems. Microsoft has a long history of devices not working after an update or upgrade, and this is not limited to Windows as they would like to believe; even Apple seems to have its share of issues time and time again. One would have hoped that these companies, which make billions of dollars from their hardware and software sales, would do more testing and QA and be more aware of security issues. FOSS, on the other hand, while being more responsive, doesn't make as much money as the competitors. Let's take the most concrete example. The most successful FOSS phone maker is Purism, but its phone has priced itself out of the market. A huge part of that is down to both economies of scale and trying to build infrastructure and skills in the States where little or none exists. Compare that to, say, the PinePhone Pro, which is manufactured in Hong Kong and priced at a third of the same. For most people it is simply not affordable in these times. Add to that the complexity of modern cellphones, which makes it harder, not easier, for most people to stay vigilant and keep the phone updated at all times. Maybe we need more dumbphones such as the Light Phone and Punkt, but whether those can be remotely hacked or not, there don't seem to be any answers; I haven't even seen anybody ask those questions. They may have their own chicken-and-egg issues. For people like me who have lost hearing, I can navigate smartphones for now, but as I grow old I don't see anything that would help me. For much of the elderly population, both hearing and sight are the first to fade. There don't seem to be any solutions targeted at them, even though they are 5-10% of any population at the very least, probably more in Europe and the U.S. as well as Japan and China. All of them are clearly under-served markets, but I don't know of a solution for them. At least to me, that's an open question.

6 June 2023

Russell Coker: PinePhonePro First Impression

Hardware I received my PinePhone Pro [1] on Thursday, and in many ways it seems better than the Purism Librem 5 [2] that I have previously written about. The PinePhone is thinner and lighter, and yet has a much longer battery life. A friend described the Librem5 as the CyberTruck of phones, and not in a good way. In a test I had my PinePhone and my Librem5 fully charged and left them for 4.5 hours without doing anything much with them; afterwards the PinePhone was at 85% and the Librem5 was at 57%. So the Librem5 will run out of battery after about 10 hours of not being used, while a PinePhonePro can be expected to last about 30 hours (a short sketch of this arithmetic follows at the end of this post). The PinePhonePro isn't as good as some recent Android phones in this regard, but it shows the potential to be quite usable. For this test both phones were connected to a 2.4GHz Wifi network (which uses less power than 5GHz) and doing nothing much with an out-of-the-box configuration. A phone that is checking email, social networking, and a couple of IM services will use the battery faster, but even if the PinePhone has its battery drained twice as fast in a more realistic test, it will still be usable. Here are the Passmark results from the PinePhone Pro [3], which got a CPU score of 888 compared to 507 for the Librem 5 and 678 for one of the slower laptops I've used. The results are excluded from the Passmark averages because they identified the CPU as only having 4 cores (expecting just 4*A72) while the PinePhonePro has 6 cores (2*A72+4*A53). This phone definitely has the CPU power for convergence [4]!

Default OS By default the PinePhone has a KDE based GUI and the Librem5 has a GNOME based GUI. I don't like any iteration of GNOME (I have tried them all and disliked them all) and I like KDE, so I will tend to like anything KDE based more than anything GNOME based. But in addition to that, the PinePhone has an interface that looks a lot like Android, with the three on-screen buttons at the bottom of the display and the slide-up tray for installed apps. Android is the most popular phone OS, and looking like the most common option is often a good idea for a new and different product; this seems like an objective criterion for saying that the default GUI on the PinePhone is a better choice (at least as a default). When I first booted it and connected it to Wifi, the updates app said that there were 633 updates to apply, but it never applied them (I tried clicking on the update button but to no avail) and didn't give any error message. For me, not being Debian is enough reason to dislike Manjaro, but if that wasn't enough then the failure to update would be a good start. When I ran pacman in a terminal window it said that each package was corrupt and asked if I wanted to delete it. According to tar tvJf the packages weren't corrupt. After downloading them again it said that they were corrupt again, so it seemed that pacman wasn't working correctly. When the screen is locked and a call comes in, it shows a window with Accept and Reject buttons but neither of them works. The default country code for Spacebar (the SMS app) is +1 (US) even though I specified Australia at the initial login. It also doesn't pick up the APN, unlike Android phones which seem to have some sort of list of APNs.

Upgrading to Debian The Debian Wiki page about Installing on the PinePhone Pro has the basic information [5]. The first thing it covers is installing the TOW boot loader, which is already installed by default in recent PinePhones (such as mine).
You can recognise that TOW is installed by pressing the volume-up button in the early stages of boot (described as "before and during the second vibration"); the LED will then turn blue and the phone will act as a USB mass storage device, which makes it easy to do other install/recovery tasks. The other TOW option is to press volume-down to boot from a MicroSD card (the default is to boot the OS on the eMMC). The images linked from the Debian wiki page are designed to be installed with bmaptool from the bmap-tools Debian package. After installing that package and downloading the pre-built Mobian image I installed it with the command bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdb, where /dev/sdb is the device at which the USB-mapped PinePhone storage appeared. That took 6 minutes and then I rebooted my PinePhone into Mobian! Unfortunately the default GUI for Mobian is GNOME/Phosh. Changing it to KDE is my next task.
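As promised above, here is the battery arithmetic as a tiny Python sketch. It simply extrapolates the measured idle drain linearly, which is an assumption rather than a battery model, but it reproduces the "about 10 hours" and "about 30 hours" figures.

# Linear extrapolation of idle battery life from one measurement:
# both phones started at 100% and were left idle for 4.5 hours.
def hours_to_empty(start_pct: float, end_pct: float, elapsed_hours: float) -> float:
    """Estimate hours from full to empty, assuming a constant drain rate."""
    drain_per_hour = (start_pct - end_pct) / elapsed_hours   # percent per hour
    return start_pct / drain_per_hour

print(f"PinePhonePro: {hours_to_empty(100, 85, 4.5):.0f} hours")  # ~30
print(f"Librem5:      {hours_to_empty(100, 57, 4.5):.0f} hours")  # ~10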

5 June 2023

Reproducible Builds: Reproducible Builds in May 2023

Welcome to the May 2023 report from the Reproducible Builds project In our reports, we outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.


Holger Levsen gave a talk at the 2023 edition of the Debian Reunion Hamburg, a semi-informal meetup of Debian-related people in northern Germany. The slides are available online.
In April, Holger Levsen gave a talk at foss-north 2023 titled "Reproducible Builds, the first ten years". Last month, Holger's talk was covered in a round-up of the conference on the Free Software Foundation Europe (FSFE) blog.
Pronnoy Goswami, Saksham Gupta, Zhiyuan Li, Na Meng and Daphne Yao from Virginia Tech published a paper investigating the Reproducibility of NPM Packages. The abstract includes:
When using open-source NPM packages, most developers download prebuilt packages on npmjs.com instead of building those packages from available source, and implicitly trust the downloaded packages. However, it is unknown whether the blindly trusted prebuilt NPM packages are reproducible (i.e., whether there is always a verifiable path from source code to any published NPM package). [ ] We downloaded versions/releases of 226 most popularly used NPM packages and then built each version with the available source on GitHub. Next, we applied a differencing tool to compare the versions we built against versions downloaded from NPM, and further inspected any reported difference.
The paper reports that among the 3,390 versions of the 226 packages, only 2,087 versions are reproducible, and furthermore that multiple factors contribute to the non-reproducibility, including flexible versioning information in the package.json file and divergent behaviour between distinct versions of the tools used in the build process. The paper concludes with insights for future verifiable build procedures. Unfortunately, a PDF is not publicly available yet, but a Digital Object Identifier (DOI) is available on the paper's IEEE page.
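To make the paper's differencing step concrete, here is a rough Python sketch of comparing a registry tarball against a locally rebuilt one by per-file checksum. The file names are placeholders, and real tooling (such as diffoscope) does far more than this.

# Compare two npm tarballs file by file: one downloaded from the registry,
# one rebuilt from the GitHub source (for instance with `npm pack`).
import hashlib
import tarfile

def member_digests(tarball_path: str) -> dict[str, str]:
    """Map each regular file in the tarball to the SHA-256 of its contents."""
    digests = {}
    with tarfile.open(tarball_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile():
                data = tar.extractfile(member).read()
                digests[member.name] = hashlib.sha256(data).hexdigest()
    return digests

registry = member_digests("lodash-4.17.21-registry.tgz")   # placeholder file names
rebuilt = member_digests("lodash-4.17.21-rebuilt.tgz")
for name in sorted(set(registry) | set(rebuilt)):
    if registry.get(name) != rebuilt.get(name):
        print("differs or missing:", name)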
Elsewhere in academia, Betul Gokkaya, Leonardo Aniello and Basel Halak of the School of Electronics and Computer Science at the University of Southampton published a new paper containing a broad overview of attacks and comprehensive risk assessment for software supply chain security. Their paper, titled Software supply chain: review of attacks, risk assessment strategies and security controls, analyses the most common software supply-chain attacks by providing the latest trend of analyzed attack, and identifies the security risks for open-source and third-party software supply chains. Furthermore, their study introduces unique security controls to mitigate analyzed cyber-attacks and risks by linking them with real-life security incidence and attacks . (arXiv.org, PDF)
NixOS is now tracking two new reports at reproducible.nixos.org. Aside from the collection of build-time dependencies of the minimal and Gnome installation ISOs, this page now also contains reports that are restricted to the artifacts that make it into the image. The minimal ISO is currently reproducible except for Python 3.10, which hopefully will be resolved with the coming update to Python version 3.11.
On our rb-general mailing list this month: David A. Wheeler started a thread noting that the OSSGadget project's oss-reproducible tool was measuring something related to, but not the same as, reproducible builds. Initially they had adopted the term "semantically reproducible build" for what it measured, which they defined as being if its build results can be either recreated exactly (a bit-for-bit reproducible build), or if the differences between the release package and a rebuilt package are not expected to produce functional differences in normal cases. This generated a significant number of replies, and several were concerned that people might confuse what they were measuring with "reproducible builds". After discussion, the OSSGadget developers decided to switch to the term "semantically equivalent" for what they measured in order to reduce the risk of confusion. Vagrant Cascadian (vagrantc) posted an update about GCC, binutils, and Debian's build-essential set "with some progress, some hope, and I daresay, some fears". Lastly, kpcyrd asked a question about building a reproducible Linux kernel package for Arch Linux (answered by Arnout Engelen). In the same thread, David A. Wheeler pointed out that the Linux kernel documentation now has a chapter about reproducible kernel builds as well.
In Debian this month, nine reviews of Debian packages were added, 20 were updated and 6 were removed, all adding to our knowledge about identified issues. In addition, Vagrant Cascadian added a link to the source code causing various ecbuild issues. [ ]
The F-Droid project updated its Inclusion How-To with a new section explaining why it considers reproducible builds to be best practice, and hopes developers will support the team's efforts to make as many (new) apps reproducible as it reasonably can.
In diffoscope development this month, version 242 was uploaded to Debian unstable by Chris Lamb, who also made a number of changes. In addition, Mattia Rizzolo documented how to (re)-produce a binary blob in the code [ ] and Vagrant Cascadian updated the version of diffoscope in GNU Guix to 242 [ ].
reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Holger Levsen uploaded versions 0.7.24 and 0.7.25 to Debian unstable, which added support for Tox versions 3 and 4 with help from Vagrant Cascadian. [ ][ ][ ]
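For readers who have not used it, the core idea reprotest automates can be sketched in a few lines: build twice while deliberately varying the environment, then compare the artifact hashes. This is an illustration of the concept only, not reprotest's actual interface; the build command and artifact name are placeholders.

# Build the same source twice with different environment variations and
# compare the resulting artifact. reprotest automates this with many more
# variations (time, locale, user, filesystem ordering, ...).
import hashlib
import os
import subprocess

BUILD_CMD = ["make", "dist.tar.gz"]      # placeholder build command
ARTIFACT = "dist.tar.gz"                 # placeholder artifact name

def build_and_hash(extra_env: dict[str, str]) -> str:
    env = {**os.environ, **extra_env}
    subprocess.run(["make", "clean"], check=True, env=env)
    subprocess.run(BUILD_CMD, check=True, env=env)
    with open(ARTIFACT, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Vary things that commonly leak into build outputs: timezone and locale.
first = build_and_hash({"TZ": "UTC", "LANG": "C.UTF-8"})
second = build_and_hash({"TZ": "Pacific/Kiritimati", "LANG": "fr_FR.UTF-8"})
print("reproducible" if first == second else "NOT reproducible")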

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate, and this month we wrote a large number of such patches. In addition, Jason A. Donenfeld filed a bug (now fixed in the latest alpha version) in the Android issue tracker to report that generateLocaleConfig in Android Gradle Plugin version 8.1.0 generates XML files using non-deterministic ordering, breaking reproducible builds. [ ]

Testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In May, a number of changes were made by Holger Levsen:
  • Update the kernel configuration of arm64 nodes to only put required modules in the initrd, saving space in the /boot partition. [ ]
  • A huge number of changes to a new tool to document/track Jenkins node maintenance, including adding --fetch, --help, --no-future and --verbose options [ ][ ][ ][ ] as well as a suite of new actions, such as apt-upgrade, command, deploy-git, rmstamp, etc. [ ][ ][ ][ ], in addition to a significant amount of refactoring [ ][ ][ ][ ].
  • Issue warnings if apt has updates to install. [ ]
  • Allow Jenkins to run apt-get update in a maintenance job. [ ]
  • Installed bind9-dnsutils on some Ubuntu 18.04 nodes. [ ][ ]
  • Fixed the Jenkins shell monitor to correctly deal with little-used directories. [ ]
  • Updated the node health check to warn when apt upgrades are available. [ ]
  • Performed some node maintenance. [ ]
In addition, Vagrant Cascadian added the nocheck, nopgo and nolto build options when building gcc-* and binutils packages [ ] as well as performed some node maintenance [ ][ ]. Separately, Roland Clobus updated the openQA configuration to specify longer timeouts and access to the developer mode [ ] and updated the URL used for reproducible Debian Live images [ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

18 April 2023

Tim Retout: Data Diodes

At ArgoCon today, Thomas Fricke gave a nice talk on Cloud Native Deployments in Air Gapped Environments, describing container vulnerability scanning in the German energy sector. Since he didn't mention data diodes, and since some of my colleagues at Oakdoor/PA Consulting make data diodes for a living, I thought this might be interesting to write about! It's one thing to have an air-gapped system, but eventually, in order to be useful, you're going to have to move data into it, and this is going to need something better than just plugging a USB stick into your critical system. Just ask Iran how well that goes. Eight years after Stuxnet, the UK National Cyber Security Centre published the NCSC Safely Importing Data Pattern - but I found this a bit cryptic on first reading, because it's not clear what type of systems the pattern applies to, and it deliberately uses technology-neutral language. Also, this was published around the same time GDPR was being implemented, mentions "sensitive or personal data", and claims to be aimed at small to medium organisations - but I don't know how many small businesses implement a MILS security architecture. So without picking up on the mention of "data diode", you can be left scratching your head about how to actually implement the pattern. One answer uses Oakdoor components. The Oakdoor diodes themselves are quite interesting - they're electrical rather than optical like most data diodes. The other thing I'd always wondered is how on earth you could even establish a TCP handshake across one - the answer is, you can't, so you use a UDP-based protocol like TFTP for file transfer. In this way, you build the transform/verify and protocol break that the NCSC pattern requires. Congratulations: you can now import your documents to your otherwise air-gapped system without also importing malicious code, and without risking data exfiltration. Note carefully that the Safely Importing Data pattern makes no guarantees about the integrity of your documents - they could be severely modified going through this process. For the same reason, I anticipate challenges applying this pattern to software binaries.
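To make the "no TCP handshake" point concrete, here is a toy Python sender of the kind that could sit on the low side of a diode: it can only push UDP datagrams and never sees a reply. The destination address and file name are placeholders, and real diode proxies add integrity checks, redundancy and the protocol break described above.

# One-way file push over UDP: the sender never receives anything, so there is
# no handshake, no acknowledgement and no possibility of retransmission on
# request -- exactly the constraint a data diode imposes.
import socket
import struct

DEST = ("192.0.2.10", 6969)          # placeholder address on the high side
CHUNK = 1024

def send_file(path: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, "rb") as f:
        seq = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            # Prefix each datagram with a sequence number so the receiver can
            # at least spot gaps -- it has no way to ask for a resend.
            sock.sendto(struct.pack("!I", seq) + chunk, DEST)
            seq += 1
    # Signal end-of-file with an empty payload; send it several times because
    # there is no acknowledgement to confirm it arrived.
    for _ in range(3):
        sock.sendto(struct.pack("!I", seq), DEST)

send_file("document.pdf")            # placeholder file name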

7 April 2023

Matthew Palmer: Database Encryption: If It's So Good, Why Isn't Everyone Doing It?

a wordcloud of organisations who have been reported to have had data breaches in 2022 Just some of the organisations that leaked data in 2022
It seems like just about every day there's another report of another company getting hacked and having its sensitive data (or, worse, the sensitive data of its customers) stolen. Sometimes, people's most intimate information gets dumped for the world to see. Other times it's just used for identity theft, extortion, and other crimes. In the least-worst case, the attacker gets cold feet, but people still suffer stress and inconvenience from having to replace identity documents. A great way to protect information from being leaked is to encrypt it. We encrypt data while it's being sent over the Internet (with TLS), and we encrypt it when it's at rest (with disk or volume encryption). Yet everyone's data seems to still get stolen on a regular basis. Why? Because the data is kept online in an unencrypted form, sitting in the database while it's being used. This means that attackers can just connect to the database, or trick the application into dumping the database, and all the data is just lying there, waiting to be misused.

It's Not the Devs' Fault, Though You may be thinking that leaving an entire database full of sensitive data unencrypted seems like a terrible idea. And you're right: it is a terrible idea. But it's seemingly unavoidable. The problem is that in order to do what a database does best (query, sort, and aggregate data), it needs to be able to know what the data is. When you encrypt data, however, all the database sees is a locked box.
a locked box Not very useful for a database
The database can't tell what's in the locked box: whether it's a number equal to 42, or a date that's less than 2023-01-01, or a string that contains the substring "foo". Every value is just an opaque blob of "stuff", and the database is rendered completely useless. Since modern applications usually rely pretty heavily on their database, it's essentially impossible to build an application if you've turned your database into a glorified flat file by encrypting everything in it. Thus, it's hardly surprising that developers have to leave the data lying around unencrypted, for anyone to come along and take.

Introducing Enquo I said before that having data unencrypted in a database is seemingly unavoidable. The operative word is "seemingly", because there are some innovative cryptographic techniques that can make it possible to query encrypted data.
Andy Dwyer being amazed Indeed
The purpose of the Enquo project is to provide a common set of cryptographic primitives that implement ENcrypted QUery Operations (i.e. "Enquo"), and to integrate those operations into databases, ORMs, and anywhere else that could benefit. The end goal is to provide the ability to encrypt all the data stored in any database server, while still allowing the data to be queried and aggregated. So far, the project consists of these components:
  • the enquo-core library, that implements queryable encrypted integers, dates, and text in Rust and Ruby;
  • a PostgreSQL extension, pg_enquo, that allows PostgreSQL to query encrypted data; and
  • a Rails ActiveRecord extension, ActiveEnquo, that augments ActiveRecord to do the encryption/decryption required.
Support for other languages and ORMs is designed to be as straightforward as possible, and integration with other databases is mostly dependent on their own extensibility. The project's core tenets emphasise both uncompromising security and a friendly developer experience. Naturally, all Enquo code is open source, released under the MIT licence.
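As a flavour of how equality queries over encrypted data can work at all, here is a minimal sketch of a keyed blind index. To be clear, this is not Enquo's actual construction, and the key shown is a placeholder; it only illustrates the general trade-off of deterministic tokens: matching works, but equal values become visibly equal to the database.

# Blind-index sketch: store the real ciphertext in one column and a keyed,
# deterministic token of the plaintext in another. The database can match on
# the token column without ever seeing the value.
import hashlib
import hmac

INDEX_KEY = b"a-separate-key-held-by-the-application"   # placeholder key

def blind_index(plaintext: str) -> str:
    """Deterministic token used only for equality matching."""
    return hmac.new(INDEX_KEY, plaintext.encode(), hashlib.sha256).hexdigest()

# On insert, the application stores (ciphertext, blind_index(value)).
# On query, it computes the same token and asks the database for exact matches,
# e.g.: SELECT * FROM users WHERE email_idx = %s
token = blind_index("xyz@xyz.com")
print(token)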

Would You Like To Know More?
Desire to know more intensifies Everyone who uses a database...
If all this sounds relevant to your interests:
  1. If you use Ruby on Rails and PostgreSQL, you're halfway home already. Follow the ActiveEnquo getting started tutorial and see how much of your data Enquo can already protect. When you find data you want to encrypt but can't, tell me about it.
    • If you use Ruby and PostgreSQL with another ORM, such as Sequel, writing a plugin to support Enquo shouldn't be too difficult. The ActiveEnquo code should give you a good start. If you get stuck, get in touch.
  2. If you use PostgreSQL with another programming language, tell me what language you use and we'll work together to get bindings for that library created.
  3. If you use another database server, support is coming for your database of choice eventually, but at present there's no timeline on support. On the off chance that you happen to be a hard-core database-hacking expert and would like to work on getting Enquo support into your preferred database server, I'd love to talk to you.

20 March 2023

Russ Allbery: Review: The Star Fraction

Review: The Star Fraction, by Ken MacLeod
Series: Fall Revolution #1
Publisher: Orbit
Copyright: 1995
Printing: 2001
ISBN: 1-85723-833-8
Format: Trade paperback
Pages: 341
Ken MacLeod is a Scottish science fiction writer who has become amusingly famous for repeatedly winning the libertarian Prometheus Award despite being a (somewhat libertarian-leaning) socialist. The Star Fraction is the first of a loose series of four novels about future solar system politics and was nominated for the Clarke Award (as well as winning the Prometheus). It was MacLeod's first novel. Moh Kohn is a mercenary, part of the Felix Dzerzhinsky Workers' Defence collective. They're available for hire to protect research labs and universities against raids from people such as animal liberationists and anti-AI extremists (or, as Moh calls them, creeps and cranks). As The Star Fraction opens, he and his smart gun are protecting a lab against an attack. Janis Taine is a biologist who is currently testing a memory-enhancing drug on mice. It's her lab that is attacked, although it isn't vandalized the way she expected. Instead, the attackers ruined her experiment by releasing the test drug into the air, contaminating all of the controls. This sets off a sequence of events that results in Moh, Janis, and Jordon Brown, a stock trader for a religious theocracy, on the run from the US/UN and Space Defense. I had forgotten what it was like to read the uncompromising old-school style of science fiction novel that throws you into the world and explains nothing, leaving it to the reader to piece the world together as you go. It's weirdly fun, but I'm either out of practice or this was a particularly challenging example of the genre. MacLeod throws a lot of characters at you quickly, including some that have long and complicated personal histories, and it's not until well into the book that the pieces start to cohere into a narrative. Even once that happens, the relationship between the characters and the plot is unobvious until late in the book, and comes from a surprising direction. Science fiction as a genre is weirdly conservative about political systems. Despite the grand, futuristic ideas and the speculation about strange alien societies, the human governments rarely rise to the sophistication of a modern democracy. There are a lot of empires, oligarchies, and hand-waved libertarian semi-utopias, but not a lot of deep engagement with the speculative variety of government systems humans have proposed. The rare exceptions therefore get a lot of attention from those of us who find political systems fascinating. MacLeod has a reputation for writing political SF in that sense, and The Star Fraction certainly delivers. Moh (despite the name of his collective, which is explained briefly in the book) is a Trotskyist with a family history with the Fourth International that is central to the plot. The setting is a politically fractured Britain full of autonomous zones with wildly different forms of government, theoretically ruled by a restored monarchy. That monarchy is opposed by the Army of the New Republic, which claims to be the legitimate government of the United Kingdom and is considered by everyone else to be terrorists. Hovering in the background is a UN entirely subsumed by the US, playing global policeman over a chaotic world shattered by numerous small-scale wars. This satisfyingly different political world is a major plus for me. The main drawback is that I found the world-building and politics more interesting than the characters. It's not that I disliked them; I found them enjoyably quirky and odd. 
It's more that so much is happening and there are so many significant characters, all set in an unfamiliar and unexplained world and often divided into short scenes of a few pages, that I had a hard time keeping track of them all. Part of the point of The Star Fraction is digging into their tangled past and connecting it up with the present, but the flashbacks added a confused timeline on top of the other complexity and made it hard for me to get lost in the story. The characters felt a bit too much like puzzle pieces until the very end of the book. The technology is an odd mix with a very 1990s feel. MacLeod is one of the SF authors who can make computers and viruses believable, avoiding the cyberpunk traps, but AI becomes relevant to the plot and the conception of AI here feels oddly retro. (Not MacLeod's fault; it's been nearly 30 years and a lot has changed.) On-line discussion in the book is still based on newsgroups, which added to the nostalgic feel. I did like the eventual explanation for the computing part of the plot, though; I can't say much while avoiding spoilers, but it's one of the more believable explanations for how a technology could spread in a way required for the plot that I've read. I've been planning on reading this series for years but never got around to it. I enjoyed my last try at a MacLeod series well enough to want to keep reading, but not well enough to keep reading immediately, and then other books happened and now it's been 19 years. I feel similarly about The Star Fraction: it's good enough (and in a rare enough subgenre of SF) that I want to keep reading, but not enough to keep reading immediately. We'll see if I manage to get to the next book in a reasonable length of time. Followed by The Stone Canal. Rating: 6 out of 10

5 March 2023

Reproducible Builds: Reproducible Builds in February 2023

Welcome to the February 2023 report from the Reproducible Builds project. As ever, if you are interested in contributing to our project, please visit the Contribute page on our website.
FOSDEM 2023 was held in Brussels on the 4th & 5th of February and featured a number of talks related to reproducibility. In particular, Akihiro Suda gave a talk titled Bit-for-bit reproducible builds with Dockerfile discussing deterministic timestamps and deterministic apt-get (original announcement). There was also an entire track of talks on Software Bill of Materials (SBOMs). SBOMs are an inventory for software with the intention of increasing the transparency of software components (the US National Telecommunications and Information Administration (NTIA) published a useful Myths vs. Facts document in 2021).
On our mailing list this month, Larry Doolittle was puzzled why the Debian verilator package was not reproducible [ ], but Chris Lamb pointed out that this was due to the use of Python's datetime.fromtimestamp over datetime.utcfromtimestamp [ ].
James Addison also was having issues with a Debian package: in this case, the alembic package. Chris Lamb was also able to identify the Sphinx documentation generator as the cause of the problem, and provided a potential patch that might fix it. This was later filed upstream [ ].
Anthony Harrison wrote to our list twice, first to introduce himself and his background and later to mention the increasing relevance of Software Bill of Materials (SBOMs):
As I am sure everyone is aware, there is a growing interest in [SBOMs] as a way of improving software security and resilience. In the last two years, the US through the Exec Order, the EU through the proposed Cyber Resilience Act (CRA) and this month the UK has issued a consultation paper looking at software security and SBOMs appear very prominently in each publication. [ ]

Tim Retout wrote a blog post discussing AlmaLinux in the context of CentOS, RHEL and supply-chain security in general [ ]:
Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What's more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?
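The Merkle tree mention is doing real work in that quote: anchoring many document hashes under one published root is what makes later tampering evident. Here is a generic sketch of the idea (not a description of CodeNotary's actual service); the SBOM contents below are placeholders.

# Build a Merkle root over a list of documents. Changing any leaf changes the
# root, so anything anchored under a published root cannot be silently altered.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

sboms = [b"sbom-for-package-a", b"sbom-for-package-b", b"sbom-for-package-c"]
print(merkle_root(sboms).hex())            # publish this root
sboms[1] = b"tampered-sbom"
print(merkle_root(sboms).hex())            # any change yields a different root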

Debian

F-Droid & Android

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb released versions 235 and 236; Mattia Rizzolo later released version 237. Contributions include:
  • Chris Lamb:
    • Fix compatibility with PyPDF2 (re. issue #331) [ ][ ][ ].
    • Fix compatibility with ImageMagick version 7.1 [ ].
    • Require at least version 23.1.0 to run the Black source code tests [ ].
    • Update debian/tests/control after merging changes from others [ ].
    • Don't write test data during a test [ ].
    • Update copyright years [ ].
    • Merged a large number of changes from others.
  • Akihiro Suda edited the .gitlab-ci.yml configuration file to ensure that versioned tags are pushed to the container registry [ ].
  • Daniel Kahn Gillmor provided a way to migrate from PyPDF2 to pypdf (#1029741).
  • Efraim Flashner updated the tool metadata for isoinfo on GNU Guix [ ].
  • FC Stegerman added support for Android resources.arsc files [ ], improved a number of file-matching regular expressions [ ][ ] and added support for Android dexdump [ ]; they also fixed a test failure (#1031433) caused by Debian's black package having been updated to a newer version.
  • Mattia Rizzolo:
    • updated the release documentation [ ],
    • fixed a number of Flake8 errors [ ][ ],
    • updated the autopkgtest configuration to only install aapt and dexdump on architectures where they are available [ ], making sure that the latest diffoscope release is a good fit for the upcoming Debian bookworm freeze.

reprotest Reprotest version 0.7.23 was uploaded to both PyPI and Debian unstable, including the following changes:
  • Holger Levsen improved a lot of documentation [ ][ ][ ], tidied the documentation as well [ ][ ], and experimented with a new --random-locale flag [ ].
  • Vagrant Cascadian adjusted reprotest to no longer randomise the build locale and use a UTF-8 supported locale instead [ ] (re. #925879, #1004950), and to also support passing --vary=locales.locale=LOCALE to specify the locale to vary [ ].
Separately, Vagrant Cascadian started a thread on our mailing list questioning the future development and direction of reprotest.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate, and this month we wrote a large number of such patches.

Testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, the following changes were made by Holger Levsen:
  • Add three new OSUOSL nodes [ ][ ][ ] and decommission the osuosl174 node [ ].
  • Change the order of listed Debian architectures to show the 64-bit ones first [ ].
  • Reduce the frequency that the Debian package sets and dd-list HTML pages update [ ].
  • Sort Tested suite consistently (and Debian unstable first) [ ].
  • Update the Jenkins shell monitor script to only query disk statistics every 230min [ ] and improve the documentation [ ][ ].

Other development work disorderfs version 0.5.11-3 was uploaded by Holger Levsen, fixing a number of issues with the manual page [ ][ ][ ].
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

26 February 2023

Russ Allbery: Review: An Informal History of the Hugos

Review: An Informal History of the Hugos, by Jo Walton
Publisher: Tor
Copyright: August 2018
ISBN: 1-4668-6573-3
Format: Kindle
Pages: 564
An Informal History of the Hugos is another collection of Jo Walton's Tor.com posts. As with What Makes This Book So Great, these are blog posts that are still available for free on-line. Unlike that collection, this series happened after Tor.com got better at tags, so it's much easier to find. Whether to buy it therefore depends on whether having it in convenient book form is worth it to you. Walton's previous collection was a somewhat random assortment of reviews of whatever book she felt like reviewing. As you may guess from the title, this one is more structured. She starts at the first year that the Hugo Awards were given out (1953) and discusses the winners for each year up through 2000. Nearly all of that discussion is about the best novel Hugo, a survey of other good books for that year, and, when other awards (Nebula, Locus, etc.) start up, comparing them to the winners and nominees of other awards. One of the goals of each discussion is to decide whether the Hugo nominees did a good job of capturing the best books of the year and the general feel of the genre at that time. There are a lot of pages in this book, but that's partly because there's a lot of filler. Each post includes all of the winners and (once a nomination system starts) nominees in every Hugo category. Walton offers an in-depth discussion of the novel in every year, and an in-depth discussion of the John W. Campbell Award for Best New Writer (technically not a Hugo but awarded with them and voted on in the same way) once those start. Everything else gets a few sentences at most, so it's mostly just lists, all of which you can readily find elsewhere if you cared. Personally, I would have omitted categories without commentary when this was edited into book form. Two other things are included in this book. Most helpfully, Walton's Tor.com reviews of novels in the shortlist are included after the discussion of that year. If you like Walton's reviews, this is great for all the reasons that What Makes This Book So Great was so much fun. Walton has a way of talking about books with infectious enthusiasm, brief but insightful technical analysis, and a great deal of genre context without belaboring any one point. They're concise and readable and never outlast my attention span, and I wish I could write reviews half as well. The other inclusion is a selection of the comments from the original blog posts. When these posts originally ran, they turned into a community discussion of the corresponding year of SF, and Tor included a selection of those comments in the book. Full disclosure: one of those comments is mine, about the way that cyberpunk latched on to some incorrect ideas of how computers work and made them genre conventions to such a degree that most cyberpunk takes place in a parallel universe with very different computer technology. (I suppose that technically makes me a published author to the tune of a couple of pages.) While I still largely agree with the comment, I blamed Neuromancer for this at the time, and embarrassingly discovered when re-reading it that I had been unfair. This is why one should never express opinions in public where someone might record them. Anyway, there is a general selection of comments from random people, but the vast majority of the comments are discussions of the year's short fiction by Rich Horton and Gardner Dozois. 
I understand why this was included; Walton doesn't talk about the short fiction, Dozois was a legendary SF short fiction editor and multiple Hugo winner, and both Horton and Dozois reviewed short fiction for Locus. But they don't attempt reviews. For nearly all stories under discussion, unless you recognized the title, you would have no idea even what sub-genre it was in. It's just a sequence of assertions about which title or author was better. Given that there are (in most years) three short fiction categories to the one novel category and both Horton and Dozois write about each category, I suspect there are more words in this book from Horton and Dozois than Walton. That's a problem when those comments turn into tedious catalogs. Reviewing short fiction, particularly short stories, is inherently difficult. I've tried to do a lot of that myself, and it's tricky to find something useful to say that doesn't spoil the story. And to be fair to Horton and Dozois, they weren't being paid to write reviews; they were just commenting on blog posts as part of a community conversation, and I doubt anyone thought this would turn into a book. But when read as a book, their inclusion in this form wasn't my favorite editorial decision. This is therefore a collection of Walton's commentary on the selections for best novel and best new writer alongside a whole lot of boring lists. In theory, the padding shouldn't matter; one can skip over it and just read Walton's parts, and that's still lots of material. But Walton's discussion of the best novels of the year also tends to turn into long lists of books with no commentary (particularly once the very-long Locus recommended list starts appearing), adding to the tedium. This collection requires a lot of skimming. I enjoyed this series of blog posts when they were first published, but even at the time I skimmed the short fiction comments. Gathered in book form with this light of editing, I think it was less successful. If you are curious about the history of science fiction awards and never read the original posts, you may enjoy this, but I would rather have read another collection of straight reviews. Rating: 6 out of 10
