
10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09
Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson
Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne
Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools effectively have to be run on their host architecture (in practice, this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also frustrated people attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as it tends not to be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.
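To make the mechanism concrete, here is a hedged sketch; the package name and the architecture-qualified dependency come from the text above, but the exact command-line invocation is an assumption for illustration:
# Illustrative only: invocation style inferred from the description above.
apt install cross-exe-wrapper:i386
# Wrap an ELF executable built for i386; on amd64 or i386 this runs it
# directly, elsewhere it transparently falls back to qemu.
i686-linux-gnu-cross-exe-wrapper ./some-i386-binary --version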

Analyzing binary package metadata, by Helmut Grohne
As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises of how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an exploratory analysis. Guillem also performed a cursory analysis and reported other problem categories, such as mismatching directory permissions for directories installed by multiple packages, and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera
Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure for now, as we wouldn't want to suddenly cause hundreds of packages with no tests to fail to build. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
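To illustrate the unittest change mentioned above, here is a minimal demonstration; in Python 3.12 the test runner exits with status 5 (NO_TESTS_RAN) when discovery finds nothing:
# Run discovery against a directory with no tests in it.
mkdir -p /tmp/empty-tests
python3.12 -m unittest discover -s /tmp/empty-tests
echo $?   # prints 5 on Python 3.12 (no tests ran); earlier versions exit 0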

Miscellaneous contributions
  • Carles prepared the update of the python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. The work is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its own dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming to have it functional end to end for a lightning talk at MiniDebConf Toulouse (November), and to get feedback on this proof of concept from the wider public.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually have that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild, reviewing and improving an MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but it still fails for every architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix the autopkgtest and a dependency problem of this package. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group on LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

3 October 2024

Mike Gabriel: Creating (a) new frontend(s) for Polis

After (quite) a summer break, here comes the 4th article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Have fun reading about Guido's work on Polis,
Mike
Table of Contents of the Blog Post Series
  1. Introduction
  2. Initial evaluation and adaptation
  3. Issues extending Polis and adjusting our goals
  4. Creating (a) new frontend(s) for Polis (this article)
  5. Current status and roadmap
4. Creating (a) new frontend(s) for Polis

Why a new frontend was needed
Our initial experiences of working with Polis, the effort required to implement more invasive changes, and the desire to iterate on changes more rapidly ultimately led to the decision to create a new foundation for frontend development that would be independent of, but compatible with, the upstream project. Our primary objective was thus not to develop just another frontend but rather to make frontend development more flexible and to facilitate experimentation and rapid prototyping of different frontends by providing abstraction layers and building blocks. This also implied developing a corresponding backend, since the Polis backend is tightly coupled to the frontend: it is neither intended to be used by third-party projects nor does it support cross-domain requests, due to the expectation of being embedded as an iframe on third-party websites. The long-term plan for achieving our objectives is to provide three abstraction layers for building frontends:

The Particiapp Project
Under the umbrella of the Particiapp project we have so far developed two new components: Both the participation frontend and backend are fully compatible with, and require, an existing Polis installation and can be run alongside the upstream frontend. More specifically, the administration frontend and common backend are required to administer conversations and send out notifications, and the statistics processing server is required for processing the voting results.

Particiapi server
For the backend, the Python language and the Flask framework were chosen as a technological basis, mainly due to developer mindshare, a large community and ecosystem, and the smaller dependency chain and maintenance overhead compared to Node.js/npm. Instead of integrating specific identity providers, we adopted the OpenID Connect standard as an abstraction layer for authentication, which allows delegating authentication either to a self-hosted identity provider or to a large number of existing external identity providers.

Particiapp Example Frontend
The experimental example frontend serves both as a test bed for the client library and as a tool for better understanding the needs of frontend designers. It also features a completely redesigned user interface and results visualization in line with our goals. Branded variants are currently used for evaluation and testing by the stakeholders. In order to simplify evaluation, development, testing and deployment, a Docker Compose configuration is made available which contains all necessary components for running Polis with our experimental example frontend. In addition, a development environment is provided which includes a preconfigured OpenID Connect identity provider (KeyCloak), an SMTP server with web interface (MailDev), and a database frontend (PgAdmin). The new frontend can also be tested using our public demo server.
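As a rough idea of the evaluation workflow described above (the repository location is a placeholder, not the project's published URL):
# Hypothetical: fetch the example frontend and start the whole stack.
git clone https://example.org/particiapp/example-frontend.git
cd example-frontend
docker compose up -d   # Polis, the example frontend, and supporting services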

24 September 2024

Vasudev Kamath: Note to Self: Enabling Secure Boot with UKI on Debian

Note

This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.

In this guide, we'll implement Secure Boot by taking full control of the device, removing preinstalled keys, and installing our own. For a comprehensive overview of the benefits and process, refer to this excellent post from rodsbooks.
Key Components
To implement Secure Boot, we need three essential keys:
  1. Platform Key (PK): The top-level key in Secure Boot, typically provided by the motherboard manufacturer. We'll replace the vendor-supplied PK with our own for complete control.
  2. Key Exchange Key (KEK): Used to sign updates for the Signatures Database and Forbidden Signatures Database.
  3. Database Key (DB): Used to sign or verify binaries (bootloaders, boot managers, shells, drivers, etc.).
There's also a Forbidden Signature Key (dbx), which is the opposite of the DB key. We won't be generating this key in this guide.
Preparing for Key Enrollment
Before enrolling our keys, we need to put the device in Secure Boot Setup Mode. Verify the status using the bootctl status command. You should see output similar to the following: [Image: UEFI Setup mode]
Generating Keys
Follow these instructions from the Arch Wiki to generate the keys manually. You'll need the efitools and openssl packages. I recommend using rsa:2048 as the key size for better compatibility with older firmware. After generating the keys, copy all .auth files to the /efi/loader/keys/<hostname>/ folder. For example:
  sudo ls /efi/loader/keys/chamunda
db.auth  KEK.auth  PK.auth
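For reference, here is a condensed sketch of the key-generation steps from the Arch Wiki approach mentioned above, using openssl and the efitools commands; the subject name is a placeholder, and KEK and db are produced the same way, each signed by the key above it in the hierarchy:
# Self-signed Platform Key (rsa:2048 for older-firmware compatibility)
openssl req -newkey rsa:2048 -nodes -keyout PK.key -new -x509 -sha256 \
    -days 3650 -subj "/CN=my Platform Key/" -out PK.crt
# Convert the certificate to an EFI signature list, then sign it into PK.auth
GUID=$(uuidgen)
cert-to-efi-sig-list -g "$GUID" PK.crt PK.esl
sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
# KEK.esl is signed with PK.key/PK.crt; db.esl with KEK.key/KEK.crt.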
Signing the Bootloader
Sign the systemd-boot bootloader with your new keys:
sbsign --key <path-to db.key> --cert <path-to db.crt> \
   /usr/lib/systemd/boot/efi/systemd-bootx64.efi
Install the signed bootloader using bootctl install. The output should resemble this: [Image: bootctl install]

Note

If you encounter warnings about mount options, update your fstab with the umask=0077 option for the EFI partition.

Verify the signature using sbverify: [Image: sbverify output]
Configuring UKI for Secure Boot
Update the /etc/kernel/uki.conf file with your key paths:
SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt
Signing the UKI Image
On Debian, use dpkg-reconfigure to sign the UKI image for each kernel:
sudo dpkg-reconfigure linux-image-$(uname -r)
# Repeat for other kernel versions if necessary
You should see output similar to this:
sudo dpkg-reconfigure linux-image-$(uname -r)
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi
Enrolling Keys in Firmware
Use systemd-boot to enroll your keys:
systemctl reboot --boot-loader-menu=0
Select the enroll option with your hostname in the systemd-boot menu. After key enrollment, the system will reboot into the newly signed kernel. Verify with bootctl: [Image: Secure Boot enabled]
Dealing with Lockdown Mode
Secure Boot enables lockdown mode on distro-shipped kernels, which restricts certain features like kprobes/BPF and DKMS drivers. To avoid this, consider compiling the upstream kernel directly, which doesn't enable lockdown mode by default. As Linus Torvalds has stated, "there is no reason to tie Secure Boot to lockdown LSM." You can read more about Torvalds' opinion on tying UEFI Secure Boot to lockdown.
Next Steps
One thing that remains is automating the signing of systemd-boot on upgrade, which is currently a manual process. I'm exploring dpkg triggers for achieving this, and if I succeed, I will write a new post with details.
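Until then, one hypothetical stopgap (not the dpkg-trigger approach being explored here, and with placeholder paths) would be an APT hook that re-signs the bootloader after package operations:
#!/bin/sh
# /usr/local/sbin/resign-systemd-boot (hypothetical helper), wired up via
# /etc/apt/apt.conf.d/99-sign-systemd-boot containing the line:
#   DPkg::Post-Invoke { "/usr/local/sbin/resign-systemd-boot || true"; };
set -e
sbsign --key /path/to/db.key --cert /path/to/db.crt \
    --output /usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed \
    /usr/lib/systemd/boot/efi/systemd-bootx64.efi
bootctl update   # bootctl prefers the .efi.signed variant when present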
Acknowledgments
Special thanks to my anonymous colleague who provided invaluable assistance throughout this process.

21 September 2024

Gunnar Wolf: 50 years of queries

This post is a review for Computing Reviews of "50 years of queries", an article published in Communications of the ACM.
The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (which evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language. The article begins by giving background on information processing before the advent of today's database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd's relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that starts the clock on 50 years of relational databases. The second part of the article tells of the growth of the Structured English Query Language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, that is, Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite. The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a winning concept, as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s. This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap of half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

20 September 2024

Sahil Dhiman: Educational and Research Institutions With Own ASN in India

Another one of the ASN lists. This turned out longer than I expected (which is good). If you want to briefly understand what an ASN is, my Personal ASNs From India post carries an introduction to it. Now, here are the Educational and Research Institutions with their own ASN in India, which I could find: Special Mentions Some observations: Let me know if I'm missing someone.

17 September 2024

Russ Allbery: Review: The Book That Broke the World

Review: The Book That Broke the World, by Mark Lawrence
Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366
The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory. At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels that come to visit her brother. I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together. The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved. I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes. One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it. In The Book That Broke the World, sadly, this is mostly gone.
The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy. This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain. It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better. It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell. The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance. The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. 
If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly. The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced. That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the library has me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much. If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending. Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025. Rating: 7 out of 10

11 September 2024

Jamie McClelland: MariaDB mystery

I keep getting an error in our backup logs:
Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp
It's a WordPress database having trouble dumping the options table. The error log has a corresponding message:
Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)
The Internet is full of suggestions, almost all of which focus on either the network connection between the client and the server or the FEDERATED plugin. We aren't using the FEDERATED plugin, and this error happens when connecting via the socket. Check it out - what is better than a consistently reproducible problem! It happens if I try to select all the values in the table:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It happens when I specify one specific offset:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It happens if I specify the field name explicitly:
root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It doesn t happen if I specify the key field:
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
+-----------+
| option_id |
+-----------+
|  16296351 |
+-----------+
root@mysql007:~#
It does happen if I specify the value field:
root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It doesn t happen if I query the specific row by key field:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-----------+----------------------+--------------+----------+
| option_id | option_name          | option_value | autoload |
+-----------+----------------------+--------------+----------+
|  16296351 | z_taxonomy_image8905 |              | yes      |
+-----------+----------------------+--------------+----------+
root@mysql007:~#
Hm. Surely there is some funky non-printing character in that option_value, right?
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                         0 |
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
| HEX(option_value) |
+-------------------+
|                   |
+-------------------+
root@mysql007:~#
Resetting the value to an empty value doesn't make a difference:
root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
Deleting the row in question causes the error to specify a new offset:
root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1401
root@mysql007:~#
If I put the record I deleted back in, we return to the old offset:
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp 
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
root@mysql007:~#
I'm losing my little mind. Let's get drastic and create a whole new table, copying over the data while delicately working around the deadly offset:
root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp 
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp 
--- There are only 33 more records; not sure how to specify an unlimited limit, but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp 
Now let's make sure all is working properly:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;
Now let's examine which row we are missing:
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp 
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#
Wait, what? I was expecting option_id 16296351. Oh, now we are getting somewhere. And I see my mistake: when using offsets, you need to use ORDER BY or you won't get consistent results.
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#
Now that I have the correct row, what is in it?
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
Well, that makes a lot more sense. Let's start over with examining the value:
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                  50814767 |
+---------------------------+
root@mysql007:~#
Wow, that's a lot of characters. If it were a book, it would be 35,000 pages long (I just discovered this site). It's a LONGTEXT field, so it should be able to handle it. But now I have a better idea of what could be going wrong. The name of the option is rewrite_rules, so it seems like something is going wrong with the generation of that option. I imagine there is some tweak I can make to allow MariaDB to cough up the value (read_buffer_size? tmp_table_size?). But I'll start with checking in with the database owner, because I don't think 35,000 pages of rewrite rules is appropriate for any site.
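One plausible knob to try first (my assumption, not a confirmed fix) is max_allowed_packet, since a single ~50 MB value can exceed the default packet size on both the server and the client:
# Check the current limit, then raise it for new connections (here to 128 MiB).
mysql --protocol=socket -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
mysql --protocol=socket -e 'SET GLOBAL max_allowed_packet = 134217728'
# mysqldump keeps its own client-side copy of the setting:
mysqldump --max-allowed-packet=134217728 ic_wp > /dev/null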

10 September 2024

Ben Hutchings: FOSS activity in August 2024

7 September 2024

Sergio Durigan Junior: Chatting in the 21st century

Several people have been asking me to explain and/or write about my solution for chatting nowadays. I realize that the current scenario is much more complex than, say, 10 or 20 years ago. Back then, this post would probably be more about the IRC client I used than about different chatting technologies. I have also spent a non-trivial amount of time setting things up the way I want, so I understand that it's about time to write about my setup, not only because I think it can be helpful to others, but also because I would like to document things for myself.

The backbone: Matrix
I chose to use Matrix as the place where I integrate everything. Despite there being some heavy (and justified) criticism of the protocol itself, it serves me well for what I need right now. Obviously, I don't like the fact that I have to provide Matrix and all of its accompanying bridges a VPS with 4GB of RAM and 3 vCPUs, but I think that that ship has sailed, unfortunately. In an ideal world, I would be using XMPP and dedicating only a fraction of the resources I'm using today to have a full chat system. And since I have been running my personal XMPP server for more than a decade now, I did try to find a solution that would allow me to keep using it, but unfortunately the protocol became almost a hobbyist thing, so there's that.

A few disclaimers
I self-host everything, including my Matrix server. Much of what I did won't work if you don't self-host Matrix, so keep that in mind. This won't be a post teaching you how to deploy the services. My intention is to describe what I use and for what purpose. Also, as much as I try to use Debian packages for everything I do, I opted to deploy all services using a community-maintained Ansible playbook which is very well written and organized: matrix-docker-ansible-deploy. Last but not least, as I said above, you will likely need a machine with a good amount of RAM, CPU and storage, especially if you deploy Synapse as your Matrix homeserver (which is what I recommend if you plan to use the bridges I'll mention). My current VPS has 4GB of RAM, 3 vCPUs and 80GB of storage (of which I'm currently using approximately 55GB).

Problem #1: my Matrix client(s)
There are a lot of clients that can talk the Matrix protocol, but most of them are either web clients or GUI programs. I live on the terminal, more specifically inside Emacs, so I settled for the amazing ement.el Emacs mode. It works surprisingly well, but unfortunately doesn't support end-to-end encryption out of the box; for that, you have to hook it up with pantalaimon. Unfortunately, that project seems abandoned and therefore I don't recommend you to use it. I don't use it myself. When I have to reply to some E2E-encrypted message from another user, I go to my web browser and use my self-hosted Element client. It's a nuisance, but one that I'm willing to accept because of security concerns. If you're into web clients and don't want to use Element (because it is heavy), you can try Cinny. It's lightweight and supports a decent set of features. If you're a terminal lover but don't use Emacs, you may want to try gomuks or iamb.

Problem #2: IRC bridging
There are basically two types of IRC bridges for Matrix:
  • The regular and most used matrix-appservice-irc. This bridge takes Matrix to IRC (think of IRC users with the [m] suffix appended to their nicknames), and is what matrix.org and other big homeservers (including matrix.debian.social) use. It's a complex service which allows thousands of Matrix users to connect to IRC networks, but it unfortunately has complex problems and is only worth using if you intend to host a community server.
  • A bouncer-like bridge called Heisenbridge. This is what I use personally. It takes IRC to Matrix, which means that people on IRC will not know that you're using Matrix. This bridge is much simpler, and because it acts like a bouncer it's pretty much impossible for it to cause problems with the IRC network.
Due to the fact that I sometimes like to use other IRC clients, I still run a regular ZNC bouncer, and I use Heisenbridge to connect to my ZNC. This means that I can use, e.g., ERC inside Emacs and my Matrix bridge at the same time. But you don't necessarily need to run another bouncer; you can simply use Heisenbridge and connect directly to the IRC network(s) you want. A word of caution, though: unlike ZNC, Heisenbridge doesn't support per-user configuration when you use it in bouncer mode. This is the reason why you need to self-host it, and why it's not possible to offer the service to other users (they would have access to your IRC network configuration otherwise). It's also worth talking about logs. I find that keeping logs of everything that goes on on IRC has saved me a bunch of times, and so I find it really important to continue doing that. Unfortunately, neither ement.el nor Element supports logging things out of the box (at least not that I know of). This is also one of the reasons why I still keep my ZNC around: I configure it to log everything.

Problem #3: Telegram
I don't use Telegram myself, but unfortunately several people from the Debian community do, especially in Brazil. There is a whole Debian community on Telegram, and I wanted to be able to bridge our Debian Matrix channels to their Telegram counterparts. I am currently using mautrix-telegram for that, and it's working great. You need someone with a Telegram account to configure their credentials so that the bridge can connect to it, but afterwards it's really easy to bridge channels together.

Problem #4: GitLab webhooks
Something else I wanted to be able to do was to receive notifications regarding new issues, merge requests and other activities from Salsa. For this, I'm using maubot, which is awesome and has a huge list of plugins. I'm using the gitlab one.

Final thoughts
Overall, I'm satisfied with the setup I have now. It has certainly taken some time and effort to find the right tool for each problem I needed to solve, and I still feel like there are some rough edges to soften (like the fact that my Emacs client doesn't support E2E encryption out of the box, or the whole logging situation), but otherwise things are working fine and I haven't had any big problems with the deployment. You do have to be much more careful about stuff (for example, when I installed an unrelated service that hijacked my Apache configuration and made Matrix's federation silently stop working), though. If you have more specific questions about any part of my setup, shoot me an email and I'll do my best to help. Happy chatting!

4 September 2024

Reproducible Builds: Reproducible Builds in August 2024

Welcome to the August 2024 report from the Reproducible Builds project! Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. LWN: The history, status, and plans for reproducible builds
  2. Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
  3. Distribution news
  4. Mailing list news
  5. diffoscope
  6. Website updates
  7. Upstream patches
  8. Reproducibility testing framework

LWN: The history, status, and plans for reproducible builds
The free software newspaper of record, Linux Weekly News, published an in-depth article based on Holger Levsen's talk, Reproducible Builds: The First Eleven Years, which was presented at the recent DebConf24 conference in Busan, South Korea. Titled The history, status, and plans for reproducible builds and written by Jake Edge, LWN's article not only summarises Holger's talk and clarifies its message but links to external information as well. Holger's original talk can also be watched on the DebConf24 webpage (a direct .webm link and his HTML slides are available as well). There are also a significant number of comments on LWN's page. Holger Levsen also headed a scheduled discussion session at DebConf24 on Preserving *other* build artifacts, addressing a topic where a number of Debian packages are (or would like to) produce results that are neither the .deb files, the build logs nor the logs of CI tests. This is an issue for reproducible builds, as this fourth type of build artifact is typically shipped within the binary .deb packages, and is invariably non-deterministic, thus making the .deb files unreproducible. (A direct .webm link and HTML slides are available.)

Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
Peter Eisentraut wrote a detailed blog post on the subject of "The new PostgreSQL 17 make dist". Like many projects, the PostgreSQL database has previously pre-built parts of its GNU Autotools build system: the reason for this is a mix of convenience and traditional practice. Peter astutely notes that this arrangement in the build system is quite tricky, as:
You need to carefully maintain the different states of "clean source code", "partially built source code", and "fully built source code", and the commands to transition between them.
However, Peter goes on to mention that:
a lot more attention is nowadays paid to the software supply chain. There are security and legal reasons for this. When users install software, they want to know where it came from, and they want to be sure that they got the right thing, not some fake version or some version of dubious legal provenance.
And he cites the XZ Utils backdoor as a reason to care about transparent and reproducible ways of distributing and communicating a source tarball and provenance. Because of this, intermediate build artifacts are henceforth essentially disallowed from PostgreSQL distribution tarballs.

Distribution news
In Debian this month, 30 reviews of Debian packages were added, 17 were updated and 10 were removed, adding to our knowledge about identified issues. One issue type was added by Chris Lamb, too. [ ] In addition, an issue was filed to update the Salsa CI pipeline (used by thousands of Debian packages) to no longer test for reproducibility with reprotest's build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org.
In Arch Linux this month, Jelle van der Waa published a short blog post on the topic of investigating creating reproducible images with mkosi, motivated by the desire to make it possible for anyone to recreate the official Arch cloud image bit-by-bit identical on their own machine as per [the] reproducible builds definition. In addition, Jelle filed a patch for pacman, the Arch Linux package manager, to respect the SOURCE_DATE_EPOCH environment variable when installing a package.
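For context, SOURCE_DATE_EPOCH is the Reproducible Builds convention for pinning timestamps embedded in build output; a typical producer-side usage looks like:
# Pin embedded timestamps to the date of the last commit, so that rebuilds
# of the same source embed the same time.
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)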
In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.
In Android news, the IzzyOnDroid project added 49 new rebuilder recipes and now features 256 total reproducible applications, representing 21% of the total offerings in the repository. IzzyOnDroid is an F-Droid-style repository for Android apps: applications in this repository are official binaries built by the original application developers, taken from their respective repositories (mostly GitHub).

Mailing list news
From our mailing list this month:
  • Bernhard M. Wiedemann posted a brief message to the list with some helpful information regarding nondeterminism within Rust binaries, positing the use of the codegen-units = 16 default and resulting in a bug being filed in the Rust issue tracker. [ ]
  • Bernhard also wrote to the list, following up to a thread in November 2023, on attempts to make the LibreOffice suite of office applications build reproducibly. In the thread from this month, Bernhard could announce that the four patches previously mentioned have landed in LibreOffice upstream.
  • Fay Stegerman linked the mailing list to a thread she made on the Signal issue tracker regarding whether "device-specific binaries [can] ever be considered meaningfully reproducible". In particular: "the whole part about allow[ing] multiple third parties to come to a consensus on a correct result breaks down completely when correct is device-specific and not something everyone can agree on." [ ]
  • Developer kpcyrd posted an update about the source code indexing project whatsrc.org, announcing that it is now importing packages from live-bootstrap ("a usable Linux system [that is] created with only human-auditable, and wherever possible, human-written, source code") into its database of provenance data.
  • Lastly, Mechtilde Stehmann posted an update to an earlier thread about how Java builds are not reproducible on the armhf architecture, enquiring how they might gain temporary access to such a machine in order to perform some deeper testing. [ ]

diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb released versions 274, 275, 276 and 277, uploaded these to Debian, and made the following changes as well:
  • New features:
    • Strip ANSI escapes (usually colour codes) from the output of the Procyon Java decompiler. [ ]
    • Factor out a method for stripping ANSI escapes. [ ]
    • Append output from dumppdf(1) in more cases, avoiding situations where we fallback to a binary diff. [ ]
    • Add support for version 2.212 of Perl's IO::Compress::Zip. [ ]
  • Bug fixes:
    • Also catch RuntimeError exceptions when importing the PyPDF library so that it, or, crucially, its transitive dependencies, cannot cause diffoscope to traceback at runtime and build time. [ ]
    • Do not call marshal.load() on precompiled Python bytecode, as it is, alas, inherently unsafe. Replace it for now with a brief summary of the code section of the .pyc file. [ ][ ]
    • Don t include excessive debug output when calling dumppdf(1). [ ]
  • Testsuite-related changes:
    • Don't bother to check the version number in test_python.py: the fixture for this test is fixed. [ ][ ]
    • Update test_zip test fixtures and definitions to support new changes to the Perl IO::Compress library. [ ]
In addition, Mattia Rizzolo updated the available architectures for a number of test dependencies [ ], Sergei Trofimovich fixed an issue to avoid diffoscope crashing when hashing directory symlinks [ ], and Vagrant Cascadian proposed GNU Guix updates for diffoscope versions 275, 276 and 277.

Website updates
There were a rather substantial number of improvements made to our website this month, including:
  • Alba Herrerias:
    • Substantially extend the guidance on the Contribute page. [ ]
  • Chris Lamb:
    • Set the future: true configuration value so we render all files and documents on the website, regardless of whether they have a date property in the future. After all, we don't re-generate the website on a timer, and we have other ways of making unpublished draft posts. [ ][ ]
  • Fay Stegerman:
  • hulkoba:
  • kpcyrd:
  • Mattia Rizzolo:
  • Pol Dellaiera:

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:
  • Temporarily install the openssl-provider-legacy package for the Debian unstable environments for running diffoscope due to Debian bug #1078944. [ ][ ][ ][ ]
  • Mark Debian armhf architecture nodes as being down due to their proxy being down. [ ][ ]
  • Detect proxy failures. [ ][ ][ ]
  • Run the index-buildinfo for the builtin-pho script with the -q switch. [ ]
  • Disable all Arch Linux reproducible jobs. [ ]
In addition, Mattia Rizzolo updated the website configuration to install the ruby-jekyll-sitemap package as it is now used in the website [ ], Roland Clobus updated the script to build Debian live images to treat openQA issues as warnings [ ], and Vagrant Cascadian marked the cbxi4b node as down [ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

30 August 2024

Russ Allbery: Review: Thornhedge

Review: Thornhedge, by T. Kingfisher
Publisher: Tor
Copyright: 2023
ISBN: 1-250-24410-2
Format: Kindle
Pages: 116
Thornhedge is a fantasy novella by T. Kingfisher, the pen name that Ursula Vernon uses for her adult writing. It won the 2024 Hugo Award for best novella. No matter how much my brain wants to misspell the title, it is a story about a hedge, not a Neolithic earthwork.
The fairy was the greenish-tan color of mushroom stems and her skin bruised blue-black, like mushroom flesh. She had a broad, frog-like face and waterweed hair. She was neither beautiful nor made of malice, as many of the Fair Folk are said to be.
There is a princess asleep in a tower, surrounded by a wall of thorns. Toadling's job is to keep anyone from foolishly breaking in. At first, it was a constant struggle and all that she could manage, but with time, the flood of princes slowed to a trickle. A road was built and abandoned. People fled. There was a plague. With any luck, the tower was finally forgotten. Then a knight shows up. Not a very rich knight, nor a very successful knight. Just a polite and very persistent knight who wants to get into the tower that Toadling does not want him to get into. As you might have guessed, this is a Sleeping Beauty retelling. As you may have also guessed from the author, or from the cover text that says "not all curses should be broken," this version is a bit different. How and why it departs from the original is a surprise that slowly unfolds over the course of the story, in parallel to a delicate, cautious, and delightfully kind-hearted conversation between the knight and the fairy. If you have read a T. Kingfisher story before, particularly one of her fractured fairy tales, you know what to expect. Toadling is one of her typical well-meaning, earnest, slightly awkward protagonists who is just trying to do the right thing in a confusing world full of problems and dangers. She's constantly overwhelmed and yet she keeps going, because what else is there to do. Like a lot of Kingfisher's writing, it's a story about quiet courage from someone who doesn't consider herself courageous. One of the twists this time is that the knight is a character from a similar vein: doggedly unwilling to leave any problem alone, but equally determined to try to be kind. The two of them together make for a story with a gentle and rather melancholy tone. We do, eventually, learn the whole backstory of the tower, the wall of thorns, and Toadling. There is a god, a rather memorable one, who is frustratingly cryptic in the way that gods are. There are monsters who are more loving than most humans. There are humans who turn out to be surprisingly decent when it matters. And, like most of Kingfisher's writing, there is a constant awareness of how complicated the world is, how full it is of people who are just trying to get through each day, and how heavy of burdens people can shoulder when they don't see another way. This story pulled me right in. It is not horror, although there are a few odd bits like there always are in Kingfisher stories. Your largest risk as a reader is that it might make you cry if stories about earnest people doing their best in overwhelming situations hit you that way. My primary complaint is that there was nowhere near enough ending for me. After everything I learned about the characters, I wanted to spend some time with them outside of the bounds of the story. Kingfisher points the reader in a direction and then leaves the rest to your imagination, and I can see why she chose that story construction, but I wanted more catharsis than I got. That complaint aside, this is quintessential T. Kingfisher, and I am unsurprised that it won a Hugo. If you've read any of her other fractured fairy tales, or the 2023 Hugo winner for best novel, you know the sort of stories she tells, and you probably know whether you will like this. I am one of the people who like this. Rating: 8 out of 10

28 August 2024

Debian Brasil: Debian Day 2024 in Belém and Poços de Caldas - Brazil

by Paulo Henrique de Lima Santana (phls) Below we list the links to the reports and news from Debian Day 2024 held in Belém and Poços de Caldas:

19 August 2024

Matthew Garrett: Client-side filtering of private data is a bad idea

(The issues described in this post have been fixed; I have not exhaustively researched whether any other issues exist.)

Feeld is a dating app aimed largely at alternative relationship communities (think "classier Fetlife" for the most part), so unsurprisingly it's fairly popular in San Francisco. Their website makes the claim:

Can people see what or who I'm looking for?
No. You're the only person who can see which genders or sexualities you're looking for. Your curiosity and privacy are always protected.


which is based on you being able to restrict searches to people of specific genders, sexualities, or relationship situations. This sort of claim is one of those things that just sits in the back of my head worrying me, so I checked it out.

First step was to grab a copy of the Android APK (there are multiple sites that scrape them from the Play Store) and run it through apk-mitm - Android apps by default don't trust any additional certificates in the device certificate store, and also frequently implement certificate pinning. apk-mitm pulls apart the apk, looks for known http libraries, disables pinning, and sets the appropriate manifest options for the app to trust additional certificates. Then I set up mitmproxy, installed the cert on a test phone, and installed the app. Now I was ready to start.
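For anyone who wants to reproduce this kind of inspection against an app they're authorized to test, the setup looks roughly like the following. This is a sketch of the generic workflow, not anything Feeld-specific; the APK filename and the test-device details are assumptions:

# Patch the APK so it trusts user-added CA certificates and has known
# certificate pinning implementations disabled; apk-mitm re-signs the
# result as app-patched.apk.
npx apk-mitm app.apk

# Run mitmproxy on the analysis machine (it listens on port 8080 by
# default), point the test phone's Wi-Fi proxy at it, and visit
# http://mitm.it on the phone to install the mitmproxy CA certificate.
mitmproxy

# Install the patched build on the test phone.
adb install app-patched.apk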

What became immediately clear was that the app was using graphql for its queries. What was a little more surprising is that it appears to have been implemented such that there's no server state - when browsing profiles, the client requests a batch of profiles along with a list of profiles that the client has already seen. This has the advantage that the server doesn't need to keep track of a session, but it also means that queries just keep getting larger and larger the more you swipe. I'm not a web developer; I have absolutely no idea what the tradeoffs are here, so I point this out as a point of interest rather than anything else.

Anyway. For people unfamiliar with graphql, it's basically a way to query a database and define the set of fields you want returned. Let's take the example of requesting a user's profile. You'd provide the profile ID in question, and request their bio, age, rough distance, status, photos, and other bits of data that the client should show. So far so good. But what happens if we request other data?

graphql supports introspection to request a copy of the database schema, but this feature is optional and was disabled in this case. Could I find this data anywhere else? Pulling apart the apk revealed that it's a React Native app, so effectively a framework for allowing writing of native apps in Javascript. Sometimes you'll be lucky and find the actual Javascript source there, but these days it's more common to find Hermes blobs. Fortunately hermes-dec exists and does a decent job of recovering something that approximates the original input, and from this I was able to find various lists of database fields.
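If you want to attempt the same extraction, the rough sequence is something like this. A sketch only: the bundle path is the usual React Native convention, and hermes-dec's exact command names may differ between versions:

# APKs are just zip files; the compiled Javascript normally ships as a
# Hermes bytecode bundle under assets/.
unzip app.apk assets/index.android.bundle

# Decompile the Hermes bytecode into approximate Javascript, then search
# the output for likely database field names.
hbc-decompiler assets/index.android.bundle out.js
grep -iE 'profile|gender|looking' out.js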

So, remember that original FAQ statement, that your desires would never be shown to anyone else? One of the fields mentioned in the app was "lookingFor", a field that wasn't present in the default profile query. What happens if we perform the incredibly complicated hack of exporting a profile query as a curl statement, add "lookingFor" into the set of requested fields, and run it?
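The "incredibly complicated hack" really is about that simple. Something like the following, where the endpoint, operation shape, auth header, and every field name other than lookingFor are invented for illustration:

# Replay the captured profile query with one extra field added to the
# selection set. A real request would reuse the session headers that
# mitmproxy recorded.
curl 'https://api.example.com/graphql' \
  -H 'Authorization: Bearer <token captured by mitmproxy>' \
  -H 'Content-Type: application/json' \
  --data '{"query": "query Profile($id: ID!) { profile(id: $id) { displayName age bio lookingFor } }", "variables": {"id": "<profile id>"}}'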

Oops.

So, point 1 is that you can't simply protect data by having your client not ask for it - private data must never be released. But there was a whole separate class of problem that was even more obvious.

Looking more closely at the profile data returned, I noticed that there were fields there that weren't being displayed in the UI. Those included things like "ageRange", the range of ages that the profile owner was interested in, and also whether the profile owner had already "liked" or "disliked" your profile (which means a bunch of the profiles you see may already have turned you down, but the app simply didn't show that). This isn't ideal, but what was more concerning was that profiles that were flagged as hidden were still being sent to the app and then just not displayed to the user. Another example of this is that the app supports associating your profile with profiles belonging to partners - if one of those profiles was then hidden, the app would stop showing the partnership, but was still providing the profile ID in the query response and querying that ID would still show the hidden profile contents.

Reporting this was inconvenient. There was no security contact listed on the website or in the app. I ended up finding Feeld's head of trust and safety on Linkedin, paying for a month of Linkedin Pro, and messaging them that way. I was then directed towards a HackerOne program with a link to terms and conditions that 404ed, and it took a while to convince them I was uninterested in signing up to a program without explicit terms and conditions. Finally I was just asked to email security@, and successfully got in touch. I heard nothing back, but after prompting was told that the issues were fixed - I then looked some more, found another example of the same sort of issue, and eventually that was fixed as well. I've now been informed that work has been done to ensure that this entire class of issue has been dealt with, but I haven't done any significant amount of work to ensure that that's the case.

You can't trust clients. You can't give them information and assume they'll never show it to anyone. You can't put private data in a database with no additional ACLs and just rely on nobody ever asking for it. You also can't find a single instance of this sort of issue and fix it without verifying that there aren't other examples of the same class. I'm glad that Feeld engaged with me earnestly and fixed these issues, and I really do hope that this has altered their development model such that it's not something that comes up again in future.

(Edit to add: as far as I can tell, pictures tagged as "private" which are only supposed to be visible if there's a match were appropriately protected, and while there is a "location" field that contains latitude and longitude this appears to only return 0 rather than leaking precise location. I also saw no evidence that email addresses, real names, or any billing data was leaked in any way)


9 August 2024

Kalyani Kenekar: One Backpack, One Passport: My First Solo Trip

Planning A Self-Organized Solo Trip You know the movie Queen? In that movie, the actor Kangana Ranaut plays the role of Rani Mehra, a 24-year-old Punjabi woman: a simple, homely girl who was always reliant on her family. Similar to Rani, I too rarely ventured out without my parents and often needed my younger sibling by my side. Inspired by her transformation, I decided it was time to take control of my own story and discover who I truly am. Queen movie picture of Kangana

Trip Requirements

My First Passport The journey began with a significant first step: obtaining my first passport. Never having had one before, I scheduled the nearest available interview date, June 29, 2022. This meant traveling to Solapur, a city 309 km from my hometown, accompanied by my father. After successfully completing the interview, I received my passport on July 14, 2022.

Select A Country, Booking Flights And Accommodation Excited and ready to embark on my adventure, I planned a trip to Albania and booked the flight tickets. Why? I had heard from friends that it was a beautiful European country with beaches and other attractions; importantly, it didn't require a visa for Indian citizens and was more affordable than other European destinations. Before heading to Albania, I planned an overnight stop in Abu Dhabi with a transit visa, thanks to a friend who knew the process for obtaining it. Some of my friends were also traveling to Europe at the same time, quite close to my own plans, but I only realized that after the trip.

Day 1, Starting The Experience On July 20, 2022, I started my journey by traveling from Pune, Maharashtra, to Delhi, where my brother lives. He came to see me off at the airport, adding a touch of warmth and support to the beginning of my solo adventure. Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed at a backpacker hostel named Zostel, Paharganj, Delhi to rest. During my stay, I noticed that many travelers at the hostel carried rucksacks, which sparked a desire in me to get one for my own trip to Europe. Up until then, I had always shopped with my mom and had never bought anything on my own. Inspired by the travelers, I set out to find a suitable rucksack. I traveled alone by metro from Paharganj to Rohini to visit a Decathlon store, where I purchased a 50-liter rucksack. This was a significant step in preparing for my European adventure and marked a milestone in my journey of self-reliance. Rucksack description tag Kalyani's backpack

Day 2, Flying To Abu Dhabi The following day, July 21, 2022, I had a flight to Abu Dhabi. I spent the night at the hostel to rest before my journey. On the day of the flight, I needed to reach the airport by 3 PM, and a friend kindly came to drop me off. With my rucksack packed and excitement building, I was ready for the next leg of my adventure. When we arrived at the airport, my friend saw me off, marking the start of my international journey. With mom-made spices, chutneys, and chili flakes packed for comfort, I completed my immigration process in about two and a half hours. I then settled at the gate for my flight, feeling a mix of excitement and anxiety as thoughts raced through my mind. Mom-made spices Passport and boarding pass To ease my nerves, I struck up a conversation with a man seated nearby who was also traveling to Abu Dhabi for work. He provided helpful information about safety and transportation in Abu Dhabi, which reassured me. With the boarding process complete and my anxiety somewhat eased, I found my window seat on the flight and settled in, excited for the journey ahead. Next to me was a young man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining factory. We had an engaging conversation about work culture in Abu Dhabi and recruitment from India. Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and began finding my way to the hotel Premier Inn Abu Dhabi, which was in the airport area. To my surprise, I ran into the same man from the flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly accepted, since navigating an unfamiliar city felt safer with an acquaintance, however brief. At the hotel gate, he asked if I had local currency (Dirhams) for payment, as sometimes online transactions can fail. That hadn't crossed my mind, and I realized I might be left stranded if a transaction failed. Recognizing his help as a godsend, I asked if he could lend me some Dirhams, promising to transfer the amount later. He kindly assured me I could pay him back once I reached the hotel room. With that relief, I checked into the hotel, feeling deeply grateful for the unexpected assistance, and transferred the money to him after getting to my room. Dirham money Hotel room Kalyani in hotel room

Day 3, Flying And Arriving In Tirana Once in the hotel room, I found it hard to sleep, anxious about waking up on time for my flight. I set an alarm to wake up early, but my subconscious kept me alert, and I woke up before the alarm went off. I freshened up and went down for breakfast, where I found some vegetarian options like idli-sambar and bread with butter, along with some morning tea. After breakfast, I headed back to the airport, ready to catch my flight to my final destination: Tirana, Albania. Breakfast at hotel Airport area I reached Tirana, Albania after a six-hour flight, feeling exhausted and suffering from a headache. The air pressure had blocked my ears, and jet lag added to my fatigue. After collecting my checked luggage, I headed to the first ATM at the airport. Struggling to insert my card, I asked a nearby gentleman for help. He tried his best, but my card got stuck inside the machine. Panic set in as I worried about how I would survive without money. Taking a deep breath, I found an airport employee and explained the situation. The gentleman stayed with me, offering support and repeatedly apologizing for his mistake. However, it wasn't his fault; the ATM was out of order, which I hadn't noticed. My focus was solely on retrieving my ATM card. The airport employee worked diligently, using a hairpin to carefully extract my card. Finally, the card was freed, and I felt an immense sense of relief, grateful for the help of these kind strangers. I used another ATM, successfully withdrew money, and then went to an airport mobile SIM shop to buy a new SIM card for local internet and connectivity. SIM plans

Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country I had booked a stay at a backpacker hostel near the city center of Tirana. After sorting out the ATM and SIM card issues, I searched for a bus or any transport to get there. It was quite late, around 8:30 PM, and being in a new city, I was in a hurry. I saw a bus about to leave the airport, stopped it, and asked if it went to the city center. They gave me the green light, so I boarded the airport service bus and reached the city center. Feeling very tired, I discovered that the hostel was about an hour and a half away on foot. Deciding to take a cab, I faced a challenge as the driver couldn't understand my English or accent. Using a mobile translator to convert my address from English to Albanian, I finally communicated my destination to him. With that sorted out, I headed to the Blue Door Backpacker Hostel, arrived around 9 PM, relieved to have finally reached my destination, and checked in. Hostel gate Street in Tirana I found my top bunk bed, only to realize I had booked a mixed-gender dormitory. This detail had completely escaped my notice during the booking process, and I felt unsure about how to handle the situation. Coincidentally, my experience mirrored what Kangana's character faced in the movie Queen. Feeling acidic due to an empty stomach and the exhaustion of heavy traveling, I wasn't up to cooking in the hostel's kitchen, so I asked the front desk about the nearest restaurant. It was nearly 9:30 PM, and the streets were deserted. To avoid any mishaps like in the movie Queen, I kept my passport securely locked in my bag, ensuring it wouldn't fall victim to theft. Venturing out for dinner, I felt uneasy on the quiet streets. I eventually found a restaurant recommended by the hostel, but the menu was almost entirely non-vegetarian. I struggled to ask about vegetarian options and was uncertain whether any dishes contained eggs, as some people consider eggs to be vegetarian. Feeling frustrated and unsure, I left the restaurant without eating. I noticed a nearby grocery store that was about to close and managed to get a few extra minutes to shop. I bought some snacks, wafers, milk, and tea bags (though I couldn't find tea powder to make Indian-style tea). Returning to the hostel, I made do with wafers, cookies, and milk for dinner. That day was incredibly tough for me; filled with exhaustion and the struggle of being in a new country, I was on the verge of tears. I made a video call home before sleeping on the top bunk bed. It was a new experience for me, sharing a room with unknown men and women. I kept my passport safe inside my purse and under my pillow while sleeping, staying very conscious of its security.

Day 5, Exploring Nearby Places I woke up the next day at noon. After I had some coffee, the girl managing the hostel asked if I wanted breakfast. She offered curd with cornflakes, which I declined because I don't like curd. Instead, with her help, I ordered a pizza from a vegetarian pizza place, and I started feeling better. I met some people in the hostel, some from Syria and others from Italy. I struggled to understand their accents but kept pushing myself to get involved in their discussions. Despite the challenges, I felt more at ease and was slowly adapting to my new environment. I went out of the hostel in the evening to buy some vegetables to cook something. I searched for shops and found some potatoes, tomatoes, and rice. I decided to cook khichdi, an Indian dish made with rice, and added some chili flakes I had brought from home. After preparing my dinner, I ate and then went to sleep again. Vegetable shop Cooking in kitchen Food

Day 6, Tirana's Recent History The next day, I planned to explore the city and visited Bunkart-1, a fascinating museum in a massive underground bunker from the communist era. Originally built as a shelter for Albania's political and military elite, it now offers a unique glimpse into the country's history under Enver Hoxha's oppressive regime. The museum's exhibits include historical artifacts, photographs, and multimedia displays that detail the lives of Albanians during that time. Walking through the dimly lit corridors, I felt the weight of history and gained a deeper understanding of Albania's past. Bunkart museum photos

Day 7-8, Meeting Friends From India The next day, I met Chirag by chance; he was returning from the Debian Conference 2022 held in Prizren, Kosovo, and staying at the same hostel. When I encountered him, he was talking on the phone, and I recognized from his accent that he was Indian. I introduced myself, and we discovered we had some mutual friends. Chirag told me that our common friend, Raju, was also coming to stay at the hostel the next day. This news made me feel relaxed and happy to have people I knew around. When Raju arrived, the three of us, Chirag, Raju, and I, planned to have dinner at an Indian restaurant and explore Tirana city. I had a great time talking with them and enjoying their company. Friends on street

Day 9-10, Meeting More Friends Raju had a ticket to leave soon, so Chirag and I made a plan to visit Shkodër and the nearby Komani Lake for kayaking. We started our journey early in the morning by bus and reached Shkodër. There, we met new friends from the conference, Pavit and Abraham, who were already there. We had dinner together and enjoyed an ice cream treat from Chirag. Friends at dinner

Day 12, Kayaking And Saying Goodbye To Friends The next day, Pavit and Abraham had a flight back to India, so Chirag and I went to Komani Lake. We had an adventurous time kayaking, even though neither of us knew how to swim. We took a ferry through the backwaters to the island on Komani Lake and enjoyed a fantastic adventure together. After our trip, Chirag returned to Tirana for his flight back to India, leaving me to continue my journey alone. Lake with mountain Kayak

Day 13, Climbing Rozafa Castle Stopping over in Shkodër, I visited Rozafa Castle. Despite the language barrier, as most locals spoke only Albanian, people around me guided me correctly on how to get there. At times, I used applications like Google Translate to communicate. To read signs or hotel menus, I used the language converter in Google Photos. I even used the audio converter to understand and speak some basic Albanian phrases. View from top of castle Rozafa Castle I took a bus from Shkodër to the southern part of Albania, heading to Sarandë. The journey lasted about five to six hours, and I had booked a stay at Mona's Hostel. Upon arrival, I met Eliza from America, and we went together to Ksamil Beach, spending a wonderful day there.

Day 14, Vlora Beach: Beachside Cycling Next, I traveled to Vlorë, where I stayed for one day. During my time there, I enjoyed beachside cycling on a bicycle provided by the hostel owner and spent some time feeding fish. I also met a fellow traveler from Delhi who had brought along some preserved Indian curry. He kindly shared it with me, which was a welcome change after nearly 15 days without authentic Indian cuisine, except for what I had cooked myself in various hostels. Sunset on beach Kalyani on beach Beach with street Beachside cycling

Day 15-16, Visiting Durrës, Traveling Back To Tirana I then visited Durrës, exploring its beautiful beaches, before heading back to Tirana one day before my flight home. On the day of my flight, my alarm didn't go off, and I woke up late at the hostel. In a frantic rush, I packed everything in just five minutes and dashed toward the city center to catch the bus to the airport. If I had been just five minutes later, I would have missed the bus; thankfully, I managed to stop it just in time and began my journey back home, reflecting on the incredible adventure I had experienced. I arrived at the airport in time, and after clearing immigration, I boarded my flight, which had a layover in Warsaw, Poland. The journey from Tirana to Warsaw took about two and a half hours, followed by a seven to eight-hour flight from Poland back to India. Once I arrived in Delhi, I returned to Zostel and booked a train ticket to Aurangabad for three days later.

Backview This trip was an incredible adventure for me. I never imagined I could accomplish something like this, but I did. Meeting diverse people, experiencing different cultures, and learning so much made this journey truly unforgettable. Looking back, I realize how much I've grown from this experience. Although I may have more opportunities to travel abroad in the future, this trip will always hold a special place in my heart. The memories I made and the incredible people I met along the way are irreplaceable. This experience goes beyond what I can express through this blog or words; it was incredibly precious to me. Every moment of this journey is etched in my memory, and I am grateful for every part of it.

8 August 2024

Jonathan Carter: DebConf24 Busan, South Korea

I'm finishing typing up this blog entry hours before my last 13-hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf. So this is the first DebConf in a long time where I didn't give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected it would be. In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it's nice to be excited about some ideas again that would otherwise be fading.

Recovering DPL It's a bit of a gear shift having been DPL for 4 years, and on the DebConf Committee for nearly 5 years before that, and then being at DebConf while some issues arise (as they always do during a conference). At first I jumped into high alert mode, but then I had to remind myself: it's not your problem anymore, let others deal with it. It was nice spending a little in-person time with Andreas Tille, our new DPL; we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas, and I hope to finish all that up by the end of August. During the Bits from the DPL talk, the usual question came up of whether Andreas will consider running for DPL again, to which he just responded in a slide: "Maybe". I think it's a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything. Also, having been DPL for four years, I have a lot to say about the role, and I think there's a lot we can fix in it, or at least discuss. If I had the bandwidth for it I would have scheduled a BoF on that, but I'll very likely do that for the next DebConf instead!

Video team I set up the standby loop for the video streaming setup. We call it loopy; it's a bunch of OBS scenes that show announcements, sponsors, the schedule and some social content. I wrote about it back in 2020, but it's evolved quite a bit since then, so I'm probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I'll summarize everything before then.

It would've been great if we could have had some displays in social areas that could show talks, the loop and other content, but we were just too pressed for time for that. This year's DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see how, for the most part, most attendees were very sympathetic to some rough edges (but I digress). I added more of the OBS machine setup to the videoteam's ansible repository, so as of now it just needs an ansible setup and the OBS data and it's good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible, and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks. This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn't have much time for in recent DebConfs, and it's been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)
Baekyongee is the university mascot, who's visible throughout the university. So of course we included this four-legged whale creature on the loop too!

Packaging I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:
  • Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
  • Worked a bit on bcachefs-tools (updating the packaging in git to 1.9.4), but it has a build failure that I need to look into (we might need a newer bindgen). Update: I'm probably going to ROM this package soon; it doesn't seem suitable for packaging in Debian.
  • Calamares: Tested a fix for encrypted installs, and uploaded it.
  • Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it's still in backports-NEW).
  • Backported obs-gradient-source for bookworm.
  • Did some initial packaging on Cambalache; I'll upload to unstable once wlroots (0.18) hits unstable.
  • Pixelorama 1.0: I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few show-stoppers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we're still on the 3 series in Debian, so I'll upload this once Godot 4.2 hits at least experimental. Godot software/games are otherwise quite easy to run; it's basically just source code / data that is installed and then run via godot-runner (the godot3-runner package in Debian).

BoFs Python Team BoF Link to the etherpad / pad archive link and video can be found on the talk page: https://debconf24.debconf.org/talks/31-python-bof/ The session ended up being extended to a second part, since all the issues didn't fit into the first session. I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition. There was a discussion whether Python 3.13 could still make it to testing in time for the freeze, and it seems that there is consensus that it can, although likely with the new experimental features, like the optional disabling of the global interpreter lock and the just-in-time compiler, turned off. I learned for the first time about the dead batteries project, PEP 594, which removes ancient modules that have mostly been superseded from the Python standard library. There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details, and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions, which imho made it the same as the SHOULD proposal). There's an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers. Louis-Philippe Véronneau was added as a new team admin, and there was some discussion on various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)
    Debian.net BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/37-debiannet-team-bof Debian Developers can set up services on subdomains of debian.net, but a big problem we've had before was that developers were on their own for hosting those services. This meant that they either hosted them on their DSL/fiber connection at home, paid for the hosting themselves, or hosted them at different services, which became an accounting nightmare when claiming back the funds used. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, which is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian. We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don't want lots of spurious requests). Last year we did a census to check which of the instances were still used, whether they received system updates, and to ask whether they are performing backups. It went well and some issues were found along the way, so we'll be doing that again. We also gained two potential volunteers to help run things, which is great.

    Debian Social BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/34-debiansocial-bof We discussed the services we run; you can view the current state of things at: https://wiki.debian.org/Teams/DebianSocial Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too. So we'll do some comparison of features and maintenance and find a way forward. At the time when Pleroma was installed, it was way ahead in terms of moderation features. Pixelfed is doing well and chugging along nicely; we should probably promote it more. Peertube is working well, although we learned that we still don't have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up. We're removing writefreely and plume. Nice concepts, but they didn't get much traction yet, and no one who signed up for these actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not. The WordPress multisite instance has some mild use; otherwise we haven't had any issues. Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It's very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding; this will also be a lot easier to manage once we have it on the new host. Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created it on debian.social during the initial height of COVID-19, when we didn't have the debian.net hosting yet), although in practice it doesn't really matter where it lives. Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we'll be doing that shortly.

    Debian Foundation Discussion BoF This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It's way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle.
The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it's DebConf again. I also introduced my intention to join the Debian Partners delegation. When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on top of the usual DPL roles, so I'm pursuing this with the Debian Partners team; more on that some other time. This session wasn't recorded, but if you feel like you missed something, don't worry: all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room, though, that we should push forward on this, and not reach another DebConf without having made progress on formalising Debian's structure more.

    Social Conference Dinner
    Conference Dinner Photo from Santiago
    The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat. Cheese and Wine Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :) Since I'm running out of time, I'll add some more photos to this post some time after publishing it :P Group Photo As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian's GitLab instance.
    Debian annual conference Debconf 24, Busan, South Korea
    Photography: Aigars Mahinovs aigarius@debian.org
    License: CC-BYv3+ or GPLv2+
    Talking Ah yes, talking to people is a big part of DebConf, but I didn't keep track of it very well.
    • I listened to Alper a bit about his ideas for his talk about the Debian installer.
    • I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
    • Listened to Gunnar and Julian having a discussion about GPG and APT, which was interesting.
    • I learned that you can learn Hangul, the Korean alphabet, in about an hour or so (I wish I had known that during all my years of playing StarCraft II).
    • We had the usual continuous keysigning party. Besides its intended function, this is always a good ice breaker and a way for shy people to meet other shy people.
    • and many other fly-by discussions.

    Stuff that didn't happen this DebConf
    • loo.py: A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we'd ever replace OBS for loopy. I was hoping to have some time to hack on this and try to recreate the current loopy in loo.py, but didn't have the time.
    • toetally: This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would've been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on it further. I'll try to have something ready for the next mini conf though.
    • extrepo on debian live: I think we should have extrepo installed by default on desktop systems. I meant to start a discussion on this, but perhaps it's just time I go ahead and do it and announce it.
    • Live stream to peertube server: It would've been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
    • Desktop Egg: There was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660). I have an idea that I meant to implement months ago, but too many things got in the way. It's based on Juliette Taka's Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn't hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.
    So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!

      24 July 2024

      Kentaro Hayashi: apt-upgrade-canary - PoC apt JSON hook use case

      apt-upgrade-canary is a helper program that alerts you when upgrading packages via apt. If any of the packages to be upgraded carry a critical or serious bug, it shows warnings in the terminal, and you can then stay on the current version of a package by canceling the upgrade. This program is meant to help you avoid installing problematic packages via manual upgrades. Usually, you should delegate package upgrades to unattended-upgrades and apt-listbugs. apt-listbugs kindly warns you if there are known bugs, and in most cases that is safe enough. But there is one exception: apt-listbugs cannot always follow the very latest critical or serious bugs in a timely manner. apt-upgrade-canary instead checks the UDD mirror database (taking care to cache queries so as not to send redundant ones too frequently).
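      Conceptually, the check against UDD is something like this sketch (the table and column names are my guess at the relevant UDD schema, and not necessarily what apt-upgrade-canary actually runs):
      PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
        --username=udd-mirror udd -c "select id, package, severity, title \
         from bugs where package = 'hello' and status = 'pending' \
         and severity in ('serious', 'grave', 'critical');"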

      Getting started
      1. Install required packages.
      $ sudo apt install -y make ruby-pg ruby-json ruby-term-ansicolor
      2. Clone https://salsa.debian.org/kenhys/apt-upgrade-canary.
      3. Just sudo make install.

      Use case in action If there are no serious bugs reported against the target packages, apt-upgrade-canary reports no error.
      apt-upgrade-canary: no problem
      If something weird is going on, it reports the matching bugs.
      apt-upgrade-canary: serious case
      So you can stay on the current version.

      17 July 2024

      Gunnar Wolf: Scholarly spam Wulfenia

      I just got one of those utterly funny spam messages. And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly. Here is the mail:
      From: Hermine Wolf <hwolf850@gmail.com>
      To: me, obviously  
      Date: Mon, 15 Jul 2024 22:18:58 -0700
      Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
      Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>
      This message has visual elements included. If they don't display, please   
      update your email preferences.
      *Dear Esteemed Author,*
      Upon careful examination of your recent research articles available online,
      we are excited to invite you to submit your latest work to our esteemed    
      journal, '*WULFENIA*'. Renowned for upholding high standards of excellence 
      in peer-reviewed academic research spanning various fields, our journal is 
      committed to promoting innovative ideas and driving advancements in        
      theoretical and applied sciences, engineering, natural sciences, and social
      sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of 
      *1.000* and is highly respected in prestigious databases including the     
      Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,   
      Elsevier BIOBASE, and BIOSIS Previews.                                     
                                                                                 
      *Wulfenia submission page:*                                                
      [image: research--check.png][image: scrutiny-table-chat.png][image:        
      exchange-check.png][image: interaction.png]                                
      .                                                                          
      Please don't reply to this email                                           
                                                                                 
      We sincerely value your consideration of 'WULFENIA' as a platform to       
      present your scholarly work. We eagerly anticipate receiving your valuable 
      contributions.                                                             
      *Best regards,*                                                            
      Professor Dr. Vienna S. Franz                                              
      
      Scholarly spam Who cares what Wulfenia is about? It's about you, my stupid Wolf cousin!

      11 July 2024

      Petter Reinholdtsen: More than 200 orphaned Debian packages moved to git, 216 to go

      In April, I started migrating orphaned Debian packages without any version control system listed in debian/control to git. This morning, my Debian QA page finally reached 200 QA packages migrated. In reality there are a few more, as the packages uploaded by someone else after my initial upload have disappeared from my QA uploads list. As I am running out of steam and will most likely focus on other parts of Debian moving forward, I hope someone else will find time to continue the migration and bring the number of orphaned packages without any version control system down to zero. Here is the updated recipe if someone wants to help out. To locate packages to work on, the following one-liner can be used:
      PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
        --username=udd-mirror udd -c "select source from sources \
         where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
         OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
         OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
         order by random() limit 10;"
      
      Pick a random package from the list and run the latest edition of the script debian-snap-to-salsa with the package name as the argument to prepare a git repository with the existing packaging. This will download old Debian packages from snapshot.debian.org. Note that very recent uploads will not be included, so check out the package on tracker.debian.org. Next, run gbp buildpackage --git-ignore-new to verify that the package builds as it should, and then visit https://salsa.debian.org/debian/ and make sure there is not already a git repository for the package there. I also run git log -p debian/control and look for vcs entries, to check if the package used to have a git repository on Alioth and see if it can be a useful starting point moving forward. If all this checks out, I create a new gitlab project below the Debian group on salsa, push the package source there and upload a new version. I tend to also ensure build hardening is enabled, if it proves to be easy, and check if I can easily fix any lintian issues or bug reports. If the process takes more than 20 minutes, I drop it and move on to another package. If I find patches in debian/patches/ that have not yet been passed upstream, I send an email to make sure upstream knows about them. This has proved to be a valuable step, and caused several new releases for software that initially appeared abandoned. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
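      Condensed into shell, one iteration of the recipe looks roughly like this (a sketch: SOURCEPACKAGE is a placeholder, I am assuming the script leaves a checkout named after the package, and creating the salsa project itself happens in the web UI):
      # Build a git repository from the historical uploads on snapshot.debian.org.
      debian-snap-to-salsa SOURCEPACKAGE
      cd SOURCEPACKAGE

      # Verify the package still builds, and look for old Vcs entries that
      # might point at a reusable repository from the Alioth days.
      gbp buildpackage --git-ignore-new
      git log -p debian/control | grep -i vcs

      # After creating the project under https://salsa.debian.org/debian/,
      # push the packaging and upload a new version as usual.
      git remote add origin git@salsa.debian.org:debian/SOURCEPACKAGE.git
      git push --all origin && git push --tags origin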

      7 July 2024

      Russ Allbery: Review: Welcome to Boy.Net

      Review: Welcome to Boy.Net, by Lyda Morehouse
      Series: Earth's Shadow #1
      Publisher: Wizard's Tower Press
      Copyright: April 2024
      ISBN: 1-913892-71-9
      Format: Kindle
      Pages: 355
      Welcome to Boy.Net is a science fiction novel with cyberpunk vibes, the first of a possible series. Earth is a largely abandoned wasteland. Humanity has survived in the rest of the solar system and spread from Earth's moon to the outer planets. Mars is the power in the inner system, obsessed with all things Earth and effectively run by the Earth Nations' Peacekeeping Force, the ENForcers. An ENForcer soldier is raised in a creche from an early age, implanted with cybernetic wetware and nanite enhancements, and extensively trained to be an elite fighting unit. As befits a proper military, every ENForcer is, of course, male.

      The ENForcers thought Lucia Del Toro was a good, obedient soldier. They also thought she was a man. They were wrong about those and many other things. After her role in an atrocity that named her the Scourge of New Shanghai, she went AWOL and stole her command ship. Now she and her partner/girlfriend Hawk, a computer hacker from Luna, make a living with bounty hunting jobs in the outer system. The ENForcers rarely cross the asteroid belt; the United Miners see to that. The appearance of an F-class ENForcer battle cruiser in Jupiter orbit is a very unpleasant surprise. Lucia and Hawk hope it has nothing to do with them. That hope is dashed when ENForcers turn up in the middle of their next job: a bounty to retrieve an AI eye.

      I first found Lyda Morehouse via her AngeLINK cyberpunk series, the last of which was published in 2011. Since then, she's been writing paranormal romance and urban fantasy as Tate Hallaway. This return to science fiction is an adventure with trickster hackers, throwback anime-based cowboy bars, tense confrontations with fascist thugs, and unexpected mutual aid, but its core is a cyberpunk look at the people who are unwilling or unable to follow the rules of social conformity. Gender conformity, specifically.

      Once you understand what this book is about, Welcome to Boy.Net is a great title, but I'm not sure it serves its purpose as a marketing tool. This is not the book that I would have expected from that title in isolation, and I'm a bit worried that people who would like it might pass it by. Inside the story, Boy.Net is the slang term for the cybernetic network that links all ENForcers. If this were the derogatory term used by people outside the ENForcers, I could see it, but it's what the ENForcers themselves call it. That left me with a few suspension of disbelief problems, since the sort of macho assholes who are this obsessed with male gender conformance usually consider "boys" to be derogatory and wouldn't call their military cybernetic network something that sounds that belittling, even as a joke. It would be named after some sort of Orwellian reference to freedom, or something related to violence, dominance, brutality, or some other "traditional male" virtue.

      But although this term didn't work for me as world-building, it's a beautiful touch thematically. What Morehouse is doing here is the sort of concretized metaphor that science fiction is so good at: an element of world-building that is both an analogy for something the reader is familiar with and is also a concrete piece of world background that follows believable rules and can be manipulated by the characters. Boy.Net is trying to reconnect to Lucia against her will. If it succeeds, it will treat the body modifications she's made as damage and try to reverse all of them, attempting to convert her back to the model of an ENForcer. But it is also a sharp metaphor for how gender roles are enforced in our world: a child assigned male is connected to a pervasive network of gender expectations and is programmed, shaped, and monitored to match the social role of a boy. Even if they reject those expectations, the gender role keeps trying to reconnect and convert them back.

      I really enjoyed Morehouse's handling of the gender dynamics. It's an important part of the plot, but it's not the only thing going on or the only thing the characters think about. Lucia is occasionally caught by surprise by well-described gender euphoria, but mostly gender is something other people keep trying to impose on her because they're obsessed with forcing social conformity.

      The rest of the book is a fun romp with a few memorable characters and a couple of great moments with unexpected allies. Hawk and Lucia have an imperfect but low drama relationship that features a great combination of insight and the occasional misunderstanding. It's the kind of believable human relationship that I don't see very much in science fiction, written with the comfortable assurance of an author with over a dozen books under her belt. Some of the supporting characters are also excellent, including a non-binary deaf hacker that I wish had been a bit more central to the story.

      This is not the greatest science fiction novel I've read, but it was entertaining throughout and kept me turning the pages. Recommended if you want some solar-system cyberpunk in your life. Welcome to Boy.Net reaches a conclusion of sorts, but there's an obvious hook for a sequel and a lot of room left for more stories. I hope enough people buy this book so that I can read the sequel. Rating: 7 out of 10

      4 July 2024

      Arturo Borrero González: Wikimedia Toolforge: migrating Kubernetes from PodSecurityPolicy to Kyverno

      The Château de Valère and the Haut de Cry in July 2022. Christian David, CC BY-SA 4.0, via Wikimedia Commons

      This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez. Summary: this article shares the experience and learnings of migrating away from Kubernetes PodSecurityPolicy into Kyverno in the Wikimedia Toolforge platform.

      Wikimedia Toolforge is a Platform-as-a-Service, built with Kubernetes, and maintained by the Wikimedia Cloud Services team (WMCS). It is completely free and open, and we welcome anyone to use it to build and host tools (bots, webservices, scheduled jobs, etc) in support of Wikimedia projects. We provide a set of platform-specific services, command line interfaces, and shortcuts to help in the task of setting up webservices, jobs, and stuff like building container images, or using databases. Using these interfaces makes the underlying Kubernetes system pretty much invisible to users. We also allow direct access to the Kubernetes API, and some advanced users do directly interact with it. Each account has a Kubernetes namespace where they can freely deploy their workloads. We have a number of controls in place to ensure performance, stability, and fairness of the system, including quotas, RBAC permissions, and, until recently, PodSecurityPolicies (PSP). At the time of this writing, we had around 3,500 Toolforge tool accounts in the system.

      We adopted PSP early, in 2019, as a way to make sure Pods had the correct runtime configuration. We needed Pods to stay within the safe boundaries of a set of pre-defined parameters. Back when we adopted PSP there was already the option to use 3rd party agents, like OpenPolicyAgent Gatekeeper, but we decided not to invest in them and went with a native, built-in mechanism instead. In 2021 it was announced that the PSP mechanism would be deprecated, and removed in Kubernetes 1.25. Even though we had been warned years in advance, we did not prioritize the migration away from PSP until we were on Kubernetes 1.24, blocked and unable to upgrade forward without taking action. The WMCS team explored different alternatives for this migration, but eventually we decided to go with Kyverno as a replacement for PSP. With that decision began the journey described in this blog post.

      First, we needed a source code refactor of one of the key components of our Toolforge Kubernetes: maintain-kubeusers. This custom piece of software, which we built in-house, contains the logic to fetch accounts from LDAP and do the necessary instrumentation on Kubernetes to accommodate each one: create the namespace, RBAC, quota, a kubeconfig file, etc. With the refactor, we introduced a proper reconciliation loop, in a way that the software would have a notion of what needs to be done for each account, what is missing, what to delete, upgrade, and so on. This would allow us to easily deploy new resources for each account, or iterate on their definitions.

      The initial version of the refactor had a number of problems, though. For one, the new version of maintain-kubeusers was doing more filesystem interaction than the previous version, resulting in a slow reconciliation loop over all the accounts. We used NFS as the underlying storage system for Toolforge, and it could be very slow, for reasons beyond this blog post. This was corrected in the next few days after the initial refactor rollout. A side note with an implementation detail: we stored a configmap in each account namespace with the state of each resource; storing more state in this configmap was our solution to avoid additional NFS latency. I initially estimated this refactor would take me a week to complete, but unfortunately it took me around three weeks instead. Previous to the refactor, several manual steps and cleanups were required when updating the definition of a resource. The process is now automated, more robust, performant, efficient and clean. So in my opinion it was worth it, even if it took more time than expected.

      Then, we worked on the Kyverno policies themselves. Because we had a very particular PSP setting, in order to ease the transition we tried to replicate its semantics on a 1:1 basis as much as possible. This involved things like transparent mutation of Pod resources, then validation. Additionally, we had one different PSP definition for each account, so we decided to create one different Kyverno namespaced policy resource for each account namespace (remember, we had 3.5k accounts). We created a Kyverno policy template that we would then render and inject for each account (see the sketch after the stages list below). For developing and testing all this, the maintain-kubeusers and Kyverno bits, we had a project called lima-kilo: a local Kubernetes setup replicating production Toolforge. This was used by each engineer on their laptop as a common development environment. We had planned the migration from PSP to Kyverno policies in stages, like this:
      1. update our internal template generators to make Pod security settings explicit
      2. introduce Kyverno policies in Audit mode
      3. see how the cluster would behave with them, and if we had any offending resources reported by the new policies, and correct them
      4. modify Kyverno policies and set them in Enforce mode
      5. drop PSP
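      As a rough illustration of stages 2 and 3, the deployment amounted to something like the following sketch. The namespace naming, filtering, template file and placeholder syntax are all stand-ins, not the actual Toolforge implementation:
      # Stage 2: render the per-account policy template into every tool
      # namespace; the policies start out in Audit mode.
      for ns in $(kubectl get namespaces -o name | grep '^namespace/tool-' | cut -d/ -f2); do
        sed "s/@NAMESPACE@/$ns/g" kyverno-policy-template.yaml | kubectl apply -f -
      done

      # Stage 3: Kyverno records audit results as PolicyReport resources,
      # so offending workloads can be located before switching to Enforce.
      kubectl get policyreport --all-namespaces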
      In stage 1, we updated things like the toolforge-jobs-framework and tools-webservice.

      In stage 2, when we deployed the 3.5k Kyverno policy resources, our production cluster died almost immediately. Surprise. All the monitoring went red, the Kubernetes apiserver became unresponsive, and we were unable to perform any administrative actions in the Kubernetes control plane, or even on the underlying virtual machines. All Toolforge users were impacted. This was a full scale outage that required the energy of the whole WMCS team to recover from. We temporarily disabled Kyverno until we could learn what had occurred. This incident happened despite having tested before in lima-kilo and in another pre-production cluster we had, called Toolsbeta. But we had not tested with that many policy resources; clearly, this was something scale-related. After the incident, I went on and created 3.5k Kyverno policy resources on lima-kilo, and indeed I was able to reproduce the outage. We took a number of measures: we corrected a few errors in our infrastructure, reached out to the Kyverno upstream developers asking for advice, and in the end adjusted the setup to accommodate our needs. I have to admit, I was briefly tempted to drop Kyverno, and even to stop pursuing an external policy agent entirely and write our own custom admission controller, out of concerns over the performance of this architecture. However, after applying all of those measures, the system became very stable, so we decided to move forward. The second attempt at deploying it all went through just fine. No outage this time.

      When we were in stage 4 we detected another bug. We had been following the Kubernetes upstream documentation for setting securityContext to the right values. In particular, we were enforcing procMount to be set to the default value, which per the docs was "DefaultProcMount". However, that string is the name of the internal variable in the source code, whereas the actual default value is the string "Default". This caused pods to be rightfully rejected by Kyverno while we figured out the problem. I sent a patch upstream to fix this problem.

      We finally had everything in place, reached stage 5, and were able to disable PSP. We unloaded the PSP controller from the kubernetes apiserver, and deleted every individual PSP definition. Everything was very smooth in this last step of the migration.

      This whole PSP project, including the maintain-kubeusers refactor, the outage, and all the different migration stages, took roughly three months to complete. For me there are a number of valuable lessons to learn from this project. For one, scale is something to consider, and test, when evaluating a new architecture or software component. Not doing so can lead to service outages or unexpectedly poor performance. This is in the first chapter of the SRE handbook, but we got a reminder the hard way. This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.
