Search Results: "ender"

20 November 2024

Russell Coker: Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I'm currently working in the IT department of a medium-sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I am involved in fixing all Linux compatibility issues on it, I can fix most problems in a small fraction of the time it would take on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it's clear that a good deal of centralisation is appropriate. Among people doing technical computer work such as programming, a large portion are computer hobbyists who like to fiddle with computers. But if the support system is run well, even they will appreciate having computers just work most of the time, and for a large portion of the failures having someone immediately recognise the problem (like the issues with NVidia drivers that I have documented), so that first-line support can implement workarounds without the need for a lengthy investigation.

A big problem with email on the modern Internet is the prevalence of phishing scams. The current corporate approach is to send out test phishing email to people and then force computer security training on everyone who clicks on it. One problem with this is that attackers only need to fool one person on one occasion, and when you have hundreds of people doing something on rare occasions that's not part of their core work, they will periodically get it wrong. When every test phishing run finds several people who need extra training, it seems obvious to me that this isn't a solution that's working well. I will concede that the majority of people who click on the test phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it's still clear that this isn't a great solution.

Let's imagine for the sake of discussion that everyone in a company was 100% accurate at identifying phishing email and other scam email. Would the problem then be solved? I believe that even in that hypothetical case it would not be, due to the wasted time and concentration. People can spend minutes determining whether a single email is legitimate. On many occasions I have had relatives and clients forward me email because they were unsure whether it was valid. It's great that they seek expert advice when they are unsure, but it would be better if they didn't have to go to that effort. What we ideally want to do is centralise the anti-phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or phishing message is sent to 600 people in a company, you don't want 600 people to individually consider it; you want one person to recognise it and delete/block it for all 600. If 600 people each spend one minute considering the matter, that's 10 work hours wasted!

The Rationale for Human Filtering

For personal email, human filtering usually isn't viable because people want privacy.
But corporate email isn't private: it's expected that the company can read it under certain circumstances (in most jurisdictions), and having email open in public areas of the office where colleagues might see it is normal. You can visit gmail.com on your lunch break to read personal email, but every company policy (and common sense) says not to have actually private correspondence on company systems. The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company, instead of 500 people each spending a couple of minutes working out whether it's legit, you have one person who's good at recognising spam (because it's their job) who clicks a "remove mail from this sender from all mailboxes" button, and 500 messages are deleted and the sender is blocked.

Delaying email would be a concern. But it's standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention, so human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company, email to the senior people would probably never be noticeably delayed, and while people like me would have their mail delayed on occasion, people doing technical work generally don't have notifications turned on for email because it's a distraction and a fast response isn't needed. There are a few senders where a fast response is required, mostly corporations sending a "click this link within 10 minutes to confirm your password change" email. Setting up rules for all such senders that are relevant to work wouldn't be difficult.

How to Solve This

Spam and phishing became serious problems over 20 years ago, and we have had 20 years of evolution of email filtering which still hasn't solved the problem. The vast majority of email addresses in use are run by major managed service providers, and they haven't managed to filter out spam/phishing mail effectively, so I think we should assume that it's not going to be solved by filtering. There is talk about what AI technology might do for filtering spam/phishing, but that same technology can produce better-crafted hostile email to avoid filters. An additional complication for corporate email filtering is that some criteria used to filter personal email don't apply to corporate mail. If someone sends email to me personally about millions of dollars, it's obviously not legit. If someone sends email to a company, it could be legit: companies routinely have people emailing potential clients about how their products can save millions of dollars, and they make purchases of over a million dollars. This is not a problem that's impossible to solve; it's just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure its mail server to not accept direct external mail for any employee's address. Then people could email files to colleagues etc. without any restriction, but spam and phishing wouldn't be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department), with a team of people who would read those mailboxes and forward mail to the right people if it seemed legit.
Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account, where it could be filtered by someone with a suitable security level without requiring any special configuration of the mail server. So the person who reads the "it" mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC-authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs). For a FOSS implementation of such things, the server side (including extracting account data from a directory to determine which department a user is in) would be about a day's work, and then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside, I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and for adding certain automated messages to the allow-list.

The Change

One of the first things to do is configuring the system to add every recipient of an outbound message to the allow list for receiving a reply. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases. But even with processing the sent-mail folders, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain-wide additions of all the sites that send password confirmation messages. You would need rules to direct inbound mail sent to the old addresses to the new style, and then a huge amount of mail would need to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user, that's 100 hours of work, roughly 12 work days. If you had everyone from the IT department, reception, and the executive assistants working on it, that would be viable. After about a week there wouldn't be much work involved in maintaining it. Then after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email, then with 600 employees that's 10 hours of wasted time per day. Effectively wasting one employee's work! I'm sure that's the low end of the range; 5 minutes average per day doesn't seem unreasonable, especially when people are unsure about phishing email and send it to Slack so that multiple employees spend time analysing it. So you could have the equivalent of 5 employees being wasted by hostile email, and avoiding that would take a fraction of the time of a few people, adding up to less than an hour of total work per day. Then there's the training time for phishing mail: instead of having every employee spend half an hour doing email security training every few months (that's 300 hours, or 7.5 working weeks, every time you do it), you just train the few experts. In addition to saving time, there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.
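To make the routing rules described above concrete, here is a minimal sketch written as a Sieve filter. The dell.com/example.com names and the folder layout are illustrative only; it assumes a server that supports the subaddress and variables extensions and that records DMARC results in an Authentication-Results header:

require ["fileinto", "envelope", "subaddress", "variables"];

# Mail arrives addressed to it+<user>@example.com; capture <user>.
if envelope :detail :matches "to" "*" {
    set "user" "${1}";
    # Known good, DMARC-authenticated senders bypass the human screener.
    if allof (address :domain :is "from" "dell.com",
              header :contains "Authentication-Results" "dmarc=pass") {
        redirect "${user}@example.com";
    } else {
        # Everything else lands in a per-user folder for the screeners.
        fileinto "screening.${user}";
    }
}
# Mail without a +detail part falls through to the department inbox.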
Will They Do It?

They probably won't do it any time soon; I don't think it's expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it's probably regarded as too difficult to change anything, and the costs aren't obvious. For 30 years I have been unsuccessful in suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time.

18 November 2024

Russ Allbery: Review: Delilah Green Doesn't Care

Review: Delilah Green Doesn't Care, by Ashley Herring Blake
Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374
Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect. Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes. Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like. Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her. I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down. It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into. This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. 
That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules. This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate. The encounter between Delilah and Claire in the bar near the start of the book is a cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other. This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience. I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth. Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other. Followed by Astrid Parker Doesn't Fail. Rating: 9 out of 10

9 November 2024

Jonathan Dowland: Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog. It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end and splices the results into the live page. For example, an htmx-powered button may request a URI on the server and receive HTML in response; the button itself is then replaced by the resulting HTML within the page. I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year and which powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:
  1. You load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.
  2. The link loads a new page, generated dynamically and served back to you via CGI. This contains an HTML form for you to write your comment.
  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in step 1) are regenerated.
  4. The server's response to the request in step 3 is an HTTP 302 redirect, instructing the browser to go back to the page in step 1.
First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:
  • hx-get="<CGI ENDPOINT GOES HERE>": suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form.
  • hx-select=".editcomment form": extracts the edit-comment form from within that document.
  • hx-swap="beforeend" and hx-target=".addcomment": append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment).
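Put together, the modified anchor tag could look something like this (a sketch only, not IkiWiki's actual template markup; it assumes the anchor itself carries the .addcomment class that hx-target points at):

<a class="addcomment"
   hx-get="<CGI ENDPOINT GOES HERE>"
   hx-select=".editcomment form"
   hx-swap="beforeend"
   hx-target=".addcomment">comment on this page</a>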
Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!

Second step: handling previews
The old Preview Comment page
In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later. Rather than load new pages, I wanted "Preview" to insert a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview. IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment. The rest of this was achieved with htmx attributes on the "Preview" button, similar to the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it. Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment. On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones: a status message, "Your comment will be posted after moderator review", without the original page content or comments. I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

Handling moderated comments
Moderation message upon submitting a comment
One of my goals with this project was not to modify IkiWiki itself, but I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead. I now had to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews.) You can read the full function here: jon.js.

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience. You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: brownie points if you can figure out what it is. Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully degrading way (without JavaScript, the old method continues to work fine), without modifying the existing application itself (well, almost), and without having to write very much code of my own at all: nearly all of the configuration was declarative.
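As a closing sketch of the beforeSwap approach described above (the selector here is hypothetical; the real implementation in jon.js differs):

// Intercept the HTTP 202 "comment is moderated" response before htmx
// swaps it in, and re-use the preview machinery instead.
document.body.addEventListener('htmx:beforeSwap', function (evt) {
    if (evt.detail.xhr.status !== 202) {
        return;                    // normal responses: let htmx proceed
    }
    evt.detail.shouldSwap = false; // drop the bare status page
    // Render the comment in-page via the Preview request (hypothetical
    // selector), then relabel the result as awaiting moderation.
    htmx.trigger('.editcomment [name=preview]', 'click');
});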

20 October 2024

Bits from Debian: Ada Lovelace Day 2024 - Interview with some Women in Debian

Ada Lovelace portrait

Ada Lovelace Day was celebrated on October 8 in 2024, and on this occasion, to celebrate and raise awareness of the contributions of women to the STEM fields, we interviewed some of the women in Debian. Here we share their thoughts, comments, and concerns with the hope of inspiring more women to become part of the sciences and, of course, to work inside of Debian. This article was simulcast to the debian-women mailing list.

Beatrice Torracca

1. Who are you? I am Beatrice, I am Italian. Internet technology and everything computer-related is just a hobby for me, not my line of work or the subject of my academic studies. I have too many interests and too little time. I would like to do lots of things and at the same time I am too Oblomovian to do any. 2. How did you get introduced to Debian? As a user, I started using newsgroups when I had my first dialup connection, and there was always talk about this strange thing called Linux. Since moving from DR DOS to Windows was a shock for me, feeling like I had lost control of my machine, I tried Linux with Debian Potato, and I have never strayed from Debian since then for my personal equipment. 3. How long have you been into Debian? Define "into". As a user... since Potato, too many years to count. As a contributor, a similar amount of time, since early 2000 I think. My first archived email about contributing to the translation of the descriptions of Debian packages dates from 2001. 4. Are you using Debian in your daily life? If yes, how? Yes!! I use testing. I have it on my desktop PC at home and I have it on my laptop. The desktop is where I have a local IMAP server that fetches all the mails of my email accounts, and where I sync and back up all my data. On both I do day-to-day stuff (from email to online banking, from shopping to taxes), all forms of entertainment, a bit of work if I have to work from home (GNU R for statistics, LibreOffice... the usual suspects). At work I am required to have another OS, sadly, but I am working on setting up a Debian Live system to use there too. Plus, if at work we start doing bioinformatics there might be a Linux machine in our future... I will of course suggest and hope for a Debian system. 5. Do you have any suggestions to improve women's participation in Debian? This is a tough one. I am not sure. Maybe more visibility for the women already in the Debian Project, and making the newcomers feel seen, valued and welcomed. A respectful and safe environment is key too, of course, but I think Debian made huge progress in that aspect with the Code of Conduct. I am a big fan of promoting diversity and inclusion; there is always room for improvement.

Ileana Dumitrescu (ildumi)

1. Who are you? I am just a girl in the world who likes cats and packaging Free Software. 2. How did you get introduced to Debian? I was tinkering with a computer running Debian a few years ago, and I decided to learn more about Free Software. After a search or two, I found Debian Women. 3. How long have you been into Debian? I started looking into contributing to Debian in 2021. After contacting Debian Women, I received a lot of information and helpful advice on different ways I could contribute, and I decided package maintenance was the best fit for me. I eventually became a Debian Maintainer in 2023, and I continue to maintain a few packages in my spare time. 4. Are you using Debian in your daily life? If yes, how? Yes, it is my favourite GNU/Linux operating system! I use it for email, chatting, browsing, packaging, etc.
5. Do you have any suggestions to improve women's participation in Debian? The mailing list for Debian Women may attract more participation if it is utilized more. It is where I started, and I imagine participation would increase if it were more engaging.

Kathara Sasikumar (kathara)

1. Who are you? I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer from India. I try to become a creative person through sketching or playing guitar chords, but it doesn't work! xD 2. How did you get introduced to Debian? When I first started college, I was that overly enthusiastic student who signed up for every club and volunteered for anything that crossed my path, just like every other fresher. But then the pandemic hit, and like many, I hit a low point. COVID depression was real, and I was feeling pretty down. Around this time, the FOSS Club at my college suddenly became more active. My friends, knowing I had a love for free software, pushed me to join the club. They thought it might help me lift my spirits and get out of the slump I was in. At first, I joined only out of peer pressure, but once I got involved, the club really took off. FOSS Club became more and more active during the pandemic, and I found myself spending more and more time with it. A year later, we had the opportunity to host a MiniDebConf at our college, where I got to meet a lot of Debian developers and maintainers. Attending their talks and talking with them gave me a wider perspective on Debian, and I loved the Debian philosophy. At that time, I had been distro hopping but never quite settled down. I occasionally used Debian but never stuck around. However, after the MiniDebConf, I found myself using Debian more consistently, and it truly connected with me. The community was incredibly warm and welcoming, which made all the difference. 3. How long have you been into Debian? Now, I've been using Debian as my daily driver for about a year. 4. Are you using Debian in your daily life? If yes, how? It has become my primary distro, and I use it every day for continuous learning and working on various software projects with free and open-source tools. Plus, I've recently become a Debian Maintainer (DM) and have taken on the responsibility of maintaining a few packages. I'm looking forward to contributing more to the Debian community.

Rhonda D'Vine (rhonda)

1. Who are you? My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old, working in IT. 2. How did you get introduced to Debian? I was already looking into Linux because of university; at first it was SuSE, and people played around with gtk. But when they packaged GNOME and it just didn't even install, I looked for alternatives. A work colleague from back then gave me a CD of Debian, though I couldn't install from it because Slink didn't recognize the PCMCIA drive. I had to install it via floppy disks, but apart from that it was quite well done. And the early GNOME was working, so I never looked back. 3. How long have you been into Debian? Even before I was more involved, a colleague asked me whether I could help with translating the release documentation. That was my first contribution to Debian, for the slink release in early 1999. I was also using some other software before on my SuSE systems, and I wanted to continue to use it on Debian, obviously. So that's how I got involved with packaging in Debian. But I continued to help with translation work; for a long period of time I was almost the only person active for the German part of the website.
4. Are you using Debian in your daily life? If yes, how? Being involved with Debian has been a big part of the reason I got my jobs for a long time now. I have always worked with maintaining Debian (or Ubuntu) systems. Privately I run Debian on my laptop, occasionally switching to Windows via dual boot when (rarely) needed. 5. Do you have any suggestions to improve women's participation in Debian? There are factors that we can't influence, like the fact that a lot of women are pushed into care work because patriarchal structures work that way, and so don't have the time or energy to invest a lot into other things. But we could learn to appreciate smaller contributions better, and not focus so much on the quantity of contributions. When we look at longer discussions on mailing lists, those who write more mails don't actually contribute more to the discussion; they often repeat themselves without adding more substance. By working on our own discussion patterns we could create a more welcoming environment for a lot of people.

Sophie Brun (sophieb)

1. Who are you? I'm a 44-year-old French woman. I'm married and I have 2 sons. 2. How did you get introduced to Debian? In 2004 my boyfriend (now my husband) installed Debian on my personal computer to introduce me to Debian. I knew almost nothing about Open Source. During my engineering studies, a professor mentioned the existence of Linux, Red Hat in particular, but without giving any details. I learnt Debian by using it and reading (in advance) The Debian Administrator's Handbook. 3. How long have you been into Debian? I've been a user since 2004. But I only started contributing to Debian in 2015: I had quit my job and I wanted to work on something more meaningful. That's why I joined my husband at Freexian, his company. Unlike most people, I think, I started contributing to Debian for my work. I only became a DD in 2021, under gentle social pressure and when I felt confident enough. 4. Are you using Debian in your daily life? If yes, how? Of course I use Debian in my professional life for almost all tasks: from administrative tasks to Debian packaging. I also use Debian in my personal life. I have very basic needs: Firefox, LibreOffice, GnuCash and Rhythmbox are the main applications I need.

Sruthi Chandran (srud)

1. Who are you? A feminist, a librarian turned Free Software advocate, and a Debian Developer. Part of the Debian Outreach team and DebConf Committee. 2. How did you get introduced to Debian? I got introduced to the free software world and Debian through my husband. I attended many Debian events with him. During one such event, out of curiosity, I participated in a Debian packaging workshop. Just after that I visited a Tibetan community in India, and they mentioned that there was no proper Tibetan font in GNU/Linux. A Tibetan font became my first package in Debian. 3. How long have you been into Debian? I have been contributing to Debian since 2016 and have been a Debian Developer since 2019. 4. Are you using Debian in your daily life? If yes, how? I haven't used any other distro on my laptop since I got introduced to Debian. 5. Do you have any suggestions to improve women's participation in Debian? I have been involved with actively mentoring newcomers to Debian since I started contributing myself. I especially work towards reducing the gender gap inside the Debian and Free Software community in general. In my experience, I believe that visibility of already existing women in the community will encourage more women to participate.
Also I think we should reintroduce mentoring through debian-women.

Tássia Camões Araújo (tassia)

1. Who are you? Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who tries to push myself out of my comfort zone and always find something new to learn. I also love to mentor people on their learning journey. But I don't consider myself a typical geek. My challenge has always been to not get distracted by the next project before I finish the one I have in my hands. That said, I love being part of a community of geeks and feel empowered by it. I love Debian for its technical excellence, and it's always reassuring to know that someone is taking care of the things I don't like or can't do. When I'm not around computers, one of my favorite things is to feel the wind on my cheeks, usually while skating or riding a bike; I also love music, and I'm always singing a melody in my head. 2. How did you get introduced to Debian? As a student, I was privileged to be introduced to FLOSS at the same time I was introduced to computer programming. My university could not afford labs in the usual proprietary software model, and what seemed like a limitation at the time turned out to be a great learning opportunity for me and my colleagues. I joined a student-led initiative to "liberate" our servers and build LTSP-based labs - where a single powerful computer could power a few dozen diskless thin clients. How revolutionary it was at the time! And what an achievement! From students to students, all using Debian. Most of that group became close friends; I've married one of them, and a few of them also found their way to Debian. 3. How long have you been into Debian? I first used Debian in 2001, but my first real connection with the community was attending DebConf 2004. Since then, going to DebConfs has become a habit. It is that moment in the year when I reconnect with the global community and my motivation to contribute is boosted. And you know, in 20 years I've seen people become parents, grandparents, children grow up; we've had our own child and had the pleasure of introducing him to the community; we've mourned the loss of friends and healed together. I'd say Debian is like family, but not the kind you get at random once you're born; Debian is my family by choice. 4. Are you using Debian in your daily life? If yes, how? These days I teach at Vanier College in Montréal. My favorite course to teach is UNIX, which I have the pleasure of teaching mostly using Debian. I try to inspire my students to discover Debian and other FLOSS projects, and we are happy to run a FLOSS club with participation from students, staff and alumni. I love to see these curious young minds put to the service of FLOSS. It is like recruiting soldiers for a good battle, one that can change their lives, as it certainly did mine. 5. Do you have any suggestions to improve women's participation in Debian? I think the most effective way to inspire other women is to give visibility to active women in our community. Speaking at conferences, publishing content, being vocal about what we do, so that other women can see us and see themselves in those positions in the future. It's not easy, and I don't like being in the spotlight. It took me a long time to get comfortable with public speaking, so I can understand the struggle of those who don't want to expose themselves. But I believe that this space of vulnerability can open the way to new connections.
It can inspire trust and ultimately motivate our next generation. It's with this in mind that I publish these lines. Another point we can't neglect is that in Debian we work on a volunteer basis, and this in itself puts us at a great disadvantage. In our societies, women usually take on a heavier load than their partners in terms of caretaking and other invisible tasks, so it is hard to afford the free time needed to volunteer. This is one of the reasons why I bring my son to the conferences I attend, and so far I have received all the support I need to attend DebConfs with him. It is a way to share the caregiving burden with our community - it takes a village to raise a child. Besides allowing us to participate, it also serves to show other women (and men) that you can have a family life and still contribute to Debian. My feeling is that we are not doing super well in terms of diversity in Debian at the moment, but that should not discourage us at all. That's the way it is now, but that doesn't mean it will always be that way. I feel like we go through cycles. I remember times when we had many more active female contributors, and I'm confident that we can improve our ratio again in the future. In the meantime, I just try to keep going, do my part, attract those I can, and reassure those who are too scared to come closer. Debian is a wonderful community, it is a family, and of course a family cannot do without us, the women.

These interviews were conducted via email exchanges in October 2024. Thanks to all the wonderful women who participated in this interview. We really appreciate your contributions to Debian and to Free/Libre software.

10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson

Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out the packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne

Simon McVittie (not affiliated with Freexian) had earlier restructured libglib2.0-dev such that it would absorb more functionality, in particular providing tools for working with .gir files. Those tools practically require being run on their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies, and also made people unhappy when attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as qemu tends not to be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests, later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program of a different architecture by using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne

As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises of how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis of the results. Guillem also performed a cursory analysis and reported other problem categories, such as mismatched directory permissions for directories installed by multiple packages, and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera

Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python to ease the transition: Python 3.12 removed distutils from the stdlib, but many packages were still expecting it to be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure for now, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found around 1000. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests, so transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all of their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.

Miscellaneous contributions
  • Carles prepared the update of the python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. This is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming to have it working end to end in time for a lightning talk at MiniDebConf Toulouse (November), to get feedback from the wider public on this proof of concept.
  • Carles helped one translator to use po-debconf-manager (adding compatibility for bullseye and fixing other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) that they auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually get that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild, reviewing and improving an MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap gets a lot further, but it still fails for every architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto, to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed, to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix its autopkgtest and a dependency problem. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group on LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

9 October 2024

Ben Hutchings: FOSS activity in September 2024

1 October 2024

Ravi Dwivedi: State of the Map Conference in Kenya

Last month, I traveled to Kenya to attend a conference called State of the Map 2024 (SotM for short), an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.
University of Nairobi.
I have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled out the form immediately. I was elated when I was selected for the grant and couldn't wait to attend. The grant had an upper limit of 1200 and covered food, accommodation, travel and miscellaneous expenses such as the visa fee. Pre-travel tasks included obtaining Kenya's eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate at the hotel.
Our group at the conference.
The venue, University of Nairobi Towers, was a tall building, and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor had a nice view of Nairobi's skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.
View of Nairobi's skyline from the open area on the fifth floor.
A library in Mahatma Gandhi wing of the University of Nairobi.
The diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier, during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program. I learned about the extent of OSM use through Pragya and Rabina's talk; about the logistics of running the OSM Board in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini; about the OSM activities in Malawi from Priscilla Kapolo; and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal's lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi/NCR mapping parties.
One of the conference halls where talks took place.
Outside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips, etc. I found out that the milk tea in Kenya (referred to as "white tea") is usually not as strong as in India, so I switched to coffee (which is also called "white coffee" when taken with milk). The food wasn't spicy, but I can't complain :) Fruit juices served as a nice addition to lunch.
One of the lunch meals served during the conference.
At the end of the second day of the conference, there was a surprise in store for us: a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff's "Nursery Rhymes". That day, I was wearing an original Kenyan cricket jersey, one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn't attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. After the party, Piyush went with us to our hotel and we played a few games of UNO.
Minibus which took us from the university to Bao Box restaurant.
This minibus in the picture gave a sense of a real matatu.
I am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Maasai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn't keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa. While most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don't use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don't control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company; this should apply to WhatsApp as well. My suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants already using XMPP. Although it is a good idea to onboard people to free software like XMPP, we could also create a Matrix group and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. It's not ideal - Telegram's servers are proprietary and centralized - but it's certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. On the other hand, self-hosted mailing lists for participants are also a good idea. Finally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM, and visit the beautiful country of Kenya. Stay tuned for the blog post on the Kenya trip. Thanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.

21 September 2024

Jamie McClelland: How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent. Now we want to start using them for, among other things, sending email. I haven't added a new IP address to our mail relays in a while and things seem to change regularly in the world of email, so I'm curious: what's the best 2024 way to warm up IP addresses, particularly using postfix? SendGrid has a nice page on the topic. It establishes the number of messages to send per day. But I'm not entirely sure how to fit messages per day into our setup. We use round robin DNS to direct email to one of several dozen email relay servers using postfix. And unfortunately our DNS software (knot) doesn't have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get). Postfix has some nice knobs for rate limiting, particularly default_destination_recipient_limit and default_destination_rate_delay. If default_destination_recipient_limit is over 1, then default_destination_rate_delay is equal to the minimum delay between sending email to the same domain. So, I'm starting our IP addresses out at 30m - which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it's a start. A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can't fully control the number of messages a given queue receives (due to my inability to control the DNS round robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name. I know I can temporarily set relayhost to a different queue and flush deferred messages; however, as far as I can tell, that doesn't work with active messages. To help mitigate the problem I'm only using our bulk mail queue to warm up IPs, but really, this is not ideal. Suggestions welcome!
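For reference, here is a minimal sketch of the knobs described above, assuming a stock postfix on the relay (the 30m value mirrors my starting point, not a recommendation):
# Delay successive deliveries to the same destination domain by 30
# minutes, i.e. at most 2 messages per hour per destination.
postconf -e 'default_destination_rate_delay = 30m'
# Apply the new setting.
postfix reload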

Update #1 If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:
# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid
After just 12 hours we had thousands of messages piling up. This warm-up method was never going to work without the ability to move them to a faster queue. [Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]
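A minimal sketch of that flush-and-reload step, using the same instance paths as the commands above:
# Flush queued mail in the warmed-up instance.
postqueue -c /etc/postfix-warmedup -f
# Reload the instance so the active queue drains on the correct schedule.
postfix -c /etc/postfix-warmedup reload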

Update #2 After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:
421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply
So I will keep it as 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3 Well, this doesn't seem to be working the way I want it to. When a message arrives faster than the designated rate limit, it remains in the active queue. I'm not entirely sure how the timing is supposed to work, but at this point I'm down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can't these messages that exceed the rate delay be deferred instead?

Update #4 I think I see why I was confused in Update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer. Reloading the instance resets the timer. Every time you muck with active messages, you should reload.

18 September 2024

Jamie McClelland: Gmail vs Tor vs Privacy

A legit email went to spam. Here are the redacted, relevant headers:
[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
	*  1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
	*      [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
	*  3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
	*  2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
	*  0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
        by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
        for <xxxxx@mayfirst.org>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Sep 2024 15:05:53 -0700 (PDT)
At first I thought a Gmail IP address was listed in Spamhaus - I even opened a ticket. But then I realized it wasn't the last hop that Spamhaus is complaining about, it's the first hop, specifically the IP 185.220.101.64, which appears to be a Tor exit node. The sender is using their own client to relay email directly to Gmail. Like any sane person, they don't trust Gmail to protect their privacy, so they are sending via Tor. But WTF, Gmail is not stripping the sending IP address from the header. I'm a big fan of harm reduction and have always considered using your own client to relay email with Gmail as a nice way to avoid some of the surveillance tax Google imposes. However, it seems that if you pursue this option you have two unpleasant choices: I suppose you could also use a VPN, but I doubt the IP reputation of most VPN exit nodes is going to be more reliable than Tor's.
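As an aside, you can reproduce this kind of lookup yourself: DNSBLs like the public zen list are queried by reversing the IPv4 octets and resolving the result as a hostname. A sketch, using the exit node address from the headers above:
# 185.220.101.64 reversed is 64.101.220.185; an answer in 127.0.0.0/8
# means the address is listed. Note that Spamhaus refuses queries from
# some large public resolvers, so use your own resolver.
host 64.101.220.185.zen.spamhaus.org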

4 September 2024

Reproducible Builds: Reproducible Builds in August 2024

Welcome to the August 2024 report from the Reproducible Builds project! Our reports attempt to outline what we ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. LWN: The history, status, and plans for reproducible builds
  2. Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
  3. Distribution news
  4. Mailing list news
  5. diffoscope
  6. Website updates
  7. Upstream patches
  8. Reproducibility testing framework

LWN: The history, status, and plans for reproducible builds The free software newspaper of record, Linux Weekly News, published an in-depth article based on Holger Levsen's talk, "Reproducible Builds: The First Eleven Years", which was presented at the recent DebConf24 conference in Busan, South Korea. Titled "The history, status, and plans for reproducible builds" and written by Jake Edge, LWN's article not only summarises Holger's talk and clarifies its message but links to external information as well. Holger's original talk can also be watched on the DebConf24 webpage (a direct .webm link and his HTML slides are available as well). There are also a significant number of comments on LWN's page. Holger Levsen also headed a scheduled discussion session at DebConf24 on "Preserving *other* build artifacts", addressing a topic where a number of Debian packages produce (or would like to produce) results that are neither the .deb files, the build logs nor the logs of CI tests. This is an issue for reproducible builds as this fourth type of build artifact is typically shipped within the binary .deb packages, and is invariably non-deterministic, thus making the .deb files unreproducible. (A direct .webm link and HTML slides are available.)

Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs Peter Eisentraut wrote a detailed blog post on the subject of "The new PostgreSQL 17 make dist". Like many projects, the PostgreSQL database has previously shipped pre-built parts of its GNU Autotools build system: the reason for this is a mix of "convenience and traditional practice". Peter astutely notes that this arrangement in the build system is quite tricky as:
You need to carefully maintain the different states of "clean source code", "partially built source code", and "fully built source code", and the commands to transition between them.
However, Peter goes on to mention that:
a lot more attention is nowadays paid to the software supply chain. There are security and legal reasons for this. When users install software, they want to know where it came from, and they want to be sure that they got the right thing, not some fake version or some version of dubious legal provenance.
And cites the XZ Utils backdoor as a reason to care about transparent and reproducible ways of distributing and communicating a source tarball and its provenance. Because of this, intermediate build artifacts are henceforth essentially disallowed from PostgreSQL distribution tarballs.

Distribution news In Debian this month, 30 reviews of Debian packages were added, 17 were updated and 10 were removed, adding to our knowledge about identified issues. One issue type was added by Chris Lamb, too. [ ] In addition, an issue was filed to update the Salsa CI pipeline (used by thousands of Debian packages) to no longer test for reproducibility with reprotest's build_path variation. Holger Levsen provided a rationale for this change in the issue, and the change has already been made to the tests being performed by tests.reproducible-builds.org.
In Arch Linux this month, Jelle van der Waa published a short blog post on the topic of investigating creating reproducible images with mkosi, motivated by the desire to make it possible for anyone to re-create the official Arch cloud image bit-by-bit identical on their own machine, as per the reproducible builds definition. In addition, Jelle filed a patch for pacman, the Arch Linux package manager, to respect the SOURCE_DATE_EPOCH environment variable when installing a package.
In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.
In Android news, the IzzyOnDroid project added 49 new rebuilder recipes and now features 256 total reproducible applications, representing 21% of the total offerings in the repository. IzzyOnDroid is an F-Droid-style repository for Android apps: applications in this repository are official binaries built by the original application developers, taken from their respective repositories (mostly GitHub).

Mailing list news From our mailing list this month:
  • Bernhard M. Wiedemann posted a brief message to the list with some helpful information regarding nondeterminism within Rust binaries, positing the use of the codegen-units = 16 default and resulting in a bug being filed in the Rust issue tracker. [ ]
  • Bernhard also wrote to the list, following up to a thread in November 2023, on attempts to make the LibreOffice suite of office applications build reproducibly. In the thread from this month, Bernhard could announce that the four patches previously mentioned have landed in LibreOffice upstream.
  • Fay Stegerman linked the mailing list to a thread she made on the Signal issue tracker regarding whether "device-specific binaries [can] ever be considered meaningfully reproducible". In particular: "the whole part about allow[ing] multiple third parties to come to a consensus on a correct result breaks down completely when correct is device-specific and not something everyone can agree on." [ ]
  • Developer kpcyrd posted an update for the source code indexing project whatsrc.org, announcing that it is now importing packages from live-bootstrap ("a usable Linux system [that is] created with only human-auditable, and wherever possible, human-written, source code") into its database of provenance data.
  • Lastly, Mechtilde Stehmann posted an update to an earlier thread about how Java builds are not reproducible on the armhf architecture, enquiring how they might gain temporary access to such a machine in order to perform some deeper testing. [ ]

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb released versions 274, 275, 276 and 277, uploaded these to Debian, and made the following changes as well:
  • New features:
    • Strip ANSI escapes (usually colour codes) from the output of the Procyon Java decompiler. [ ]
    • Factor out a method for stripping ANSI escapes. [ ]
    • Append output from dumppdf(1) in more cases, avoiding situations where we fall back to a binary diff. [ ]
    • Add support for Perl's IO::Compress::Zip version 2.212. [ ]
  • Bug fixes:
    • Also catch RuntimeError exceptions when importing the PyPDF library so that it, or, crucially, its transitive dependencies, cannot cause diffoscope to traceback at runtime and build time. [ ]
    • Do not call marshal.load() on precompiled Python bytecode as it is, alas, inherently unsafe. Replace it for now with a brief summary of the code section of the .pyc. [ ][ ]
    • Don't include excessive debug output when calling dumppdf(1). [ ]
  • Testsuite-related changes:
    • Don't bother to check the version number in test_python.py: the fixture for this test is fixed. [ ][ ]
    • Update test_zip test fixtures and definitions to support new changes to the Perl IO::Compress library. [ ]
In addition, Mattia Rizzolo updated the available architectures for a number of test dependencies [ ], Sergei Trofimovich fixed an issue to avoid diffoscope crashing when hashing directory symlinks [ ], and Vagrant Cascadian proposed GNU Guix updates for diffoscope versions 275, 276 and 277.

Website updates There were a rather substantial number of improvements made to our website this month, including:
  • Alba Herrerias:
    • Substantially extend the guidance on the Contribute page. [ ]
  • Chris Lamb:
    • Set the future: true configuration value so we render all files and documents in the website, regardless of whether they have a date property in the future. After all, we don't re-generate the website on a timer, and have other ways of making unpublished, draft posts. [ ][ ]
  • Fay Stegerman:
  • hulkoba:
  • kpcyrd:
  • Mattia Rizzolo:
  • Pol Dellaiera:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:
  • Temporarily install the openssl-provider-legacy package for the Debian unstable environments for running diffoscope due to Debian bug #1078944. [ ][ ][ ][ ]
  • Mark Debian armhf architecture nodes as being down due to the proxy being down. [ ][ ]
  • Detect proxy failures. [ ][ ][ ]
  • Run the index-buildinfo for the builtin-pho script with the -q switch. [ ]
  • Disable all Arch Linux reproducible jobs. [ ]
In addition, Mattia Rizzolo updated the website configuration to install the ruby-jekyll-sitemap package as it is now used in the website [ ], Roland Clobus updated the script to build Debian live images to treat openQA issues as warnings [ ], and Vagrant Cascadian marked the cbxi4b node as down [ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

31 August 2024

Vincent Bernat: Fixing layout shifts caused by web fonts

In 2020, Google introduced Core Web Vitals metrics to measure some aspects of real-world user experience on the web. This blog has consistently achieved good scores for two of these metrics: Largest Contentful Paint and Interaction to Next Paint. However, optimizing the third metric, Cumulative Layout Shift, which measures unexpected layout changes, has been more challenging. Let's face it: optimizing for this metric is not really useful for a site like this one. But getting a better score is always a good distraction. To prevent the flash of invisible text when using web fonts, developers should set the font-display property to swap in @font-face rules. This method allows browsers to initially render text using a fallback font, then replace it with the web font after loading. While this improves the LCP score, it causes content reflow and layout shifts if the fallback and web fonts are not metrically compatible. These shifts negatively affect the CLS score. CSS provides properties to address this issue by overriding font metrics when using fallback fonts: size-adjust, ascent-override, descent-override, and line-gap-override. Two comprehensive articles explain each property and their computation methods in detail: Creating Perfect Font Fallbacks in CSS and Improved font fallbacks.

Interactive tuning tool Instead of computing each property from font average metrics, I put together a tool for interactively tuning fallback fonts.1

Instructions
  1. Load your custom font.
  2. Select a fallback font to tune.
  3. Adjust the size-adjust property to match the width of your custom font with the fallback font. With a proportional font, it is not possible to achieve a perfect match.
  4. Fine-tune the ascent-override property. Aim to align the final dot of the last paragraph while monitoring the font s baseline. For more precise adjustment, disable the option.
  5. Modify the descent-override property. The goal is to make the two boxes match. You may need to alternate between this and the previous property for optimal results.
  6. If necessary, adjust the line-gap-override property. This step is typically not required.
The process needs to be repeated for each fallback font. Some platforms may not include certain fonts. Notably, Android lacks most fonts found in other operating systems. It replaces Georgia with Noto Serif, which is not metrically-compatible.

Tool

This tool is not available from the Atom feed.

Results For the body text of this blog, I get the following CSS definition:
@font-face {
  font-family: Merriweather;
  font-style: normal;
  font-weight: 400;
  src: url("../fonts/merriweather.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Noto Serif"), local("Droid Serif");
  size-adjust: 98.3%;
  ascent-override: 99%;
  descent-override: 27%;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Georgia");
  size-adjust: 106%;
  ascent-override: 90.4%;
  descent-override: 27.3%;
}
font-family: Merriweather, "Fallback for Merriweather", serif;
After a month, the CLS metric improved to 0:
Recent Core Web Vitals scores for vincent.bernat.ch, showing all six metrics as green; notably, the Cumulative Layout Shift is 0.

About custom fonts Using safe web fonts or a modern font stack is often simpler. However, I prefer custom web fonts. Merriweather and Iosevka, which are used in this blog, enhance the reading experience. An alternative approach could be to use Georgia as a serif option. Unfortunately, most default monospace fonts are ugly. Furthermore, paragraphs that combine proportional and monospace fonts can create visual disruption. This occurs due to mismatched vertical metrics or weights. To address this issue, I adjust Iosevka's metrics and weight to align with Merriweather's characteristics.

  1. Similar tools already exist, like the Fallback Font Generator, but they were missing a few features, such as the ability to load the fallback font or to have decimals for the CSS properties. And no source code.

21 August 2024

Russ Allbery: Review: These Burning Stars

Review: These Burning Stars, by Bethany Jacobs
Series: Kindom Trilogy #1
Publisher: Orbit
Copyright: October 2023
ISBN: 0-316-46342-6
Format: Kindle
Pages: 430
These Burning Stars is a science fiction thriller with cyberpunk vibes. It is Bethany Jacobs's first novel and the first of an expected trilogy, and won the 2024 Philip K. Dick Award for the best SF paperback original published in the US. Generation starships brought humanity to the three star systems of the Treble, where they've built a new and thriving culture of billions. The Treble is ruled by the Kindom, a tripartite government structure built around the worship of six gods and the aristocratic power of the First Families. The Clerisy handle religion, the Secretaries run the bureaucracy, and the Cloaksaan enforce the decisions of the other branches. The Nightfoots are one of the First Families. They control sevite, the propellant required to move between the systems of the Treble now that the moon Jeve and the sole source of natural jevite has been destroyed. Esek Nightfoot is a cleric, theoretically following the rules of the Clerisy, but she has made a career of training cloaksaan. She is mercurial, powerful, ruthless, ambitious, politically well-connected, and greatly feared. She is also obsessed with a person named Six: an orphan she first encountered at a training school who was too young to have a gender or a name but who was already one of the best fighters in the school. In the sort of manipulative challenge typical of Esek, she dangled the offer of a place as a student and challenged the child to learn enough to do something impressive. The subsequent twenty years of elusive taunts and mysterious gifts from the impossible-to-locate Six have driven Esek wild. Cleric Chono was beside Esek for much of that time. One of Six's classmates and another of Esek's rescues, Chono is the rare student who became a cleric rather than a cloaksaan. She is pious, cautious, and careful, the opposite of Esek's mercurial rage, but it's impossible to spend that much time around the woman and not be affected and manipulated by her. As this story opens, Chono is summoned by the First Cleric to join Esek on an assignment: recover a data coin that was stolen from a pirate raid on the Nightfoot compound. He refuses to tell them what data is on it, only saying that he believes it could be used to undermine public trust in the Nightfoot family. Jun is a hacker with considerably fewer connections to power or government and no desire to meet any of these people. She and her partner Liis make a dubiously legal living from smaller, quieter jobs. Buying a collection of stolen data coins for an archivist consortium is riskier than she prefers, but she's been tracking down rumors of this coin for months. The deal is worth a lot of money, enough to make a huge difference for her family. This is the second book I've read recently with strong cyberpunk vibes, although These Burning Stars mixes them with political thriller. This is a messy world with complicated political and religious systems, a lot of contentious history, and vast inequality. The story is told in two interleaved time sequences: the present-day fight over the data coin and the information that it contains, and a sequence of flashbacks telling the history of Esek's relationship with Six and Chono. Jun's story is the most cyberpunk and the one I found the most enjoyable to read, but Chono is a good viewpoint character for Esek's vicious energy and abusive charisma. Six is not a viewpoint character. For most of the book, they're present mostly in shadows, glimpses, and consequences, but they're the strongest character of the book.
Both Esek and Six are larger than life, creatures of legend stuffed into mundane politics but too full of strong emotions, both good and bad, to play by any of the rules. Esek has the power base and access to the levers of government, but Six's quiet competence and mercilessly targeted morality may make them the more dangerous of the pair. I found the twisty political thriller part of this book engrossing and very difficult to put down, but it was also a bit too much drama for me in places. Jacobs has some surprises in store, one of which I did not expect at all, and they're set up beautifully and well-done within the story, but Esek and Six become an emotional star that the other characters orbit around and are in danger of getting pulled into. Chono is an accomplished and powerful character in her own right, but she's also an abuse victim, and while those parts are realistic, I didn't entirely enjoy reading them. There is quiet competence here alongside the drama, but I think I wanted the balance of emotion to tip a bit more towards the competence. There is one thing that Jacobs does with the end of the book that greatly impressed me. Unfortunately I can't even hint at it for fear of spoilers, but the ending is unsettling in a way that I found surprising and thought-provoking. I think what I can say is that this book respects the intelligence and skill of secondary characters in a way that I think is rare in a story with such overwhelming protagonists. I'm still thinking about that, and it's going to pull me right into the sequel. This is not going to be to everyone's taste. Esek is a viewpoint character and she can be very nasty. There's a lot of violence and abuse, including one rather graphic fight scene that I thought dragged on much longer than it needed to. But it's a satisfying, complex story with a true variety of characters and some real surprises. I'm glad I read it. Followed by On Vicious Worlds, not yet published as I write this. Content warnings: emotional and physical abuse, graphic violence, off-screen rape and sexual abuse of minors. Rating: 7 out of 10

19 August 2024

Matthew Garrett: Client-side filtering of private data is a bad idea

(The issues described in this post have been fixed; I have not exhaustively researched whether any other issues exist.)

Feeld is a dating app aimed largely at alternative relationship communities (think "classier Fetlife" for the most part), so unsurprisingly it's fairly popular in San Francisco. Their website makes the claim:

Can people see what or who I'm looking for?
No. You're the only person who can see which genders or sexualities you're looking for. Your curiosity and privacy are always protected.


which is based on you being able to restrict searches to people of specific genders, sexualities, or relationship situations. This sort of claim is one of those things that just sits in the back of my head worrying me, so I checked it out.

First step was to grab a copy of the Android APK (there are multiple sites that scrape them from the Play Store) and run it through apk-mitm - Android apps by default don't trust any additional certificates in the device certificate store, and also frequently implement certificate pinning. apk-mitm pulls apart the apk, looks for known http libraries, disables pinning, and sets the appropriate manifest options for the app to trust additional certificates. Then I set up mitmproxy, installed the cert on a test phone, and installed the app. Now I was ready to start.
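The setup is roughly the following (filenames are illustrative; each tool's defaults are otherwise used):
# Rewrite the APK so it trusts user-installed CA certificates.
apk-mitm feeld.apk
# Start mitmproxy (listens on port 8080 by default); point the test
# phone's wifi proxy here and install mitmproxy's CA cert on the phone.
mitmproxy
# Install the patched APK produced by apk-mitm on the test phone.
adb install feeld-patched.apk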

What became immediately clear was that the app was using graphql to query. What was a little more surprising is that it appears to have been implemented such that there's no server state - when browsing profiles, the client requests a batch of profiles along with a list of profiles that the client has already seen. This has the advantage that the server doesn't need to keep track of a session, but also means that queries just keep getting larger and larger the more you swipe. I'm not a web developer, I have absolutely no idea what the tradeoffs are here, so I point this out as a point of interest rather than anything else.

Anyway. For people unfamiliar with graphql, it's basically a way to query a database and define the set of fields you want returned. Let's take the example of requesting a user's profile. You'd provide the profile ID in question, and request their bio, age, rough distance, status, photos, and other bits of data that the client should show. So far so good. But what happens if we request other data?

graphql supports introspection to request a copy of the database schema, but this feature is optional and was disabled in this case. Could I find this data anywhere else? Pulling apart the apk revealed that it's a React Native app, so effectively a framework for allowing writing of native apps in Javascript. Sometimes you'll be lucky and find the actual Javascript source there, but these days it's more common to find Hermes blobs. Fortunately hermes-dec exists and does a decent job of recovering something that approximates the original input, and from this I was able to find various lists of database fields.

So, remember that original FAQ statement, that your desires would never be shown to anyone else? One of the fields mentioned in the app was "lookingFor", a field that wasn't present in the default profile query. What happens if we perform the incredibly complicated hack of exporting a profile query as a curl statement, add "lookingFor" into the set of requested fields, and run it?
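Concretely, the replayed request looks roughly like this; the endpoint, token, and field names are illustrative reconstructions rather than Feeld's actual schema:
# Replay an exported profile query with one extra field appended.
curl -s 'https://api.example.com/graphql' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer SESSION_TOKEN' \
  -d '{"query":"query { profile(id: \"1234\") { displayName bio age lookingFor } }"}'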

Oops.

So, point 1 is that you can't simply protect data by having your client not ask for it - private data must never be released. But there was a whole separate class of issue that was an even more obvious issue.

Looking more closely at the profile data returned, I noticed that there were fields there that weren't being displayed in the UI. Those included things like "ageRange", the range of ages that the profile owner was interested in, and also whether the profile owner had already "liked" or "disliked" your profile (which means a bunch of the profiles you see may already have turned you down, but the app simply didn't show that). This isn't ideal, but what was more concerning was that profiles that were flagged as hidden were still being sent to the app and then just not displayed to the user. Another example of this is that the app supports associating your profile with profiles belonging to partners - if one of those profiles was then hidden, the app would stop showing the partnership, but was still providing the profile ID in the query response and querying that ID would still show the hidden profile contents.

Reporting this was inconvenient. There was no security contact listed on the website or in the app. I ended up finding Feeld's head of trust and safety on Linkedin, paying for a month of Linkedin Pro, and messaging them that way. I was then directed towards a HackerOne program with a link to terms and conditions that 404ed, and it took a while to convince them I was uninterested in signing up to a program without explicit terms and conditions. Finally I was just asked to email security@, and successfully got in touch. I heard nothing back, but after prompting was told that the issues were fixed - I then looked some more, found another example of the same sort of issue, and eventually that was fixed as well. I've now been informed that work has been done to ensure that this entire class of issue has been dealt with, but I haven't done any significant amount of work to ensure that that's the case.

You can't trust clients. You can't give them information and assume they'll never show it to anyone. You can't put private data in a database with no additional acls and just rely on nobody ever asking for it. You also can't find a single instance of this sort of issue and fix it without verifying that there aren't other examples of the same class. I'm glad that Feeld engaged with me earnestly and fixed these issues, and I really do hope that this has altered their development model such that it's not something that comes up again in future.

(Edit to add: as far as I can tell, pictures tagged as "private" which are only supposed to be visible if there's a match were appropriately protected, and while there is a "location" field that contains latitude and longitude this appears to only return 0 rather than leaking precise location. I also saw no evidence that email addresses, real names, or any billing data was leaked in any way)


9 August 2024

Kalyani Kenekar: One Backpack, One Passport: My First Solo Trip

Planning A Self-Organized Solo Trip You know the movie Queen? In that movie, the actor Kangana Ranaut plays the role of Rani Mehra, a 24-year-old Punjabi woman - a simple, homely girl who was always reliant on her family. Similar to Rani, I too rarely ventured out without my parents and often needed my younger sibling by my side. Inspired by her transformation, I decided it was time to take control of my own story and discover who I truly am. Queen movie picture of Kangana

Trip Requirements

My First Passport The journey began with a significant first step: obtaining my first passport. Never having had one before, I scheduled the nearest available interview date, June 29 2022. This meant traveling to Solapur, a city 309 km from my hometown, accompanied by my father. After successfully completing the interview, I received my passport on July 14 2022.

Select A Country, Booking Flights And Accommodation Excited and ready to embark on my adventure, I planned a trip to Albania and booked the flight tickets. Why? I had heard from friends that it was a beautiful European country with beaches and other attractions, and importantly, it didn't require a visa for Indian citizens and was more affordable than other European destinations. Before heading to Albania, I planned an overnight stop in Abu Dhabi with a transit visa, thanks to a friend who knew the process for obtaining it. Some of my friends were also traveling to Europe at the same time, with plans quite close to mine, but I only realized that after the trip.

Day 1, Starting The Experience On July 20, 2022, I started my journey by traveling from Pune, Maharashtra, to Delhi, where my brother lives. He came to see me off at the airport, adding a touch of warmth and support to the beginning of my solo adventure. Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed at a backpacker hostel named Zostel, Paharganj, Delhi to rest. During my stay, I noticed that many travelers at the hostel carried rucksacks, which sparked a desire in me to get one for my own trip to Europe. Up until then, I had always shopped with my mom and had never bought anything on my own. Inspired by the travelers, I set out to find a suitable rucksack. I traveled alone by metro from Paharganj to Rohini to visit a Decathlon store, where I purchased a 50-liter rucksack. This was a significant step in preparing for my European adventure and marked a milestone in my journey of self-reliance. Rucksack description tag Kalyani's backpack

Day 2, Flying To Abu Dhabi The following day, July 21 2022, I had a flight to Abu Dhabi. I spent the night at the hostel to rest before my journey. On the day of the flight, I needed to reach the airport by 3 PM, and a friend kindly came to drop me off. With my rucksack packed and excitement building, I was ready for the next leg of my adventure. When we arrived at the airport, my friend saw me off, marking the start of my international journey. With mom-made spices, chutneys, and chilli flakes packed for comfort, I completed my immigration process in about two and a half hours. I then settled at the gate for my flight, feeling a mix of excitement and anxiety as thoughts raced through my mind. Mom-made spices Passport and boarding pass To ease my nerves, I struck up a conversation with a man seated nearby who was also traveling to Abu Dhabi for work. He provided helpful information about safety and transportation in Abu Dhabi, which reassured me. With the boarding process complete and my anxiety somewhat eased, I found my window seat on the flight and settled in, excited for the journey ahead. Next to me was a young man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining factory. We had an engaging conversation about work culture in Abu Dhabi and recruitment from India. Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and began finding my way to the hotel Premier Inn Abu Dhabi, which was in the airport area. To my surprise, I ran into the same man from the flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly accepted since navigating an unfamiliar city with a short acquaintance felt safer. At the hotel gate, he asked if I had local currency (dirhams) for payment, as sometimes online transactions can fail. That hadn't crossed my mind, and I realized I might be left stranded if a transaction failed. Recognizing his help as a godsend, I asked if he could lend me some dirhams, promising to transfer the amount later. He kindly assured me I could pay him back once I reached the hotel room. With that relief, I checked into the hotel, feeling deeply grateful for the unexpected assistance, and transferred the money to him after getting to my room. Dirham money Hotel room Kalyani in the hotel room

Day 3, Flying And Arriving In Tirana Once in the hotel room, I found it hard to sleep, anxious about waking up on time for my flight. I set an alarm to wake up early, but my subconscious mind kept me alert, and I woke up before the alarm went off. I freshened up and went down for breakfast, where I found some vegetarian options like idli-sambar and bread with butter, along with some morning tea. After breakfast, I headed back to the airport, ready to catch my flight to my final destination: Tirana, Albania. Breakfast at hotel Airport area I reached Tirana, Albania after a six-hour flight, feeling exhausted and suffering from a headache. The air pressure had blocked my ears, and jet lag added to my fatigue. After collecting my checked luggage, I headed to the first ATM at the airport. Struggling to insert my card, I asked a nearby gentleman for help. He tried his best, but my card got stuck inside the machine. Panic set in as I worried about how I would survive without money. Taking a deep breath, I found an airport employee and explained the situation. The gentleman stayed with me, offering support and repeatedly apologizing for his mistake. However, it wasn't his fault - the ATM was out of order, which I hadn't noticed. My focus was solely on retrieving my ATM card. The airport employee worked diligently, using a hairpin to carefully extract my card. Finally, the card was freed, and I felt an immense sense of relief, grateful for the help of these kind strangers. I used another ATM, successfully withdrew money, and then went to an airport mobile SIM shop to buy a new SIM card for local internet and connectivity. SIM plans

Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country I had booked a stay at a backpacker hostel near the city center of Tirana. After sorting out the ATM and SIM card issues, I searched for a bus or any transport to get there. It was quite late, around 8:30 PM, and being in a new city, I was in a hurry. I saw a bus nearly leaving the airport, stopped it, and asked if it went to the city center. They gave me the green flag, so I boarded the airport service bus and reached the city center. Feeling very tired, I discovered that the hostel was about an hour and a half away by walking. Deciding to take a cab, I faced a challenge as the driver couldn't understand my English or accent. Using a mobile translator to convert my address from English to Albanian, I finally communicated my destination to him. With that sorted out, I headed to the Blue Door Backpacker Hostel, arrived around 9 PM, relieved to have finally reached my destination, and checked in. Hostel gate Street in Tirana I found my top bunk bed, only to realize I had booked a mixed-gender dormitory. This detail had completely escaped my notice during the booking process, and I felt unsure about how to handle the situation. Coincidentally, my experience mirrored what Kangana faced in the movie Queen. Feeling acidic due to an empty stomach and the exhaustion of heavy traveling, I wasn't up to cooking in the hostel's kitchen. I asked the front desk about the nearest restaurant. It was nearly 9:30 PM, and the streets were deserted. To avoid any mishaps like in the movie Queen, I kept my passport securely locked in my bag, ensuring it wouldn't be a victim of theft. Venturing out for dinner, I felt uneasy on the quiet streets. I eventually found a restaurant recommended by the hostel, but the menu was almost entirely non-vegetarian. I struggled to ask about vegetarian options and was uncertain if any dishes contained eggs, as some people consider eggs to be vegetarian. Feeling frustrated and unsure, I left the restaurant without eating. I noticed a nearby grocery store that was about to close and managed to get a few extra minutes to shop. I bought some snacks, wafers, milk, and tea bags (though I couldn't find tea powder to make Indian-style tea). Returning to the hostel, I made do with wafers, cookies, and milk for dinner. That day was incredibly tough for me; filled with exhaustion and struggle in a new country, I was on the verge of tears. I made a video call home before sleeping on the top bunk bed. It was a new experience for me, sharing a room with both unknown men and women. I kept my passport safe inside my purse and under my pillow while sleeping, staying very conscious about its security.

Day 5, Exploring Nearby Places I woke up the next day at noon. After I had some coffee, the girl managing the hostel asked if I wanted breakfast. She offered curd with cornflakes, which I declined because I don't like curd. Instead, I ordered a pizza from a vegetarian pizza place with her help, and I started feeling better. I met some people in the hostel, some from Syria and others from Italy. I struggled to understand their accents but kept pushing myself to get involved in their discussions. Despite the challenges, I felt more at ease and was slowly adapting to my new environment. In the evening I went out of the hostel to buy some vegetables to cook something. I searched for shops and found some potatoes, tomatoes, and rice. I decided to cook khichdi, an Indian dish made with rice, and added some chili flakes I had brought from home. After preparing my dinner, I ate and then went to sleep again. Vegetable shop Cooking in the kitchen Food

Day 6, Tirana's Recent History The next day, I planned to explore the city and visited Bunkart-1, a fascinating museum in a massive underground bunker from the communist era. Originally built as a shelter for Albania's political and military elite, it now offers a unique glimpse into the country's history under Enver Hoxha's oppressive regime. The museum's exhibits include historical artifacts, photographs, and multimedia displays that detail the lives of Albanians during that time. Walking through the dimly lit corridors, I felt the weight of history and gained a deeper understanding of Albania's past. Photos from Bunkart-1

Day 7-8, Meeting Friends From India The next day, I met Chirag by chance; he was returning from the Debian Conference 2022 held in Prizren, Kosovo, and staying at the same hostel. When I encountered him, he was talking on the phone, and I recognized he was Indian by his accent. I introduced myself, and we discovered we had some mutual friends. Chirag told me that our common friend, Raju, was also coming to stay at the hostel the next day. This news made me feel relaxed and happy to have people I knew around. When Raju arrived, the three of us - Chirag, Raju, and I - planned to have dinner at an Indian restaurant and explore Tirana city. I had a great time talking and enjoying their company. Friends on the street

Day 9-10, Meeting More Friends Raju had a ticket to leave soon, so Chirag and I made a plan to visit Shkodër and the nearby Komani Lake for kayaking. We started our journey early in the morning by bus and reached Shkodër. There, we met new friends from the conference, Pavit and Abraham, who were already there. We had dinner together and enjoyed an ice cream treat from Chirag. Friends at dinner

Day 12, Kayaking And Saying Goodbye To Friends The next day, Pavit and Abraham had a flight back to India, so Chirag and I went to Komani Lake. We had an adventurous time kayaking, even though neither of us knew how to swim. We took a ferry through the backwaters to the island on Komani Lake and enjoyed a fantastic adventure together. After our trip, Chirag returned to Tirana for his flight back to India, leaving me to continue my journey alone. Lake with mountains Kayak

Day 13, Climbing Rozafa Castle Stopping at Shkodër, I visited Rozafa Castle. Despite the language barrier, as most locals only spoke Albanian, people around me guided me correctly on how to get there. At times, I used applications like Google Translate to communicate. To read signs or hotel menus, I used Google Photos' language converter. I even used the audio converter to understand and speak some basic Albanian phrases. View from the top of Rozafa Castle I took a bus from Shkodër to the southern part of Albania, heading to Sarandë. The journey lasted about five to six hours, and I had booked a stay at Mona's Hostel. Upon arrival, I met Eliza from America, and we went together to Ksamil Beach, spending a wonderful day there.

Day 14, Vlora Beach: Beachside Cycling Next, I traveled to Vlorë, where I stayed for one day. During my time there, I enjoyed beachside cycling with a cycle provided by the hostel owner and spent some time feeding fish. I also met a fellow traveler from Delhi who had brought along some preserved Indian curry. He kindly shared it with me, which was a welcome change after nearly 15 days without authentic Indian cuisine, except for what I had cooked myself in various hostels. Sunset on the beach; Kalyani on the beach; beach with street; beachside cycling

Day 15-16, Visiting Durrës, Traveling Back To Tirana I then visited Durrës, exploring its beautiful beaches, before heading back to Tirana one day before my flight home. On the day of my flight, my alarm didn't go off, and I woke up late at the hostel. In a frantic rush, I packed everything in just five minutes and dashed toward the city center to catch the bus to the airport. If I had been just five minutes later, I would have missed the bus. Thankfully, I managed to stop it just in time and began my journey back home, reflecting on the incredible adventure I had experienced. Fortunately, I wasn't late; I arrived at the airport just in time. After clearing immigration, I boarded my flight, which had a layover in Warsaw, Poland. The journey from Tirana to Warsaw took about two and a half hours, followed by a seven to eight-hour flight from Poland back to India. Once I arrived in Delhi, I returned to Zostel and booked a train ticket to Aurangabad for three days later.

Backview This trip was an incredible adventure for me. I never imagined I could accomplish something like this, but I did. Meeting diverse people, experiencing different cultures, and learning so much made this journey truly unforgettable. Looking back, I realize how much I've grown from this experience. Although I may have more opportunities to travel abroad in the future, this trip will always hold a special place in my heart. The memories I made and the incredible people I met along the way are irreplaceable. This experience goes beyond what I can express through this blog or words; it was incredibly precious to me. Every moment of this journey is etched in my memory, and I am grateful for every part of it.

4 August 2024

Matthias Klumpp: Freedesktop Specs Website Update

The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected in, to increase their app's target audience) a lot easier. Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site had become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can't be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later. So, long story short: most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process, and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was, so I kept its scope somewhat narrow. DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website have also been added to the index and rendered on the website now. Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but: if you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it! Thank you, and I hope you enjoy reading the specifications in better rendering and a more coherent look!
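As a small illustration of the new pipeline, this is roughly how a DocBook specification is rendered with DAPS (the DC-example "doc config" filename is hypothetical):
# Build HTML output for one specification from its DAPS doc config file.
daps -d DC-example html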

8 July 2024

Russ Allbery: Review: Beyond Control

Review: Beyond Control, by Kit Rocha
Series: Beyond #2
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 364
Beyond Control is science fiction erotica (dystopian erotic romance, per the marketing) and a direct sequel to Beyond Shame. These books shift protagonists with each volume and enough of the world background is explained that you could start here, but there are significant spoilers for the previous book. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for. This is one of those reviews that I write because I'm stubborn about reviewing all the books I read, not because it's likely to be useful to anyone. There are also considerably more spoilers for the shape of the story than I normally include, so be warned. The Beyond series is erotica. Specifically, so far, consensual BDSM erotica with bisexuality but otherwise typical gender stereotypes. The authors (Kit Rocha is a pen name for Donna Herren and Bree Bridges) are women, so it's more female gaze than male gaze, but by erotica I don't mean romance with an above-average number of steamy scenes. I mean it felt like half the book by page count was descriptions of sex. This review is rather pointless because, one, I'm not going to review the sex that's the main point of the book, and two, I skimmed all the sex and read it for the story because I'm weird. Beyond Shame got me interested in these absurdly horny people and their post-apocalyptic survival struggles in the outskirts of a city run by a religious surveillance state, and I wanted to find out what happened next. Besides, this book promised to focus on my favorite character from the first novel, Lex, and I wanted to read more about her. Beyond Control uses a series pattern that I understand is common in romance but which is not often seen in SFF (my usual genre): each book focuses on a new couple adjacent to the previous couple, while the happily ever after of the previous couple plays out in the background. In this case, it also teases the protagonists of the next book. I can see why romance uses this structure: it's an excuse to provide satisfying interludes for the reader. In between Lex and Dallas's current relationship problems, one gets to enjoy how well everything worked out for Noelle and how much she's grown. In Beyond Shame, Lex was the sort-of partner of Dallas O'Kane, the leader of the street gang that is running Sector Four. (Picture a circle surrounding the rich-people-only city of Eden. That circle is divided into eight wedge-shaped sectors, which provide heavy industries, black-market pleasures, and slums for agricultural workers.) Dallas is an intensely possessive, personally charismatic semi-dictator who cultivates the image of a dangerous barbarian to everyone outside and most of the people inside Sector Four. Since he's supposed to be one of the good guys, this is more image than reality, but it's not entirely disconnected from reality. This book is about Lex and Dallas forming an actual relationship, instead of the fraught and complicated thing they had in the first book. I was hoping that this would involve Dallas becoming less of an asshole. It unfortunately does not, although some of what I attributed to malice may be adequately explained by stupidity. I'm not sure that's an improvement. Lex is great, just like she was in the first book. It's obvious by this point in the series that she does most of the emotional labor of keeping the gang running, and her support is central to Dallas's success. 
Like most of the people in this story, she has a nasty and abusive background that she's still dealing with in various ways. Dallas's possessiveness is intensely appealing to her, but she wants that possessiveness on different terms than Dallas may be willing to offer, or is even aware of. Lex was, I thought, exceptionally clear about what she wanted out of this relationship. Dallas thinks this is entirely about sex, and is, in general, dumber than a sack of hammers. That means fights. Also orgies, but, well, hopefully you knew what you were getting into if you picked up this book. I know, I know, it's erotica, that's the whole point, but these people have a truly absurd amount of sex. Eden puts birth control in the water supply, which is a neat way to simplify some of the in-story consequences of erotica. They must be putting aphrodisiacs in the water supply as well. There was a lot of sector politics in this book that I found way more interesting than it had any right to be. I really like most of these people, even Dallas when he manages to get his three brain cells connected for more than a few minutes. The events of the first book have a lot of significant fallout, Lex continues being a badass, the social dynamics between the women are very well-done (and pass the Bechdel test yet again even though this is mostly traditional-gender-role erotica), and if Dallas had managed to understand what he did wrong at a deeper-than-emotional level, I would have rather enjoyed the non-erotica story parts. Alas. I therefore wouldn't recommend this book even if I were willing to offer any recommendations about erotica (which I'm not). I was hoping it was going somewhere more rewarding than it did. But I still kind of want to read another one? I am weirdly fascinated with the lives of these people. The next book is about Six, who has the potential to turn into the sort of snarky, cynical character I love reading about. And it's not that hard to skim over the orgies. Maybe Dallas will get one additional brain cell per book? Followed by Beyond Pain. Rating: 5 out of 10

7 July 2024

Russ Allbery: Review: Welcome to Boy.Net

Review: Welcome to Boy.Net, by Lyda Morehouse
Series: Earth's Shadow #1
Publisher: Wizard's Tower Press
Copyright: April 2024
ISBN: 1-913892-71-9
Format: Kindle
Pages: 355
Welcome to Boy.Net is a science fiction novel with cyberpunk vibes, the first of a possible series. Earth is a largely abandoned wasteland. Humanity has survived in the rest of the solar system and spread from Earth's moon to the outer planets. Mars is the power in the inner system, obsessed with all things Earth and effectively run by the Earth Nations' Peacekeeping Force, the ENForcers. An ENForcer soldier is raised in a creche from an early age, implanted with cybernetic wetware and nanite enhancements, and extensively trained to be an elite fighting unit. As befits a proper military, every ENForcer is, of course, male.

The ENForcers thought Lucia Del Toro was a good, obedient soldier. They also thought she was a man. They were wrong about those and many other things. After her role in an atrocity that named her the Scourge of New Shanghai, she went AWOL and stole her command ship. Now she and her partner/girlfriend Hawk, a computer hacker from Luna, make a living with bounty hunting jobs in the outer system. The ENForcers rarely cross the asteroid belt; the United Miners see to that. The appearance of an F-class ENForcer battle cruiser in Jupiter orbit is a very unpleasant surprise. Lucia and Hawk hope it has nothing to do with them. That hope is dashed when ENForcers turn up in the middle of their next job: a bounty to retrieve an AI eye.

I first found Lyda Morehouse via her AngeLINK cyberpunk series, the last of which was published in 2011. Since then, she's been writing paranormal romance and urban fantasy as Tate Hallaway. This return to science fiction is an adventure with trickster hackers, throwback anime-based cowboy bars, tense confrontations with fascist thugs, and unexpected mutual aid, but its core is a cyberpunk look at the people who are unwilling or unable to follow the rules of social conformity. Gender conformity, specifically.

Once you understand what this book is about, Welcome to Boy.Net is a great title, but I'm not sure it serves its purpose as a marketing tool. This is not the book that I would have expected from that title in isolation, and I'm a bit worried that people who would like it might pass it by. Inside the story, Boy.Net is the slang term for the cybernetic network that links all ENForcers. If this were the derogatory term used by people outside the ENForcers, I could see it, but it's what the ENForcers themselves call it. That left me with a few suspension of disbelief problems, since the sort of macho assholes who are this obsessed with male gender conformance usually consider "boys" to be derogatory and wouldn't call their military cybernetic network something that sounds that belittling, even as a joke. It would be named after some sort of Orwellian reference to freedom, or something related to violence, dominance, brutality, or some other "traditional male" virtue.

But although this term didn't work for me as world-building, it's a beautiful touch thematically. What Morehouse is doing here is the sort of concretized metaphor that science fiction is so good at: an element of world-building that is both an analogy for something the reader is familiar with and is also a concrete piece of world background that follows believable rules and can be manipulated by the characters. Boy.Net is trying to reconnect to Lucia against her will. If it succeeds, it will treat the body modifications she's made as damage and try to reverse all of them, attempting to convert her back to the model of an ENForcer.
But it is also a sharp metaphor for how gender roles are enforced in our world: a child assigned male is connected to a pervasive network of gender expectations and is programmed, shaped, and monitored to match the social role of a boy. Even if they reject those expectations, the gender role keeps trying to reconnect and convert them back.

I really enjoyed Morehouse's handling of the gender dynamics. It's an important part of the plot, but it's not the only thing going on or the only thing the characters think about. Lucia is occasionally caught by surprise by well-described gender euphoria, but mostly gender is something other people keep trying to impose on her because they're obsessed with forcing social conformity.

The rest of the book is a fun romp with a few memorable characters and a couple of great moments with unexpected allies. Hawk and Lucia have an imperfect but low-drama relationship that features a great combination of insight and the occasional misunderstanding. It's the kind of believable human relationship that I don't see very much in science fiction, written with the comfortable assurance of an author with over a dozen books under her belt. Some of the supporting characters are also excellent, including a non-binary deaf hacker that I wish had been a bit more central to the story.

This is not the greatest science fiction novel I've read, but it was entertaining throughout and kept me turning the pages. Recommended if you want some solar-system cyberpunk in your life.

Welcome to Boy.Net reaches a conclusion of sorts, but there's an obvious hook for a sequel and a lot of room left for more stories. I hope enough people buy this book so that I can read the sequel.

Rating: 7 out of 10

4 July 2024

Arturo Borrero González: Wikimedia Toolforge: migrating Kubernetes from PodSecurityPolicy to Kyverno

[Image: The Château de Valère and the Haut de Cry in July 2022. Christian David, CC BY-SA 4.0, via Wikimedia Commons]

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Summary: this article shares the experience and learnings of migrating away from Kubernetes PodSecurityPolicy to Kyverno in the Wikimedia Toolforge platform.

Wikimedia Toolforge is a Platform-as-a-Service, built with Kubernetes, and maintained by the Wikimedia Cloud Services team (WMCS). It is completely free and open, and we welcome anyone to use it to build and host tools (bots, webservices, scheduled jobs, etc.) in support of Wikimedia projects. We provide a set of platform-specific services, command line interfaces, and shortcuts to help with tasks like setting up webservices and jobs, building container images, or using databases. Using these interfaces makes the underlying Kubernetes system pretty much invisible to users. We also allow direct access to the Kubernetes API, and some advanced users do interact with it directly.

Each account has a Kubernetes namespace where they can freely deploy their workloads. We have a number of controls in place to ensure performance, stability, and fairness of the system, including quotas, RBAC permissions, and, until recently, PodSecurityPolicies (PSP). At the time of this writing, we had around 3,500 Toolforge tool accounts in the system. We adopted PSP early, in 2019, as a way to make sure Pods had the correct runtime configuration. We needed Pods to stay within the safe boundaries of a set of pre-defined parameters. Back when we adopted PSP there was already the option to use third-party agents, like OpenPolicyAgent Gatekeeper, but we decided not to invest in them and went with the native, built-in mechanism instead.

In 2021 it was announced that the PSP mechanism would be deprecated and removed in Kubernetes 1.25. Even though we had been warned years in advance, we did not prioritize the migration away from PSP until we were on Kubernetes 1.24 and blocked, unable to upgrade further without taking action. The WMCS team explored different alternatives for this migration, but eventually we decided to go with Kyverno as a replacement for PSP. And so, with that decision, began the journey described in this blog post.

First, we needed a source code refactor of one of the key components of our Toolforge Kubernetes setup: maintain-kubeusers. This custom piece of software, built in-house, contains the logic to fetch accounts from LDAP and do the necessary instrumentation on Kubernetes to accommodate each one: create a namespace, RBAC, quota, a kubeconfig file, etc. With the refactor, we introduced a proper reconciliation loop, so that the software has a notion of what needs to be created for each account, what is missing, and what to delete or upgrade (a minimal sketch follows below). This allows us to easily deploy new resources for each account, or iterate on their definitions.

The initial version of the refactor had a number of problems, though. For one, the new version of maintain-kubeusers was doing more filesystem interaction than the previous version, resulting in a slow reconciliation loop over all the accounts. We used NFS as the underlying storage system for Toolforge, and it could be very slow for reasons beyond the scope of this blog post. This was corrected in the days following the initial refactor rollout. A side note with an implementation detail: we stored a configmap in each account namespace with the state of each resource.
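For flavour, here is a rough, self-contained Python sketch of that reconciliation-plus-state idea. Everything in it (the resource names, template versions, and the in-memory stand-in for the per-namespace state configmaps) is invented for illustration; the real maintain-kubeusers talks to LDAP and the Kubernetes API instead.

```python
# Hypothetical sketch of a reconciliation loop: desired resources per account
# are compared against recorded state, and anything missing, outdated, or no
# longer wanted is applied or removed.

# Desired resources, each mapped to a template "version"; bumping a version
# triggers an upgrade on the next reconciliation pass.
DESIRED = {"namespace": 1, "rbac": 2, "quota": 1, "kubeconfig": 1}

# Stand-in for the per-namespace state configmaps: account -> {resource: version}.
cluster_state: dict[str, dict[str, int]] = {}

def reconcile(account: str) -> None:
    state = cluster_state.setdefault(account, {})
    # Create or upgrade anything missing or out of date.
    for resource, version in DESIRED.items():
        if state.get(resource) != version:
            print(f"{account}: applying {resource} v{version}")
            state[resource] = version
    # Delete anything recorded in the state but no longer desired.
    for resource in set(state) - set(DESIRED):
        print(f"{account}: deleting {resource}")
        del state[resource]

if __name__ == "__main__":
    for account in ["tool-a", "tool-b"]:  # the real code fetches accounts from LDAP
        reconcile(account)
```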
Storing more state in this configmap was our solution to avoid additional NFS latency.

I initially estimated this refactor would take me a week to complete, but unfortunately it took around three weeks instead. Prior to the refactor, several manual steps and cleanups were required whenever the definition of a resource was updated. The process is now automated, more robust, performant, efficient, and clean. So, in my opinion, it was worth it, even if it took more time than expected.

Then we worked on the Kyverno policies themselves. Because we had a very particular PSP setup, in order to ease the transition we tried to replicate its semantics on a 1:1 basis as much as possible. This involved things like transparent mutation of Pod resources, followed by validation. Additionally, we had a different PSP definition for each account, so we decided to create a different Kyverno namespaced policy resource for each account namespace (remember, we had 3.5k accounts). We created a Kyverno policy template that we would then render and inject for each account; a sketch of what this per-account rendering might look like follows the stage list below.

For developing and testing all this, both maintain-kubeusers and the Kyverno bits, we had a project called lima-kilo: a local Kubernetes setup replicating production Toolforge, used by each engineer on their laptop as a common development environment.

We had planned the migration from PSP to Kyverno policies in stages, like this:
  1. update our internal template generators to make Pod security settings explicit
  2. introduce Kyverno policies in Audit mode
  3. see how the cluster would behave with them, and if we had any offending resources reported by the new policies, and correct them
  4. modify Kyverno policies and set them in Enforce mode
  5. drop PSP
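To make stages 2 and 4 concrete, here is a hedged sketch of rendering one namespaced Kyverno policy per account and applying it with kubectl, in the spirit of the per-account template mentioned above. The rule, account names, and template body are illustrative assumptions rather than Toolforge's actual policy; only the general shape of a namespaced Kyverno Policy and the Audit/Enforce validationFailureAction values come from Kyverno's documented API.

```python
import subprocess

# Illustrative policy template; the real Toolforge template differs.
# validationFailureAction toggles between stage 2 (Audit) and stage 4 (Enforce).
POLICY_TEMPLATE = """\
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: pod-security
  namespace: {namespace}
spec:
  validationFailureAction: {action}
  rules:
    - name: require-non-root
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Pods must not run as root."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
"""

def apply_policy(namespace: str, action: str = "Audit") -> None:
    """Render the template for one account namespace and apply it."""
    manifest = POLICY_TEMPLATE.format(namespace=namespace, action=action)
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=manifest.encode(), check=True)

# Hypothetical account list; the real rollout iterated over ~3.5k namespaces,
# which is exactly the scale that mattered in stage 2.
for ns in ["tool-a", "tool-b"]:
    apply_policy(ns, action="Audit")
```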
In stage 1, we updated things like the toolforge-jobs-framework and tools-webservice.

In stage 2, when we deployed the 3.5k Kyverno policy resources, our production cluster died almost immediately. Surprise. All the monitoring went red, the Kubernetes apiserver became unresponsive, and we were unable to perform any administrative actions in the Kubernetes control plane, or even on the underlying virtual machines. All Toolforge users were impacted. This was a full-scale outage that required the energy of the whole WMCS team to recover from. We temporarily disabled Kyverno until we could learn what had occurred.

This incident happened despite having tested beforehand in lima-kilo and in another pre-production cluster we had, called Toolsbeta. But we had not tested with that many policy resources. Clearly, this was something scale-related. After the incident, I went ahead and created 3.5k Kyverno policy resources on lima-kilo, and indeed I was able to reproduce the outage. We took a number of measures, corrected a few errors in our infrastructure, and reached out to the Kyverno upstream developers asking for advice, and in the end adjusted the setup to accommodate our needs.

I have to admit, I was briefly tempted to drop Kyverno, and even to stop pursuing an external policy agent entirely and write our own custom admission controller, out of concerns over the performance of this architecture. However, after applying all the measures listed above, the system became very stable, so we decided to move forward. The second attempt at deploying it all went through just fine. No outage this time.

When we were in stage 4 we detected another bug. We had been following the Kubernetes upstream documentation for setting securityContext to the right values. In particular, we were enforcing procMount to be set to the default value, which per the docs was DefaultProcMount. However, that string is the name of the internal variable in the source code, whereas the actual default value is the string Default. This caused pods to be rightfully rejected by Kyverno while we figured out the problem. I sent a patch upstream to fix this problem.

We finally had everything in place, reached stage 5, and were able to disable PSP. We unloaded the PSP controller from the Kubernetes apiserver and deleted every individual PSP definition. Everything was very smooth in this last step of the migration.

This whole PSP project, including the maintain-kubeusers refactor, the outage, and all the different migration stages, took roughly three months to complete. For me, there are a number of valuable lessons from this project. For one, scale is something to consider, and test, when evaluating a new architecture or software component. Not doing so can lead to service outages or unexpectedly poor performance. This is in the first chapter of the SRE handbook, but we got a reminder the hard way.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

1 July 2024

Russ Allbery: Review: Snuff

Review: Snuff, by Terry Pratchett
Series: Discworld #39
Publisher: Harper
Copyright: October 2011
Printing: January 2013
ISBN: 0-06-221886-7
Format: Mass market
Pages: 470
Snuff is the 39th Discworld novel and the 8th (and last) Watch novel. This is not a good place to start reading.

Sam Vimes has been talked, cajoled, and coerced into taking a vacation. Since he is now the Duke of Ankh, he has a country estate that he's never visited. Lady Sybil is insistent on remedying this, as is Vetinari. Both of them may have ulterior motives. They may also be colluding.

It does not take long for Vimes to realize that something is amiss in the countryside. It's not just that the servants are uncomfortable with him talking to them, that the senior servants are annoyed he talks to the wrong servants, and that the maids turn to face the wall at the sight of him. Those are just the strange customs of the aristocracy, for which he has little understanding and even less patience. There's something else going on. The nobility is wary, the town blacksmith is angry about something more than disliking the nobles, and the bartender doesn't want to get involved. Vimes smells something suspicious. When he's framed for a murder, the suspicions seem justified.

It takes some time before the reader learns what the local nobility are squirming about, so I won't spoil it. What I will say is that Snuff is Pratchett hammering away at one of his favorite targets: prejudice, cruelty, and treating people like things. Vimes, with his uncompromising morality, is one of the first to realize the depth of the problem. It takes most of the rest longer to come around, even Sybil. It's both painful, and painfully accurate, to contemplate how often recognition of other people's worth only comes once they do something that you recognize as valuable.

This is one of the better-plotted Discworld novels. Vimes starts out with nothing but suspicions and stubbornness, and manages to turn Snuff into a mystery novel through dogged persistence. The story is one continuous plot arc with the normal Pratchett color (Young Sam's obsession with types of poo, for example) but without extended digressions. It also has considerably better villains than most Pratchett novels: layers of foot soldiers and plotters, each of which have to be dealt with in a suitable way. Even the concluding action sequences worked for me, which is not always a given in Discworld.

The problem, unfortunately, is that the writing is getting a bit wobbly. Pratchett died of early-onset Alzheimer's in 2015, four years after this book was first published, and this is the first novel where I can see some early effects. It mostly shows up in the dialogue: it's just a bit flabby and a bit repetitive, and the characters, particularly towards the end of the book, start repeating the name of the person they're talking to every other line. Once I saw it, I couldn't unsee it, and it was annoying enough to rob a bit of enjoyment from the end of the book.

That aside, though, this was a solid Discworld novel. Vimes testing his moral certainty against the world and forcing it into a more ethical shape is always rewarding, and here he takes more risks, with better justification, than in most of the Watch novels. We also find out that Vimes has a legacy from the events of Thud!, which has interesting implications that I wish Pratchett had had more time to explore.
I think the best part of this book is how it models the process of social change through archetypes: the campaigner who knew the right choice early on, the person who formed their opinion the first time they saw injustice, the person who gets there through a more explicit moral code, the ones who have to be pushed by someone who was a bit faster, the ones who have to be convinced but then work to convince others, and of course the person who is willing to take on the unfair and far-too-heavy burden of being exceptional enough that they can be used as a tool to force other people to acknowledge them as a person.

And, since this is Discworld, Vetinari is lurking in the scenery pulling strings, balancing threats, navigating politics, and giving Vimes just enough leeway to try to change the world without abusing his power. I love that the amount of leeway Vimes gets depends on how egregious the offense is, and that Vetinari calibrates this quite carefully without ever saying so openly.

Recommended, and as much as I don't want to see this series end, this is not a bad entry for the Watch novels to end on.

Followed in publication order by Raising Steam.

Rating: 8 out of 10

19 June 2024

Sahil Dhiman: First Iteration of My Free Software Mirror

As I'm gearing up to set up a Free Software download mirror in India, it occurred to me that I haven't chronicled the work and motivation behind setting up the original mirror in the first place. It also seems like a good idea to document things here to observe the progression, as the mirror is going multi-country now. Right now, my existing mirror, i.e. mirrors.de.sahilister.net (previously mirrors.sahilister.in), is hosted in Germany and serves traffic for Termux, NomadBSD, Blender, BlendOS, and GIMP. For a while in between, it hosted the OSMC project mirror as well. To explain first what a Free Software download mirror is, I'll quote myself from my work blog:
As most Free Software doesn't have commercial backing and requires heavy downloads, the concept of software download mirrors helps take the traffic load off the primary server, leading to geographical redundancy, higher availability, and faster downloads in general.
So whenever someone wants to download a particular (mirrored) piece of software and clicks download, the upstream redirects the download to one of the mirror servers that is geographically (or by other parameters) close to the user, leading to faster downloads and load sharing amongst all mirrors.

Since the time I got into Linux and servers, I have always wanted to help the community somehow, and mirroring seemed the most obvious thing. India is a country that has traditionally seen a smaller number of public download mirrors. IITB, TIFR, and some other public institutions used to host them for popular Linux distributions and Free Software, but they seem to be diminishing these days.

In the last months of 2021, I started using Termux and saw that it had only a few mirrors (back then). I tried getting a high-capacity, high-bandwidth node within budget, but that was hard in India in 2021-22. So after much deliberation, I decided to go where capacity was available and chose a German hosting provider, with the thought of adding an India node when conditions became favourable (thankfully that happened, and the India node is live now too). Termux required only 29 GB of storage, so I went ahead and started mirroring it. I raised this issue in Termux's GitHub repository in January 2022. This blog post chronicles the start of the mirror.

Termux has high request counts from a mirror's point of view. Each Termux client usually checks every mirror in the selected group for availability before randomly selecting one for download (the only other case is when the client has explicitly selected a single mirror using termux-repo-change). The mirror started getting thousands of requests daily because of this, but only a small percentage would actually select my mirror for the download itself, so download traffic was lower. A similar thing happened with OSMC too (which I started mirroring later).

With this start, I began exploring various projects that would benefit from additional mirrors. Public information from the Academic Computer Club in Umeå's mirror and Freedif's mirror stats helped me figure out storage and bandwidth requirements for potential projects. Fun fact: the Academic Computer Club in Umeå (which runs one of the prominent Debian, Ubuntu, etc. mirrors) now has a 200 Gbit/s uplink to the internet through SUNET.

Later, I migrated to a different provider for better speeds and added a LibreSpeed test on the mirror server. Those were fun times. Between OSMC, Termux, and LibreSpeed, I was getting almost 1.2 million hits/day on the server at its peak, crossing a TB/day of traffic for the first time.

Next came Blender, which took the longest time to set up, around 9-10 months. Blender had a push-trigger requirement for rsync from upstream that took quite some back and forth. It now contributes the largest share of traffic on the mirror. On release days, the mirror does more than 3 TB/day; on normal days, it hovers around 2 TB/day. The GIMP project is the latest addition.

At one point, mirror traffic touched 4.97 TB/day. That's when I decided to drop the LibreSpeed server to focus solely on mirroring for now, keeping the bandwidth allotment for serving downloads only.

The mirror's project selection grew organically. I used to reach out to many projects to discuss the need for additional mirrors. Some projects outright declined the mirroring offer, as Germany already has good academic mirrors boasting 20-25 Gbit/s speeds from the FTP era, which seems fair. Finding the niche was essential, so as to only add software that would truly benefit from the additional capacity.
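Day to day, running a mirror like this boils down to rsync on a timer plus a web server in front serving the tree. A minimal, hypothetical Python sketch of the sync side follows; the upstream rsync module, destination path, and lock file are invented for illustration, and each project publishes its own recommended rsync options.

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical values; real mirrors use the endpoint and options
# that each upstream project documents.
UPSTREAM = "rsync://rsync.example.org/project/"
DEST = Path("/srv/mirror/project/")
LOCK = Path("/run/mirror-project.lock")

def sync() -> int:
    """Run one rsync pass; the web server keeps serving the tree meanwhile."""
    if LOCK.exists():
        print("previous sync still running, skipping", file=sys.stderr)
        return 0
    LOCK.touch()
    try:
        # -a: preserve metadata, -H: preserve hardlinks,
        # --delete: drop files removed upstream
        result = subprocess.run(
            ["rsync", "-aH", "--delete", "--timeout=600", UPSTREAM, str(DEST)]
        )
        return result.returncode
    finally:
        LOCK.unlink()

if __name__ == "__main__":
    sys.exit(sync())
```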
There were months when nothing much would happen with the mirror: rsync would keep updating it while nginx kept serving the traffic. Nowadays, the mirror pushes around 70 TB/month. I occasionally check the logs and vnstat, add new security bits here and there, and pay the bills. It now sometimes saturates the Gigabit link and goes beyond it, peaking at around 1.42 Gbit/s (the hosting provider seems to be upping their game). The plan is to upgrade the link to better speeds.
[Image: Yearly traffic stats (via vnstat -y)]
Along the way, I learned quite a few things.

[Image: GeoIP map of clients from yesterday's access logs, generated from IPinfo.io]
In hindsight, the statistics look amazing: hundreds of TBs of traffic served from the mirror, month after month. That shows there's still an appetite for public mirrors in the age of commercially donated CDNs and GitHub. The world could have done with one less mirror, but this one saved some time and lessened the burden for others, while providing redundancy and traffic localization with one additional mirror. And it's fun for someone like me who's into the infrastructure that powers the Internet. Now I'll focus on expanding the India mirror, which has itself started pushing almost half a TB/day. Long live Free Software and public download mirrors.
