4 June 2024

Jonathan Dowland: Quake (soundtrack)

I haven't done that much crate digging recently, but I did stick this on last week: Trent Reznor's soundtrack for Quake, originally released (within the game) in 1996, and finally issued for the first time independently in 2020.
Quake LP cover and inner covers
I picked it up at the Nine Inch Nails gig in Cornwall, 2022. An interesting factoid about the original release is that the CD was mastered with the little-known pre-emphasis flag set to "on". This was sufficiently unusual at the time (1996) that it was never clear whether it was deliberate or not. CD ripping back then usually used an analog audio path from the CD-ROM drive to the PC sound card, and the CD-ROM drive would apply the necessary de-emphasis. Therefore, ripping software didn't need to deal with it, and so most of it (then and now) doesn't, even though the path has long since changed to purely digital extraction. Thus, the various copies of the soundtrack circulating may or may not have had pre-emphasis correction applied, and if they did, it may or may not have been required to hear the soundtrack as intended. I spent a bit of time a few years ago, before the reissue, trying to determine what was "correct". There is certainly an audible difference with pre-emphasis correction applied (or not), but it wasn't clear which was the intended experience. The reissue should have cleared this up once and for all, but I haven't gone back to check what the outcome was.
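If a rip turns out to need the correction, it can also be applied after the fact. A minimal sketch, assuming SoX (whose deemph effect implements the standard CD de-emphasis curve); the file names are placeholders:

# Apply CD de-emphasis to a rip that was extracted digitally
# without correction (file names are placeholders).
sox track01.wav track01-deemph.wav deemph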

1 June 2024

Guido Günther: Free Software Activities May 2024

A short status update of what happened on my side last month. A broken gcovr in Debian triggered a bit of busy work, but 0.39.0 came out nicely nevertheless. We also reduced build time quite a bit in phosh and phoc. If you want to support my work, see donations.

16 May 2024

John Goerzen: Review of Reputable, Functional, and Secure Email Service

I last reviewed email services in 2019. That review focused a lot of attention on privacy. At the time, I selected mailbox.org as my provider, and have been using them for the five years since. However, both their service and their support have gone significantly downhill, so it is time for me to look at other options. Here I am focusing strongly on email. Some of the providers mentioned here provide other services (IM, video calls, groupware, etc.), and to the extent they do, I am ignoring them.

What Matters in 2024
I want to start off by acknowledging that what you need in email probably depends on your circumstances and the country in which you live. For me, I begin by naming that the largest threat most of us face isn't from state actors but from criminals: hackers, ransomware gangs, etc. It is important to take as many steps as possible to secure one's account against that. Privacy and security are both part of the mix. I still value privacy, but I am acknowledging, as Migadu does, that "Email as we know it and encryption are incompatible." Although some of these services strongly protect parts of the conversation, the reality is that most people will be emailing people using plain old email services which don't. For stronger security, something like Signal would be needed. (I wrote about Signal in 2021 also.) Interestingly, OpenPGP support seems to be something of a standard feature in the providers I reviewed by this point. All or almost all of them provide integration with browser-based encryption as well as server-side encryption if you prefer that. Although mailbox.org can automatically PGP-encrypt every message that arrives in plaintext, for general use this is unwieldy; there isn't good tooling for searching mailboxes where every message is encrypted, etc. So I never enabled that feature at Mailbox. I still value security and privacy, but a pragmatic approach addresses the most pressing threats first.

My criteria
The basic requirements for an email service include:
  1. Ability to use my own domains
  2. Strong privacy policy
  3. Ability for me to use my own IMAP and SMTP clients on both desktop and mobile
  4. It must be extremely reliable
  5. It must not be free
  6. It must have excellent support for those rare occasions when it is needed
  7. Support for basic aliases
Why do I say it must not be free? Because if someone is providing a service with the quality I'm talking about here, and not charging for it, it implies something is fishy: either they are unscrupulous, are financially unstable, or the product is something else, like ads. I am not aware of any provider that matches the other criteria with a free account anyhow. These providers range from about $30 to $90 per year, so cheaper than a Netflix subscription. Immediately, this rules out several options:
  • Proton doesn't let me use my own clients on mobile (their bridge is desktop-only)
  • Tuta also doesn't let me use my own clients
  • Posteo doesn't let me use my own domain
  • mxroute.com lacks a strong privacy policy, and its policy has numerous causes for concern (for instance, "If you repeatedly send email to invalid/unroutable recipients, they may be published on our GitHub")
I will have a bit more to say about a couple of these providers below. There are some additional criteria that are strongly desired but not absolutely required:
  1. Ability to set individual access passwords for every device/app
  2. Support for two-factor authentication (2FA/TFA/TOTP) for web-based access
  3. Support for basics in filtering: the ability to filter on envelope recipient (so if I get BCC'd, I can still filter), and the ability to execute more than one action on filter match (e.g., deliver to two folders, or deliver to a folder and forward to someone else)
IMAP and SMTP don't really support 2FA, so by setting individual passwords for every device, you can at least limit the blast radius and cut off a specific device if something is (or might be) compromised.
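To make the filtering criterion in item 3 concrete, here is a rough SIEVE sketch. It assumes a provider that exposes the envelope recipient via an X-Envelope-To header (as some reviewed below do) and supports the standard fileinto and copy extensions; the addresses and folder name are placeholders:

require ["fileinto", "copy"];

# Match on the envelope recipient, so BCC'd mail is caught too.
if header :contains "X-Envelope-To" "me@example.com" {
    fileinto :copy "Archive";             # action 1: file a copy into a folder
    redirect :copy "backup@example.com";  # action 2: forward, keeping the original
}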

The candidates
I considered these providers: Startmail, Mailfence, Runbox, Fastmail, Kolab, Mailbox.org, and Migadu. I'll review each, and highlight the pricing of the plan I would most likely use. Each provider offers multiple plans; some may be more expensive and some may be cheaper than the one I reviewed. I included a link to each provider's full pricing information so you can compare for your needs. I set up trials with each of these (except Mailbox.org, with which I already had a paid account). It so happened that I had actual questions for support for each one, which gave me an opportunity to see how support responded. I did not fabricate questions, and would not have contacted support if I didn't have real ones. (This means that I asked different questions of each provider, because they were the REAL questions I had.) I'll jump to the spoiler right now: I eventually chose Migadu, with Fastmail and Mailfence as close seconds. I looked for providers myself, and also solicited recommendations in a Mastodon thread.

Mailbox.org
I begin with Mailbox, as it was my top choice in 2019 and the incumbent. Until this year, I had been quite happy with it. I had cause to reach their support less than once a year on average, and each time they replied the same day or next day. Now, however, they are failing on reliability and on support. Their spam filter has become overly aggressive. It has blocked quite a bit of legitimate mail. When contacting their support about a prior issue earlier this year, they initially took 4 days to reply, and then 6 days to reply after that. Ouch. They had me disable some spam settings. It didn't really help. I continue to lose mail. I don't know how much, because they block a lot of it before it even hits the spam folder. One of my friends texted to say mail was dropping. I raised a new ticket with Mailbox, which took them 5 days to reply to. Their reply was unhelpful: "As the Internet is not a static system, unforeseen events can always occur." Well yes, that's true, and I get it; false positives exist with email. But this was from an ISP's mail system with an address that had been established for years, and it was part of a larger pattern of rejecting quite a bit of legit mail. And every interaction with them recently hasn't resulted in them actually doing anything to resolve anything. It's just a paragraph or two of reply that does nothing and helps nothing. When I complained that it took 5 days to reply, they said, "We have not been able to reply sooner as we are currently experiencing a high volume of customer enquiries." Even though their SLA for my account is a not-great 48-business-hour turnaround, they still missed it, and their reason is "we're busy". I finally asked what RBL had caught the blocked email, since when I checked, the sender wasn't on any RBL. Mailbox's reply: they only keep their logs for 7 days, so next time I should contact them within 7 days. Which, of course, I DID; it was them that kept delaying. Ugh! It's like they've become a cable company. Even worse is how they have been blocking mail from GrapheneOS's discussion forum. See their thread about it. In short, Graphene's mail server has a clean reputation and Mailbox has no problem with it. But because one of Graphene's IPv6 webservers has an IPv6 allocation of a size Mailbox doesn't like, they drop mail. It's ridiculous, and Mailbox was dismissive of this well-known and well-regarded Open Source project. So if the likes of GrapheneOS can't get a good-faith effort to deliver their mail, what chance does an individual like me have? I'm sorry, but I'm literally paying you to deliver email for me and provide good support. If you can't do either of those, you don't get to push that problem down onto me. Hire appropriate staff. On the technical side, they support aliases, my own clients, and have a reasonable privacy policy. Their 2FA support exists for the web interface (though weirdly not the support site), though it is somewhat unusual. They do not support app passwords. A somewhat unique feature is the @secure.mailbox.org domain. If you try to receive mail at that address, mailbox.org will block it unless it uses TLS. Same for sending. This isn't E2EE, but it does at least require things not be in plaintext for the last hop to Mailbox. Verdict: not recommended due to poor reliability and support. Mailbox.Org summary:
  • Website: https://mailbox.org/en/
  • Reliability: iffy due to over-aggressive spam filtering
  • Support: Poor; takes 4-6 days for a reply and replies are unhelpful
  • Individual access passwords: No
  • 2FA: Yes, but with a PIN instead of a password as the other factor
  • Filtering: Full SIEVE feature set and GUI editor
  • Spam settings: greylisting on/off, reject some/all spam, etc. But they're insufficient to address Mailbox's overzealousness, which support says I cannot work around within the interface.
  • Server storage location: Germany
  • Plan as reviewed: standard [pricing link]
    • Cost per year: EUR 30 (about $33)
    • Mail storage included: 10GB
    • Limits on send/receive volume: none
    • Aliases: 50 on your domain name, 25 on mailbox.org
    • Additional mailboxes: Available; each one at the same fee as the primary mailbox

Startmail
I really wanted to like Startmail. Its vault is an interesting idea and should contribute to the security and privacy of an account. They clearly care about privacy. It falls down in filtering. They have no way to filter on envelope recipient (BCC or similar). Their support confirmed this to me, and that's a showstopper. Startmail support was also as slow as Mailbox, taking 5 days to respond to me. Two showstoppers right there. Verdict: Not recommended due to slow support responsiveness and weak filtering. Startmail summary:
  • Website: https://www.startmail.com/
  • Reliability: Seems to be fine
  • Support: Mediocre; took 5 days for a reply, but the reply was helpful
  • Individual app access passwords: Yes
  • 2FA: Yes
  • Filtering: Poor; cannot filter on envelope recipient, and can't build filters with multiple actions
  • Spam settings: None
  • Server storage location: The Netherlands
  • Plan as reviewed: Custom domain (trial was Personal), [pricing link]
    • Cost per year: $70
    • Mail storage included: 20GB
    • Limits on send/receive volume: none
    • Aliases: unlimited, with lots of features: can set expiration, etc.
    • Additional mailboxes: not available

Kolab
Kolab Now is mainly positioned as a full groupware service, but they do have an email-only option, which I investigated. There isn't much documentation about it compared to other providers, and also not much in the way of settings. You can turn greylisting on or off. And... that's it. It has a full suite of filtering options. They set an X-Envelope-To header, which you can use with the arbitrary header match to do the right thing even for BCC situations. Filters can have multiple conditions and multiple actions. It is SIEVE-based and you can download your SIEVE definitions. If you enable 2FA, they disable IMAP and SMTP; not great. Verdict: Not an impressive enough email feature set to justify going with it. Kolab Now summary:
  • Website: https://kolabnow.com/
  • Reliability: Seems to be fine
  • Support: Fine responsiveness (next day)
  • Individual app passwords: No
  • 2FA: Yes, but if you enable it, they disable IMAP and SMTP
  • Filtering: Excellent
  • Spam settings: Only greylisting on/off
  • Server storage location: Switzerland; they have lots of details on their setup
  • Plan as reviewed: Just email [pricing link]
    • Cost per year: CHF 60, about $66
    • Mail storage included: 5GB
    • Limitations on send/receive volume: None
    • Aliases: Yes. Not sure if there are limits.
  • Additional mailboxes: Yes, if you set up a group account. Flexible pricing based on user count, but it is not documented anywhere I could find.

Mailfence
Mailfence is another option, somewhat similar to Startmail but without the unique vault. I had some questions about filters, and support was quite responsive, responding in a couple of hours. Some of the copy on their website is a bit misleading, but support clarified when I asked. They do not offer encryption at rest (like most of the entries here). Mailfence's filtering system is the kind I'd like to see. It allows multiple conditions and multiple actions for each rule, and has some unique actions as well (notify by SMS or XMPP). Support says that "Recipients" matches envelope recipients. However, one omission is that I can't match on arbitrary headers; only the canned list of headers they provide. They have only two spam settings:
  • spam filter on/off
  • whitelist
Given some recent complaints about their spam filter being overly aggressive, I find this lack of control somewhat concerning. (However, I discount complaints from people begging for more features in free accounts; free won't provide the kind of service I'm looking for with any provider.) There are generally just very few settings for email as well. Verdict: Responsive and helpful support; filtering has the right structure but lacks arbitrary header match. Could be a good option. Mailfence summary:
  • Website: https://mailfence.com/
  • Reliability: Seems to be fine
  • Support: Excellent responsiveness and helpful replies (after some initial confusion about my question of greylisting)
  • Individual app access passwords: No. You can set a per-service password (e.g., an IMAP password), but those will be shared with all devices speaking that protocol.
  • 2FA: Yes
  • Filtering: Good; only misses the ability to filter on arbitrary headers
  • Spam settings: Very few
  • Server storage location: Belgium
  • Plan as reviewed: Entry [pricing link]
    • Cost per year: $42
    • Mail storage included: 10GB, with a maximum of 50,000 messages
    • Limits on send/receive volume: none
  • Aliases: 50. Aliases can't be deleted once created (there may be an exception to this for aliases on your own domain rather than mailfence.com)
    • Additional mailboxes: Their page on this is a bit confusing, and the pricing page lacks the information promised. It looks like you can pay the same $42/year for additional mailboxes, with a limit of up to 2 additional paid mailboxes and 2 additional free mailboxes tied to the account.

Runbox
This one came recommended in a Mastodon thread. I had some questions about it, and support response was fantastic: I heard from two people who were co-founders of the company! Even within hours, on a weekend. Incredible! This kind of response was only surpassed by Migadu. I initially wrote to Runbox with questions about the incoming and outgoing message limits, which I hadn't seen elsewhere, as well as the bandwidth limit. They said the bandwidth limit is no longer enforced on paid accounts. The incoming and outgoing limits are enforced, and all email (even spam) counts towards the limit. Notably, the outgoing limit is per recipient, so if you send 10 messages to your 50-recipient family group, that's the limit. However, they also indicated a willingness to reset the limit if something happens. Unfortunately, hitting the limit results in a hard bounce (SMTP 5xx) rather than a temporary failure (SMTP 4xx), so it can result in lost mail. This means I'd be worried about some attack or other weirdness causing me to lose mail. Their filter is a pain point. Here are the challenges:
  • You can't directly match on a BCC recipient. Support advised using a "headers" match, which will search for something anywhere in the headers. This works and is probably good enough, since this data is in the Received: headers, but it is a little more imprecise.
  • They only have a "contains", not an "equals", operator. So, for instance, a pattern searching for "test@example.com" would also match "newtest@example.com". Support advised putting the email address in angle brackets to avoid this. That will work, mostly; angle brackets aren't always required in headers.
  • There is no way to have multiple actions on a filter (there is just no way to file an incoming message into two folders). This was the ultimate showstopper for me.
Support advised they are planning to upgrade the filter system in the future, but these are the limitations today. Verdict: A good option if you don't need much from the filtering system. Lots of privacy emphasis. Runbox summary:
  • Website: https://runbox.com/
  • Reliability: Seems to be fine, except returning 5xx codes if per-day limits are exceeded
  • Support: Excellent responsiveness and replies from founders
  • Individual app passwords: Yes
  • 2FA: Yes
  • Filtering: Poor
  • Spam settings: Very few
  • Server storage location: Norway
  • Plan as reviewed: Mini [pricing link]
    • Cost per year: $35
    • Mail storage included: 10GB
    • Limits on send/receive volume: Receive 5000 messages/day, Send 500 recipients/day
    • Aliases: 100 on runbox.com; unlimited on your own domain
    • Additional mailboxes: $15/yr each, also with 10GB non-shared storage per mailbox

Fastmail
Fastmail came recommended to me by a friend I've known for decades. Here's the thing about Fastmail, compared to all the services listed above: It all just works. Everything. Filtering, spam prevention, it is all there, all feature-complete, and all just does the right thing as you'd hope. Their filtering system has a canned dropdown for "To/Cc/Bcc", it supports multiple conditions and multiple actions, and just does the right thing. (Delivering to multiple folders is a little cumbersome but possible.) It has a particularly strong feature set around administering multiple accounts, including things like whether users can prevent admins from reading their mail. The not-so-great part of the picture is around privacy. Fastmail is based in Australia, where the government has extensive power around spying on data, even to the point of forcing companies to add wiretap capabilities. Fastmail's privacy policy states user data may be held in Australia, USA, India, and Netherlands. By default, they share data with unidentified "spam companies", though you can disable this in settings. On the other hand, they do make a good effort towards privacy. I contacted support with some questions and got back a helpful response in three hours. However, one of the questions was about in which countries my particular data would be stored, and the support response said they would have to get back to me on that. It's been several days with no word back. Verdict: A featureful option that "just works", with a lot of features for managing family accounts and the like, but lacking in the privacy area. Fastmail summary:
  • Website: https://www.fastmail.com/
  • Reliability: Seems to be fine
  • Support: Good response time on most questions; dropped the ball on one that required research
  • Individual app access passwords: Yes
  • 2FA: Yes
  • Filtering: Excellent
  • Spam settings: Can set filter aggressiveness, decide whether to share spam data with "spam-fighting companies", configure how to handle backscatter spam, and evaluate the personal learning filter.
  • Server storage locations: Australia, USA, India, and The Netherlands. Legal jurisdiction is Australia.
  • Plan as reviewed: Individual [pricing link]
    • Cost per year: $60
    • Mail storage included: 50GB
    • Limits on send/receive volume: 300/hour
    • Aliases: Unlimited from what I can see
    • Additional mailboxes: No; requires a different plan for that

Migadu
Migadu was a service I'd never heard of, but it came recommended to me on Mastodon. I listed Migadu last because it is in a class of its own compared to all the other options. Every other service is basically a webmail interface with a few extra settings tacked on. Migadu has a full-featured email admin console in addition. By that I mean you can:
  • View usage graphs (incoming, outgoing, storage) over time
  • Manage DNS (if you want Migadu to run your nameservers)
  • Manage multiple domains, and cross-domain relationships with mailboxes
  • View a limited set of logs
  • Configure accounts, reset their passwords if needed/authorized, etc.
  • Configure email address rewrite rules with wildcards and so forth
Basically, if you were the sort of person that ran your own mail servers back in the day, here is Migadu giving you most of that functionality. Effectively you have a web interface to do all the useful stuff, and they handle the boring and annoying bits. This is a really attractive model. Migadu support has been fantastic. They are quick to respond, and went above and beyond. I pointed out that their X-Envelope-To header, which is needed for filtering by BCC, wasn't being added on emails I sent myself. They replied 5 hours later indicating they had added the feature to add X-Envelope-To even for internal mails! Wow! I am impressed. With Migadu, you buy a pool of resources: storage space and incoming/outgoing traffic. What you do within that pool is up to you. You can set up users ("mailboxes"), aliases, domains, whatever you like. It all just shares the pool. You can restrict users further so that an individual user has access to only a subset of the pool resources. I was initially concerned about Migadu's daily send/receive message count limits, but in visiting with support and reading the documentation, what really comes out is that Migadu is a service with a personal touch. Hitting the incoming traffic limit will cause an SMTP temporary fail (4xx) response, so you won't lose legit mail, and support will work with you if it's a problem for legit uses. In other words, restrictions are soft and they are interpreted reasonably. One interesting thing about Migadu is that they do not offer accounts under their domain. That is, you MUST bring your own domain. That's pretty easy and cheap, of course. It also puts you in a position of power, because it is easy to migrate email from one provider to another if you own the domain. Filtering is done via SIEVE. There is a GUI editor which lets you accomplish most things, though it has an odd blind spot where you can't file a message into multiple folders. However, you can edit a SIEVE ruleset directly and you get the full SIEVE feature set, which is extensive (and does support filing a message into multiple folders). I note that the SIEVE :envelope match doesn't work, but Migadu adds an X-Envelope-To header which is just as good. I particularly love a company that tells you all the reasons you might not want to use them. Migadu's pro/con list is an honest drawbacks list (of course, their homepage highlights all the features!). Verdict: Fantastically powerful, excellent support, and good privacy. I chose this one. Migadu summary:
  • Website: https://migadu.com/
  • Reliability: Excellent
  • Support: Fantastic. Good response times, and they added a feature (or fixed a bug?) a few hours after I requested it.
  • Individual access passwords: Yes. Create identities to support them.
  • 2FA: Yes, on both the admin interface and the webmail interface
  • Filtering: Excellent, based on SIEVE. The GUI editor doesn't support multiple actions when filing into a folder, but full SIEVE functionality is exposed.
  • Spam settings:
    • On the domain level: filter aggressiveness, greylisting on/off, blacklists and whitelists
    • On the mailbox level: filter aggressiveness, blacklists and whitelists, action to take with spam; compatible with filters.
  • Server storage location: France; legal jurisdiction Switzerland
  • Plan as reviewed: mini [pricing link]
    • Cost per year: $90
    • Mail storage included: 30GB ("soft" quota)
    • Limits on send/receive volume: 1000 messages in/day, 100 messages out/day ("soft" quotas)
    • Aliases: Unlimited on an unlimited number of domains
    • Additional mailboxes: Unlimited and free; uses pooled quotas, but individual quotas can be set

Others
Here are a few others that I didn't think worthy of getting a trial:
  • mxroute was recommended by several. Lots of concerning things in their policy, such as:
    • if you repeatedly send mail to unroutable recipients, they may publish the addresses on GitHub
    • they will terminate your account if they think you are rude or want to contest a charge
    • they reserve the right to cancel your service at any time for any (or no) reason.
  • Proton keeps coming up, and I will not consider it so long as I am locked into their client on mobile.
  • Skiff comes up sometimes, but they were acquired by Notion.
  • Disroot comes up; this discussion highlights a number of reasons why I avoid them. Their Terms of Service (ToS) are inconsistent with a general-purpose email account (I guess for targeting nonprofits and activists, that could make sense). Particularly laughable is that they claim to be friends of Open Source, but then would take down your account if you upload copyrighted material. News flash: in order for an Open Source license to be meaningful, the underlying work is copyrighted. It is perfectly legal to upload copyrighted material when you wrote it or have the license to do so!

Conclusions
There are a lot of good options for email hosting today, and in particular I appreciate the excellent personal support from companies like Migadu and Runbox. Support small businesses!

11 May 2024

Sven Hoexter: xdg and mime types - stuff I would've loved to know a week ago

Learned a few things about xdg and mimetype registration in the last week that could be helpful to have condensed in a single place.

No Need to Ship a Mailcap Mime File
If you already ship a .desktop file (that is what ends up in /usr/share/applications/) which has a MimeType declared, there is no need to also ship a mailcap file (that is what ends up in /usr/lib/mime/packages/). Some triggers will do the conversion work for you. See also Debian Policy 4.9.

Reverse DNS Naming Convention for .desktop Files
Seems to be a closely guarded secret, maybe mainly known inside the Gnome world, but it's in the spec. Also not very widely known inside Debian, if I look at my local system as a not very representative sample.

Your hicolor Theme App Icon can be a Mime Type Icon as Well
In case you didn't know, the hicolor icon theme is the default fallback theme. Many of us already install application icons e.g. in /usr/share/icons/hicolor/48x48/apps/, which is used in conjunction with the Icon field in the .desktop file to locate the application icon. Now the next step, and there it seems quite a few of us miss out, is to create a symlink to also provide a mime type icon, so it's displayed in graphical file managers for the application data files. The schema here is simple: take the MimeType, e.g. application/x-vym, replace the / with a - and use that as the file name in e.g. /usr/share/icons/hicolor/48x48/mimetypes/. In the vym case that is /usr/share/icons/hicolor/48x48/mimetypes/application-x-vym.png. If you have one, use a scalable .svg file instead of .png. This seems to be an area where Debian lacks a bit of tooling to automatically convert application icons to all the different sizes and install them in all the appropriate places. What is already there is a trigger to run gtk-update-icon-cache when you install new icons into one of the icon theme folders, so they're picked up.

No Priority or Order in .desktop Files
Likely something that happens on all my fresh installations: Libreoffice is installed and xdg-open starts to open pdf files with Libreoffice instead of evince. Now I have to figure out again to run xdg-mime default org.gnome.Evince.desktop application/pdf to change that (at least for my user). Background here is that the desktop file spec explicitly mandates "Priority for applications is handled external to the .desktop files.". That's why we got, in addition to all of that, mimeapps.list files. And now, after running the xdg-mime command from above, we've a ~/.config/mimeapps.list defining
[Default Applications]
application/pdf=org.gnome.Evince.desktop
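You can verify the result afterwards; a small aside, using the query subcommand that xdg-utils provides:

$ xdg-mime query default application/pdf
org.gnome.Evince.desktop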
Debian as a whole seems not very keen on shipping something like a sensible default mimeapps.list outside of desktop-environment-specific ones. A quick search gave me just
$ apt-file search mimeapps.list
cinnamon-desktop-data: /usr/share/applications/x-cinnamon-mimeapps.list
gdm3: /usr/share/gdm/greeter/applications/mimeapps.list
gnome-session-common: /usr/share/applications/gnome-mimeapps.list
plasma-workspace: /usr/share/applications/kde-mimeapps.list
sxmo-utils: /usr/share/applications/mimeapps.list
sxmo-utils: /usr/share/sxmo/xdg/mimeapps.list
While it's a bit annoying to run into that pdf vs Libreoffice thing every now and then, it's maybe better not to have long controversial threads about the default pdf viewer, like the ones we already had about the default MTA choices. ;) And while we're at it: everyone using Libreoffice should give a virtual hug to rene@ for taming that beast since 2010, and OpenOffice.org before.
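As a footnote to the mime type icon schema above, a minimal sketch of the symlink in question; it assumes vym's application icon is installed as vym.png (the icon file name here is an assumption):

# application/x-vym -> application-x-vym.png in the mimetypes folder
ln -s /usr/share/icons/hicolor/48x48/apps/vym.png \
    /usr/share/icons/hicolor/48x48/mimetypes/application-x-vym.png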

10 May 2024

Reproducible Builds: Reproducible Builds in April 2024

Welcome to the April 2024 report from the Reproducible Builds project! In our reports, we attempt to outline what we have been up to over the past month, as well as mentioning some of the important things happening more generally in software supply-chain security. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. New backseat-signed tool to validate distributions' source inputs
  2. NixOS is not reproducible
  3. Certificate vulnerabilities in F-Droid's fdroidserver
  4. Website updates
  5. Reproducible Builds and Insights from an Independent Verifier for Arch Linux
  6. libntlm now releasing minimal source-only tarballs
  7. Distribution work
  8. Mailing list news
  9. diffoscope
  10. Upstream patches
  11. reprotest
  12. Reproducibility testing framework

New backseat-signed tool to validate distributions' source inputs
kpcyrd announced a new tool called backseat-signed, after:
I figured out a somewhat straight-forward way to check if a given git archive output is cryptographically claimed to be the source input of a given binary package in either Arch Linux or Debian (or both).
Elaborating more in their announcement post, kpcyrd writes:
I believe this to be the "reproducible source tarball" thing some people have been asking about. As explained in the README, I believe reproducing autotools-generated tarballs isn't worth everybody's time and instead a distribution that claims to build "from source" should operate on VCS snapshots instead of tarballs with 25k lines of pre-generated shell-script.
Indeed, many distributions' packages already build from VCS snapshots, and this trend is likely to accelerate in response to the xz incident. The announcement led to a lengthy discussion on our mailing list, as well as a shorter followup thread from kpcyrd about bootstrapping Autotools projects.
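For readers unfamiliar with the term, a minimal sketch of producing such a VCS snapshot with git archive; the project name and tag are placeholders:

# Create a tarball directly from the tagged commit, with no
# pre-generated autotools output included.
git archive --format=tar.gz --prefix=project-1.8/ -o project-1.8.tar.gz v1.8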

NixOS is not reproducible
Morten Linderud posted a post on his blog this month, provocatively titled "NixOS is not reproducible". Although quickly admitting that his title is indeed "clickbait", Morten goes on to clarify the precise guarantees and promises that NixOS provides its users. Later in the post, Morten mentions that he was motivated to write it because:
I have heavily invested my free-time on this topic since 2017, and met some of the accomplishments we have had with "Doesn't NixOS solve this?" for just as long, and I thought it would be of people's interest to clarify[.]

Certificate vulnerabilities in F-Droid's fdroidserver
In early April, Fay Stegerman announced a certificate pinning bypass vulnerability and Proof of Concept (PoC) in the F-Droid fdroidserver tools (for managing builds, indexes, updates, and deployments for F-Droid repositories) to the oss-security mailing list.
We observed that embedding a v1 (JAR) signature file in an APK with minSdk >= 24 will be ignored by Android/apksigner, which only checks v2/v3 in that case. However, since fdroidserver checks v1 first, regardless of minSdk, and does not verify the signature, it will accept a fake certificate and see an incorrect certificate fingerprint. [ ] We also realised that the above mentioned discrepancy between apksigner and androguard (which fdroidserver uses to extract the v2/v3 certificates) can be abused here as well. [ ]
Later on in the month, Fay followed up with a second post detailing a third vulnerability and a script that could be used to scan for potentially affected .apk files and mentioned that, whilst upstream had acknowledged the vulnerability, they had not yet applied any ameliorating fixes.

Website updates
There were a number of improvements made to our website this month, including Chris Lamb updating the archive page to recommend -X and unzipping with TZ=UTC [ ] and adding Maven, Gradle, JDK and Groovy examples to the SOURCE_DATE_EPOCH page [ ]. In addition, Jan Zerebecki added a new /contribute/opensuse/ page [ ] and Sertonix fixed the automatic RSS feed detection [ ][ ].

Reproducible Builds and Insights from an Independent Verifier for Arch Linux
Joshua Drexel, Esther Hänggi and Iyán Méndez Veiga of the School of Computer Science and Information Technology, Hochschule Luzern (HSLU) in Switzerland published a paper this month entitled Reproducible Builds and Insights from an Independent Verifier for Arch Linux. The paper establishes the context as follows:
Supply chain attacks have emerged as a prominent cybersecurity threat in recent years. Reproducible and bootstrappable builds have the potential to reduce such attacks significantly. In combination with independent, exhaustive and periodic source code audits, these measures can effectively eradicate compromises in the building process. In this paper we introduce both concepts, we analyze the achievements over the last ten years and explain the remaining challenges.
What is more, the paper aims to:
contribute to the reproducible builds effort by setting up a rebuilder and verifier instance to test the reproducibility of Arch Linux packages. Using the results from this instance, we uncover an unnoticed and security-relevant packaging issue affecting 16 packages related to Certbot [ ].
A PDF of the paper is available.

libntlm now releasing minimal source-only tarballs
Simon Josefsson wrote on his blog this month that, going forward, the libntlm project will now be releasing what they call "minimal source-only tarballs":
The XZUtils incident illustrate that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. [The] risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions [ship] generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts was not re-built[.]
Simon's post goes into further detail about how this was achieved, and describes some potential caveats and counters some expected responses as well. A shorter version can be found in the announcement for the 1.8 release of libntlm.

Distribution work
In Debian this month, Helmut Grohne filed a bug suggesting the removal of dh-buildinfo, a tool to generate and distribute .buildinfo-like files within binary packages. Note that this is distinct from the .buildinfo generation performed by dpkg-genbuildinfo. By contrast, the entirely optional dh-buildinfo generated a debian/buildinfo file that would be shipped within binary packages as /usr/share/doc/package/buildinfo_$arch.gz. Adrian Bunk recently asked about including source hashes in Debian's .buildinfo files, which prompted Guillem Jover to refresh some old patches to dpkg to make this possible, which revealed some quirks Vagrant Cascadian discovered when testing. In addition, 21 reviews of Debian packages were added, 22 were updated and 16 were removed this month, adding to our knowledge about identified issues. A number of issue types have been added, such as the new random_temporary_filenames_embedded_by_mesonpy and timestamps_added_by_librime toolchain issues. In openSUSE, it was announced that their Factory distribution enabled bit-by-bit reproducible builds for almost all parts of the package. Previously, more parts needed to be ignored when comparing package files, but now only the signature needs to be deleted. In addition, Bernhard M. Wiedemann published theunreproduciblepackage as a proper .rpm package, which allows better testing of tools intended to debug reproducibility. Furthermore, it was announced that Bernhard's work on a 100% reproducible openSUSE-based distribution will be funded by NLnet. He also posted another monthly report for his reproducibility work in openSUSE. In GNU Guix, Janneke Nieuwenhuizen submitted a patch set for creating a reproducible source tarball for Guix. That is to say, ensuring that make dist is reproducible when run from Git. [ ] Lastly, in Fedora, a new wiki page was created to propose a change to the distribution. Titled "Changes/ReproduciblePackageBuilds", the page summarises itself as a proposal whereby "A post-build cleanup is integrated into the RPM build process so that common causes of build irreproducibility in packages are removed, making most of Fedora packages reproducible."

Mailing list news
On our mailing list this month:
  • Continuing a thread started in March 2024 about the Arch Linux minimal container now being 100% reproducible, John Gilmore followed up with a post about the practical and philosophical distinctions of local vs. remote storage of the various artifacts needed to build packages.
  • Chris Lamb asked the list which conferences readers are attending these days: "After peak Covid and other industry-wide changes, conferences are no longer the 'must attend' events they previously were, especially in the area of software supply-chain security. In rough, practical terms, it seems harder to justify conference travel today than it did in mid-2019." The thread generated a number of responses which would be of interest to anyone planning travel in Q3 and Q4 of 2024.
  • James Addison wrote to the list about a quirk in Git related to its core.autocrlf functionality, thus helpfully passing on "a slightly off-topic and perhaps not of direct relevance to anyone on the list today" note that "might still be the kind of issue that is useful to be aware of if-and-when puzzling over unexpected git content / checksum issues (situations that I do expect people on this list encounter from time-to-time)".

diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, such as uploading versions 263, 264 and 265 to Debian, and made the following additional changes:
  • Don't crash on invalid .zip files, even if we encounter their badness halfway through the file and not at the time of their initial opening. [ ]
  • Prevent odt2txt tests from always being skipped due to an (impossibly) new version requirement. [ ]
  • Avoid parens-in-parens in test skipping messages. [ ]
  • Ensure that tests with >=-style version constraints actually print the tool name. [ ]
In addition, Fay Stegerman fixed a crash when there are (invalid) duplicate entries in .zip files (which was originally reported as Debian bug #1068705). [ ] Fay also added a user-visible note to a diff when there are duplicate entries in ZIP files [ ]. Lastly, Vagrant Cascadian added an external tool pointer for the zipdetails tool under GNU Guix [ ] and proposed updates to diffoscope in Guix as well [ ], which were merged as [264] [265]; fixed a regression in test coverage; and increased verbosity of the test suite [ ].
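As a hedged aside for readers new to the tool, a typical invocation looks something like this; the file names are placeholders, and --html is one of several report formats diffoscope supports:

$ diffoscope --html report.html one.deb two.deb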

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

reprotest
reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.27 was uploaded to Debian unstable by Vagrant Cascadian, who made the following additional changes:
  • Enable specific number of CPUs using --vary=num_cpus.cpus=X. [ ]
  • Consistently use 398 days for time variation, rather than choosing randomly each time. [ ]
  • Disable builds of arch:any packages. [ ]
  • Update the description for the build_path.path option in README.rst. [ ]
  • Update escape sequences for compatibility with Python 3.12. (#1068853). [ ]
  • Remove the generic upstream signing-key [ ] and update the package's signing key with the currently active team members [ ].
  • Update the packaging Standards-Version to 4.7.0. [ ]
In addition, Holger Levsen fixed some spelling errors detected by the spellintian tool [ ] and Vagrant Cascadian updated reprotest in GNU Guix to 0.7.27.

Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, an enormous number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Adjust for changed internal IP addresses at Codethink. [ ]
    • Automatically cleanup failed diffoscope user services if there are too many failures. [ ][ ]
    • Configure two new nodes at infomaniak.cloud. [ ][ ]
    • Schedule Debian experimental even less. [ ][ ]
  • Breakage detection:
    • Exclude currently building packages from breakage detection. [ ]
    • Be more noisy if diffoscope crashes. [ ]
    • Health check: provide clickable URLs in jenkins job log for failed pkg builds due to diffoscope crashes. [ ]
    • Limit graph to about the last 100 days of breakages only. [ ]
    • Fix all found files with bad permissions. [ ]
    • Prepare dealing with diffoscope timeouts. [ ]
    • Detect more cases of failure to debootstrap base system. [ ]
    • Include timestamps of failed job runs. [ ]
  • Documentation updates:
    • Document how to access arm64 nodes at Codethink. [ ]
    • Document how to use infomaniak.cloud. [ ]
    • Drop notes about long stalled LeMaker HiKey960 boards sponsored by HPE and hosted at ETH. [ ]
    • Mention osuosl4 and osuosl5 and explain their usage. [ ]
    • Mention that some packages are built differently. [ ][ ]
    • Improve language in a comment. [ ]
    • Add more notes how to query resource usage from infomaniak.cloud. [ ]
  • Node maintenance:
    • Add ionos4 and ionos14 to THANKS. [ ][ ][ ][ ][ ]
    • Deprecate Squid on ionos1 and ionos10. [ ]
    • Drop obsolete script to powercycle arm64 architecture nodes. [ ]
    • Update system_health_check for new proxy nodes. [ ]
  • Misc changes:
    • Make the update_jdn.sh script more robust. [ ][ ]
    • Update my SSH public key. [ ]
In addition, Mattia Rizzolo added some new host details. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

9 May 2024

Vincent Sanders: Bee to the blossom, moth to the flame; Each to his passion; what's in a name?

I like the sentiment of Helen Hunt Jackson in that quote, and it generally applies double for computer system names. However, I like to think that when I named the first NetSurf VM host server "phoenix" fourteen years ago, I captured the nature of its continuous cycle of replacement.
Image of the fourth phoenix server
We have been very fortunate to receive a donated server to replace the previous one every few years, and the very generous folks at Collabora continue to provide hosting for it. Recently I replaced the server for the third time. We once again were given a replacement by Huw Jones in the form of a SuperServer 6017R-TDAF system with dual Intel Xeon Ivy Bridge E5-2680v2 processors. There were even rack rails!

The project bought some NVMe drives and adaptor cards, and I attempted to arrange to swap out the server in January.

The old phoenixiii server being replaced
Here we come to the slight disadvantage of an informal arrangement where access to the system depends upon a busy third party. Unfortunately it took until May to arrange access (I must thank Vivek again for coming in on a Saturday to do this).

In the intervening time, once I realised access was going to become increasingly difficult, I decided to obtain as good a system as I could manage to reduce requirements for future access.

I turned to eBay and acquired a slightly more modern SuperServer with dual Intel Xeon Haswell E5-2680v3 processors which required purchase of 64G of new memory (Haswell is a DDR4 platform).

I had wanted to use Broadwell processors, but this exceeded my budget and would only be a 10% performance uplift. (The chassis, motherboard and memory cost £180, and another £50 for processors was just too much; maybe next time.)

graph of cpu mark improvements in the phoenix servers over time
While making the decision on the processor selection I made a quick chart of previous processing capabilities (based on a passmark comparison) of phoenix servers and was startled to discover I needed a logarithmic vertical axis. Multi core performance of processors has improved at a startling rate in the last decade.

When the original replacement was donated, I checked where the performance was limited and noticed it was mainly in disc access, which is what prompted the upgrade to NVMe (2 gigabytes a second peak read throughput). That moved the bottleneck to the processors where, even with the upgrades, it remains.

I do not really know if there is a conclusion here beyond noting that NetSurf is very fortunate as a project to have some generous benefactors, both for donating hardware and for hosting, for which I know all the developers are grateful.

Now I just need to go and migrate a huge bunch of virtual machines, and do the associated sysadmin work, to make use of these generous donations.

7 May 2024

John Goerzen: Photographic comparison: Is the Kobo Libra Colour display worse than the Kobo Libra 2?

I've been using E Ink-based ereaders for quite a number of years now. I've had my Kobo Libra 2 for a few years, and was looking forward to the Kobo Libra Colour: the first color E Ink display in a mainstream ereader line. I found the display to be a mixed bag; contrast seemed a lot worse on B&W images, and the device backlight (it's not technically a back light) seemed to cause a particular contrast reduction in dark mode. I went searching for information on this. I found a lot of videos on "Kobo Libra 2 vs Libra Colour" and so forth, but they were all pretty much useless, because of the mistakes they made. So I dug out my Canon DSLR and tripod, and set up shots. Every shot here is set at ISO 100. Every shot in the same setting has the same exposure settings, which I document. The one thing I forgot to shut off was automatic white balance; you can notice it is active if you look closely at the backgrounds, but WB isn't really relevant to this comparison anyhow. Because there has also been a lot of concern about how well fine B&W details will show up on the Kobo Libra Colour screen, I shot all photos using a PDF test image from the open source hplip package (testpage.ps.gz converted to PDF). This also rules out font differences between the devices. I ensured a full screen refresh before each shot. This is all because color E Ink is effectively a filter called Kaleido over the B&W layer. This causes dimming and some other visual effects. You can click on any image here to see a full-resolution view. The full-size images are the exact JPEG coming from the camera, with only two modifications: 1) metadata has been redacted for privacy reasons, and 2) some images were losslessly rotated after the shoot. OK, onwards!

Outdoors, bright sun, shot from directly overhead
Bright sun is ideal lighting for an E Ink display. They need no lighting at all in this scenario, and in fact, if you turn on their internal display light, it will probably not be very noticeable. Of course, this is in contrast to phone LCD screens, for which bright sunlight is the worst. Scene: Morning sunlight reaching the ereaders at an angle. The angle was sufficient so that no shadows were cast by the camera or tripod. Device light: Off on both. Exposure: 1/160, f16, ISO 100. You can see how much darker the Libra Colour is here. Though in these bright conditions, it is still plenty bright. There may actually be situations in which the Libra 2 is too bright in direct sunlight, requiring a person to squint or whatnot. Looking at the radial lines, it is a bit difficult to tell because of the difference in brightness, but I don't see a hugely obvious reduction in quality in the Libra 2. Later I have a shot where I try to match brightness, and we'll check it out again there.

Outdoors, shade, shot from directly overhead
For the next shot, I set the ereaders in shade, but still well-lit with the diffuse sunlight from all around. The first two have both device lights off. For the third, I set the device light on the Kobo Colour to 100%, full cool shade, to try to see how close I could get it to the Libra 2 brightness. (Sorry, it looks like I forgot to close the toolbar on the Colour for this set, but it doesn't modify the important bits of the underlying image.) Device light: Initially off on both. Exposure: 1/60, f6.4, ISO 100. Here you can see the light on the Libra Colour was nearly able to match the brightness on the Libra 2.
Indoors, room lit with overhead and window light, device light off
We continue to move into dimmer light with this next shot. Device light: Off on both. Exposure: 1/4, f5, ISO 100.

Indoors, room lit with overhead and window light, device light on
Now we have the first head-to-head with the device light on. I set the Libra 2 to my favorite warmth setting, found a brightness that looked good, and then tried my best to match those settings on the Libra Colour. My camera's light meter aided in matching brightness. Device light: On (Libra 2 at 40%, Libra Colour at 59%). Exposure: 1/8, f5, ISO 100. (Apparently I am terrible at remembering to dismiss menus, sigh.)

Indoors, dark room, dark mode, at an angle
The Kobo Libra Colour surprised me with its dark mode. When viewed at an oblique angle, the screen gets pretty washed out. I maintained the same brightness settings here as I did above. It is much more noticeable when the brightness is set down to my preferred nighttime level (4%), or with a more significant angle. Since you can't see my tags, the order of the photos here will be: Libra 2 (standard orientation), Colour (standard orientation), Colour (turned around). Device light: On (as above). Exposure: 1/4, f5.6, ISO 100. Notice how I said I maintained the same brightness settings as before, and yet the Libra Colour looks brighter than the Libra 2 here, whereas it looked the same in the prior non-dark-mode photos. Here's why. I set the exposure of each set of shots based on camera metering. As we have seen from the light-off photos, the brightness of a white pixel is a lot less on a Libra Colour than on the Libra 2. However, it is likely that the brightness of a black pixel is about the same. Therefore, contrast on the Libra Colour is lower than on the Libra 2. The traditional shot is majority white pixels, so to make the Libra Colour brightness match that of the Libra 2, I had to crank up the brightness on the Libra Colour to compensate for the darker white background. With me so far? Now with the inverted image, you can see what that does. It doesn't just raise the brightness of the white pixels, but it also raises the brightness of the black pixels. This is expected because we didn't raise contrast, only brightness. Also, in the last image, you can see it is brighter to the right. Again, other conditions that are more difficult to photograph make that much more pronounced. Viewing the Libra Colour from one side (but not the other), in dark mode, with the light on, produces noticeably worse contrast on one side.

Conclusions
This isn't a slam dunk. Let's walk through this: I don't think there is any noticeable loss of detail on the Libra Colour. The radial lines appeared as well defined on it as on the Libra 2. Oddly, with the backlight, some striations were apparent in the gray gradient test, but I wouldn't be using an E Ink device for clear photographic reproduction anyhow. If you read mostly black and white: if you had been using a Kobo Libra Colour and were handed a Libra 2, you would go, "Wow! What an upgrade! The screen is so much brighter!" There's little reason to get a Libra Colour. The Libra 2 might be hard to find these days, but the new Clara BW (with a 6" instead of the 7" screen of the Libra series) might be just the thing for you. The Libra 2 is at home in any lighting, from direct sun to pitch black, and has all the usual E Ink benefits (e.g., battery life measured in weeks) and drawbacks (slower refresh rate) that we're all used to.
If you are interested in photographic color reproduction mostly indoors: consider a small tablet. The Libra Colour's 4096 colors are going to appear washed out compared to what you're used to on an LCD screen. If you are interested in color content indoors and out: the Libra Colour might be a good fit. It could work well for things where superb color rendition isn't essential: for instance, news stories (the Pocket integration or Calibre's news feature could be nice there), comics, etc. In a moderately-lit indoor room, it looks like the Libra Colour's light can lead it to results that approach Libra 2 quality. So if most of your reading is in those conditions, perhaps the Libra Colour is right for you. As a final aside, I wrote in this article about the Kobo devices. I switched from Kindles to Kobos a couple of years ago due to the greater openness of the Kobo devices (you can add things like NickelMenu and KOReader to them, and they have built-in support for more useful formats), their featureset, and their cost. The top-of-the-line Kindle devices will have a screen very similar if not identical to the Libra 2's, so you can very easily consider this to be a comparison between the Oasis and the Libra Colour as well.

1 May 2024

Matthew Palmer: The Mediocre Programmer's Guide to Rust

Me: Hi everyone, my name's Matt, and I'm a mediocre programmer. Everyone: Hi, Matt. Facilitator: Are you an alcoholic, Matt? Me: No, not since I stopped reading Twitter. Facilitator: Then I think you're in the wrong room.
Yep, that's my little secret: I'm a mediocre programmer. The definition of the word "hacker" I most closely align with is "someone who makes furniture with an axe". I write simple, straightforward code because trying to understand complexity makes my head hurt. Which is why I've always avoided the more academic languages, like OCaml, Haskell, Clojure, and so on. I know they're good languages, and people far smarter than me are building amazing things with them, but by the time I hear the word "endofunctor", I've lost all focus (and most of my will to live). My preferred languages are the ones that come with less intellectual overhead, like C, PHP, Python, and Ruby. So it's interesting that I've embraced Rust with significant vigour. It's by far the most complicated language that I feel at least vaguely comfortable with using "in anger". Part of that is that I've managed to assemble a set of principles that allow me to almost completely avoid arguing with Rust's dreaded borrow checker, lifetimes, and all the rest of the dark, scary corners of the language. It's also, I think, that Rust helps me to write better software, and I can feel it helping me (almost) all of the time. In the spirit of helping my fellow mediocre programmers to embrace Rust, I present the principles I've assembled so far.

Neither a Borrower Nor a Lender Be If you know anything about Rust, you probably know about the dreaded "borrow checker". It's the thing that makes sure you don't have two pieces of code trying to modify the same data at the same time, or using a value when it's no longer valid. While Rust's borrowing semantics allow excellent performance without compromising safety, for us mediocre programmers it gets very complicated, very quickly. So, the moment the compiler wants to start talking about "explicit lifetimes", I shut it up by just using owned values instead. It's not that I never borrow anything; I have some situations that I know are borrow-safe for the mediocre programmer (I'll cover those later). But any time I'm not sure how things will pan out, I'll go straight for an owned value. For example, if I need to store some text in a struct or enum, it's going straight into a String. I'm not going to start thinking about lifetimes and &'a str; I'll leave that for smarter people. Similarly, if I need a list of things, it's a Vec<T> every time; no &'b [T] in my structs, thank you very much.
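To make that concrete, here's a minimal sketch (my names, not from the post) of what "just use owned values" looks like in a struct definition:

struct BlogPost {
    title: String,      // owned text, so no lifetime parameter needed
    tags: Vec<String>,  // owned list, so no &'b [T] to reason about
}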

Attack of the Clones Following on from the above, I've come to not be afraid of .clone(). I scatter them around my code like seeds in a field. Life's too short to spend time trying to figure out who's borrowing what from whom, if I can just give everyone their own thing. There are warnings in the Rust book (and everywhere else) about how a clone can be "expensive". While it's true that, yes, making clones of data structures consumes CPU cycles and memory, it very rarely matters. CPU cycles are (usually) plentiful and RAM (usually) relatively cheap. Mediocre programmer mental effort is expensive, and not to be spent on premature optimisation. Also, if you're coming from most any other modern language, Rust is already giving you so much more performance that you're probably ending up ahead of the game, even if you .clone() everything in sight. If, by some miracle, something I write gets so popular that the expense of all those spurious clones becomes a problem, it might make sense to pay someone much smarter than I to figure out how to make the program a zero-copy masterpiece of efficient code. Until then: clone early and clone often, I say!
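As a hedged illustration (my example, not the post's): cloning hands a function its own copy, so the original stays usable afterwards:

fn main() {
    let name = String::from("mediocre");
    // Clone rather than move, so `name` is still usable below.
    takes_ownership(name.clone());
    println!("still have: {name}");
}

fn takes_ownership(s: String) {
    println!("got: {s}");
}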

Derive Macros are Powerful Magicks If you start .clone()ing everywhere, pretty quickly you'll be hit with this error:

error[E0599]: no method named `clone` found for struct `Foo` in the current scope

This is because not everything can be cloned, and so if you want your thing to be cloned, you need to implement the method yourself. Well, sort of. One of the things that I find absolutely outstanding about Rust is the "derive macro". These allow you to put a little marker on a struct or enum, and the compiler will write a bunch of code for you! Clone is one of the available so-called "derivable traits", so you add #[derive(Clone)] to your structs, and poof! you can .clone() to your heart's content. But there are other things that are commonly useful, and so I've got a set of traits that basically all of my data structures derive:

#[derive(Clone, Debug, Default)]
struct Foo {
    // ...
}

Every time I write a struct or enum definition, that line #[derive(Clone, Debug, Default)] goes at the top. The Debug trait allows you to print a debug representation of the data structure, either with the dbg!() macro, or via the {:?} format in the format!() macro (and anywhere else that takes a format string). Being able to say "what exactly is that?" comes in handy so often; not having a Debug implementation is like programming with one arm tied behind your Aeron. Meanwhile, the Default trait lets you create an "empty" instance of your data structure, with all of the fields set to their own default values. This only works if all the fields themselves implement Default, but a lot of standard types do, so it's rare that you'll define a structure that can't have an auto-derived Default. Enums are easily handled too; you just mark one variant as the default:

#[derive(Clone, Debug, Default)]
enum Bar {
    Something(String),
    SomethingElse(i32),
    #[default]   // <== mischief managed
    Nothing,
}

Borrowing is OK, Sometimes While I previously said that I like and usually use owned values, there are a few situations where I know I can borrow without angering the borrow checker gods, and so I'm comfortable doing it. The first is when I need to pass a value into a function that only needs to take a little look at the value to decide what to do. For example, if I want to know whether any values in a Vec<u32> are even, I could pass in a Vec, like this:

fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];
    if has_evens(numbers) {
        println!("EVENS!");
    }
}

fn has_evens(numbers: Vec<u32>) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

However, this gets ugly if I'm going to use numbers later, like this:

fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];
    if has_evens(numbers) {
        println!("EVENS!");
    }
    // Compiler complains about "value borrowed here after move"
    println!("Sum: {}", numbers.iter().sum::<u32>());
}

fn has_evens(numbers: Vec<u32>) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

Helpfully, the compiler will suggest I use my old standby, .clone(), to fix this problem. But I know that the borrow checker won't have a problem with lending that Vec<u32> into has_evens() as a borrowed slice, &[u32], like this:

fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];
    if has_evens(&numbers) {
        println!("EVENS!");
    }
}

fn has_evens(numbers: &[u32]) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

The general rule I've got is that if I can take advantage of "lifetime elision" (a fancy term meaning "the compiler can figure it out"), I'm probably OK. In less fancy terms, as long as the compiler doesn't tell me to put 'a anywhere, I'm in the green. On the other hand, the moment the compiler starts using the words "explicit lifetime", I nope the heck out of there and start cloning everything in sight. Another example of using lifetime elision is when I'm returning the value of a field from a struct or enum. In that case, I can usually get away with returning a borrowed value, knowing that the caller will probably just be taking a peek at that value, and throwing it away before the struct itself goes out of scope. For example:

struct Foo {
    id: u32,
    desc: String,
}

impl Foo {
    fn description(&self) -> &str {
        &self.desc
    }
}

Returning a reference from a function is practically always a mortal sin for mediocre programmers, but returning one from a struct method is often OK. In the rare case that the caller does want the reference I return to live for longer, they can always turn it into an owned value themselves, by calling .to_owned().
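For instance (my sketch, reusing the Foo from above), the caller can upgrade the borrowed &str whenever it needs to keep the text around:

let foo = Foo { id: 1, desc: "a thing".to_string() };
let kept: String = foo.description().to_owned(); // caller now owns a copy
drop(foo);          // the Foo can go away...
println!("{kept}"); // ...and the owned copy lives on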

Avoid the String Tangle Rust has a couple of different types for representing strings, String and &str being the ones you see most often. There are good reasons for this; however, it complicates method signatures when you just want to take some sort of "bunch of text", and don't care so much about the messy details. For example, let's say we have a function that wants to see if the length of the string is even. Using the logic that since we're just taking a peek at the value passed in, our function might take a string reference, &str, like this:

fn is_even_length(s: &str) -> bool {
    s.len() % 2 == 0
}

That seems to work fine, until someone wants to check a formatted string:

fn main() {
    // The compiler complains about "expected `&str`, found `String`"
    if is_even_length(format!("my string is {}", std::env::args().next().unwrap())) {
        println!("Even length string");
    }
}

Since format! returns an owned string, String, rather than a string reference, &str, we've got a problem. Of course, it's straightforward to turn the String from format!() into a &str (just prefix it with an &). But as mediocre programmers, we can't be expected to remember which sort of string all our functions take and add & wherever it's needed, and having to fix everything when the compiler complains is tedious. The converse can also happen: a method that wants an owned String, and we've got a &str (say, because we're passing in a string literal, like "Hello, world!"). In this case, we need to use one of the plethora of available "turn this into a String" mechanisms (.to_string(), .to_owned(), String::from(), and probably a few others I've forgotten) on the value before we pass it in, which gets ugly real fast. For these reasons, I never take a String or an &str as an argument. Instead, I use the Power of Traits to let callers pass in anything that is, or can be turned into, a string. Let us have some examples. First off, if I would normally use &str as the type, I instead use impl AsRef<str>:

fn is_even_length(s: impl AsRef<str>) -> bool {
    s.as_ref().len() % 2 == 0
}

Note that I had to throw in an extra as_ref() call in there, but now I can call this with either a String or a &str and get an answer. Now, if I want to be given a String (presumably because I plan on taking ownership of the value, say because I'm creating a new instance of a struct with it), I use impl Into<String> as my type:

struct Foo {
    id: u32,
    desc: String,
}

impl Foo {
    fn new(id: u32, desc: impl Into<String>) -> Self {
        Self { id, desc: desc.into() }
    }
}

We have to call .into() on our desc argument, which makes the struct building a bit uglier, but I'd argue that's a small price to pay for being able to call both Foo::new(1, "this is a thing") and Foo::new(2, format!("This is a thing named {name}")) without caring what sort of string is involved.

Always Have an Error Enum Rust's error handling mechanism (Results everywhere), along with the quality-of-life sugar surrounding it (like the short-circuit operator, ?), is a delightfully ergonomic approach to error handling. To make life easy for mediocre programmers, I recommend starting every project with an Error enum that derives thiserror::Error, and using that in every method and function that returns a Result. How you structure your Error type from there is less cut-and-dried, but typically I'll create a separate enum variant for each type of error I want to have a different description. With thiserror, it's easy to then attach those descriptions:

#[derive(Clone, Debug, thiserror::Error)]
enum Error {
    #[error("{0} caught fire")]
    Combustion(String),
    #[error("{0} exploded")]
    Explosion(String),
}

I also implement functions to create each error variant, because that allows me to do the Into<String> trick, and can sometimes come in handy when creating errors from other places with .map_err() (more on that later). For example, the impl for the above Error would probably be:

impl Error {
    fn combustion(desc: impl Into<String>) -> Self {
        Self::Combustion(desc.into())
    }

    fn explosion(desc: impl Into<String>) -> Self {
        Self::Explosion(desc.into())
    }
}

It's a tedious bit of boilerplate, and you can use the thiserror-ext crate's thiserror_ext::Construct derive macro to do the hard work for you, if you like. It, too, knows all about the Into<String> trick.

Banish map_err (well, mostly) The newer mediocre programmer, who is just dipping their toe in the water of Rust, might write file handling code that looks like this:

fn read_u32_from_file(name: impl AsRef<str>) -> Result<u32, Error> {
    let mut f = File::open(name.as_ref())
        .map_err(|e| Error::FileOpenError(name.as_ref().to_string(), e))?;
    let mut buf = vec![0u8; 30];
    f.read(&mut buf)
        .map_err(|e| Error::ReadError(e))?;
    String::from_utf8(buf)
        .map_err(|e| Error::EncodingError(e))?
        .parse::<u32>()
        .map_err(|e| Error::ParseError(e))
}

This works great (or it probably does; I haven't actually tested it), but there are a lot of .map_err() calls in there. They take up over half the function, in fact. With the power of the From trait and the magic of the ? operator, we can make this a lot tidier. First off, assume we've written boilerplate error creation functions (or used thiserror_ext::Construct to do it for us). That allows us to simplify the file handling portion of the function a bit:

fn read_u32_from_file(name: impl AsRef<str>) -> Result<u32, Error> {
    let mut f = File::open(name.as_ref())
        // We've dropped the `.to_string()` out of here...
        .map_err(|e| Error::file_open_error(name.as_ref(), e))?;
    let mut buf = vec![0u8; 30];
    f.read(&mut buf)
        // ... and the explicit parameter passing out of here
        .map_err(Error::read_error)?;
    // ...

If that latter .map_err() call looks weird, without the |e| and such, it's passing a function as a closure, which just saves a few characters of typing. Just because we're mediocre doesn't mean we're not also lazy. Next, if we implement the From trait for the other two errors, we can make the string-handling lines significantly cleaner. First, the trait impl:

impl From<std::string::FromUtf8Error> for Error {
    fn from(e: std::string::FromUtf8Error) -> Self {
        Self::EncodingError(e)
    }
}

impl From<std::num::ParseIntError> for Error {
    fn from(e: std::num::ParseIntError) -> Self {
        Self::ParseError(e)
    }
}

(Again, this is boilerplate that can be autogenerated, this time by adding a #[from] tag to the variants you want a From impl on, and thiserror will take care of it for you.) In any event, no matter how you get the From impls, once you have them, the string-handling code becomes practically error-handling-free:

    Ok(
        String::from_utf8(buf)?
            .parse::<u32>()?
    )

The ? operator will automatically convert the error from the types returned by each method into the return error type, using From. The only tiny downside to this is that the ? at the end strips the Result, and so we've got to wrap the returned value in Ok() to turn it back into a Result for returning. But I think that's a small price to pay for the removal of those .map_err() calls. In many cases, my coding process involves just putting a ? after every call that returns a Result, and adding a new Error variant whenever the compiler complains about not being able to convert some new error type. It's practically zero effort for an outstanding outcome, for the mediocre programmer.
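For reference, here's a hedged sketch of what that #[from] version might look like (variant names as above; thiserror generates the From impls for you):

#[derive(Debug, thiserror::Error)]
enum Error {
    #[error("string wasn't valid UTF-8")]
    EncodingError(#[from] std::string::FromUtf8Error),
    #[error("number didn't parse")]
    ParseError(#[from] std::num::ParseIntError),
}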

Just Because You're Mediocre, Doesn't Mean You Can't Get Better To finish off, I'd like to point out that mediocrity doesn't imply shoddy work, nor does it mean that you shouldn't keep learning and improving your craft. One book that I've recently found extremely helpful is Effective Rust, by David Drysdale. The author has very kindly put it up to read online, but buying a (paper or ebook) copy would no doubt be appreciated. The thing about this book, for me, is that it is very readable, even by us mediocre programmers. The sections are written in a way that really clicked with me. Some aspects of Rust that I'd had trouble understanding for a long time (such as lifetimes and the borrow checker, and particularly lifetime elision) actually made sense after I'd read the appropriate sections.

Finally, a Quick Beg I'm currently subsisting on the kindness of strangers, so if you found something useful (or entertaining) in this post, why not buy me a refreshing beverage? It helps to know that people like what I'm doing, and helps keep me from having to sell my soul to a private equity firm.

26 April 2024

Dirk Eddelbuettel: RcppSpdlog 0.0.17 on CRAN: New Upstream

Version 0.0.17 of RcppSpdlog arrived on CRAN overnight and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release updates the code to version 1.14.0 of spdlog, which was released yesterday. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.17 (2024-04-25)
  • Minor continuous integration update
  • Upgraded to upstream release spdlog 1.14.0

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 April 2024

Jonathan Dowland: Biosphere

I've been enjoying Biosphere as the soundtrack to my recent "concentrated work" spells. Knives by Biosphere I remember seeing their name on playlists of yester-year: axioms, bluemars1, and (still a going concern) soma.fm's drone zone.

  1. Bluemars lives on, at echoes of bluemars

25 March 2024

Jonathan Dowland: a bug a day

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes. I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site-generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places. Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded", perhaps taking 5-10 minutes. Often I'll stumble across something that has already been fixed but not recorded as such as I go. Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user. Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.

24 March 2024

Anuradha Weeraman: This is a test

This is a test. Testing 1 2 3

21 March 2024

Ravi Dwivedi: Thailand Trip

This post is the second and final part of my Malaysia-Thailand trip. Feel free to check out the Malaysia part here if you haven't already. Kuala Lumpur to Bangkok is around 1500 km by road, so I took a Malaysia Airlines flight to Bangkok. The flight staff at Kuala Lumpur only asked me for a return/onward flight, and Thailand immigration asked a few questions but did not check any documents (obviously they checked and stamped my passport ;)). The currency of Thailand is the Thai baht, and 1 Thai baht = 2.5 Indian rupees. Thailand time is 1.5 hours ahead of Indian time (for example, if it is 12 noon in India, it will be 13:30 in Thailand). I landed in Bangkok at around 3 PM local time. Fletcher was in Bangkok at that time, about to leave for Pattaya, and we had booked the same hostel, so I took a bus to Pattaya from the airport. The next bus for which tickets were available was at 7 PM, so I took tickets for that one. The bus ticket cost 143 Thai baht. I didn't buy a SIM at the airport, thinking there must be better deals in the city. As a consequence, there was no way to contact Fletcher over the internet, although I had a few minutes of calling remaining from my international roaming pack.
A welcome sign at Bangkok's Suvarnabhumi airport.
Bus from Suvarnabhumi Airport to Jomtien Beach in Pattaya.
Our accommodation was near Jomtien Beach, so I got off at the last stop, as the bus terminates at the beach. Then I decided to walk towards my accommodation. I was using OsmAnd for navigation. However, the place was not marked on OpenStreetMap, and it turned out I missed the street my hostel was on and walked around 1 km further, chasing a similarly named but incorrect hostel on OpenStreetMap. Then I asked for help from two men sitting at a café. One of them said he would help me find the street my hostel was on, so I walked with him. He told me he had lived in Thailand for many years, but was from Kuwait. He also gave me valuable information: he told me about the shared hail-and-ride songthaews which run along Jomtien Second Road and charge 10 baht for any distance on their route. This tip significantly reduced our expenses. Further, he suggested 7-Eleven shops for buying a local SIM. Like Malaysia, Thailand has 24/7 7-Eleven convenience stores, a lot of them not even 100 m apart. The Kuwaiti person dropped me at the address where my hostel was. I tried searching for a person in charge of the hostel, and soon realized there was no reception. After asking locals for help for some time, I bumped into Fletcher, who had also come to this address and was searching for the same thing. After finding a friend, I breathed a sigh of relief. Adjacent to the property, there was a hairdresser's shop. We went there and asked about the property. The woman called the owner, who told us the required passcodes to get inside. Our accommodation was a room on the second floor, which required a passcode to open. We entered the passcode and got into the room. So, we stayed at this hostel which had no reception; due to this, it took 2 hours to find our room and get in. It reminded me of a difficult experience I had in Albania, where Akshat and I were not able to find our apartment on one of the hottest days, and the owner didn't know our language. Traveling from the place where the bus dropped me to the hostel, I saw streets filled with bars and massage parlors, which was expected. Prostitutes were everywhere. We went out at night towards the beach and also roamed around 7-Elevens to buy a SIM card for myself. I got a SIM with 7 days of unlimited internet for 399 baht. It turns out that the rates for SIM cards at the airport were not so different from inside the city.
Road near Jomtien beach in Pattaya
Photo of a songthaew in Pattaya. There are shared songthaews which run along Jomtien Second Road and take 10 baht to go anywhere on the route.
Jomtien Beach in Pattaya.
In terms of speaking English, locals didn't know English at all, in both Pattaya and Bangkok. I normally don't expect locals to know English in a non-English-speaking country, but the fact that Bangkok is one of the most visited places by tourists made me expect locals to know some English. Talking to locals is an integral part of travel for me, which I couldn't do a lot of in Thailand. This aspect is much more important for me than going to touristy places. So, we were in Pattaya. The next morning, Fletcher and I went to the Tiger Park using a shared songthaew. After that, we planned to visit the Pattaya Floating Market, which is near the Tiger Park, but we felt the ticket prices were higher than the experience was worth. Fletcher had to leave for Bangkok that day. I suggested he go to Suvarnabhumi Airport from the Jomtien Beach bus terminal (this was the route I took on my last day, in the opposite direction) to avoid traffic congestion inside Bangkok, as he could follow up with the metro once he reached the airport. From the floating market, we walked in sweltering heat to reach Jomtien Beach. I tried asking for a lift and eventually succeeded when a scooty stopped, and surprisingly the rider gave a ride to both of us. He was from Delhi, so maybe that's the reason he stopped for us. Then we took a songthaew to the bus terminal and, after having lunch, Fletcher left for Bangkok.
A welcome sign at Pattaya Floating market.
This Korean Vegetasty noodles pack was yummy and was available at many 7-Eleven stores.
The next day I went to Bangkok, but Fletcher had already left for Kuala Lumpur. Here I had booked a private room in a hotel (instead of a hostel) for four nights, mainly because of my luggage. This cost 5600 INR for four nights. It was 2 km from the metro station, which I walked both ways. In Bangkok, I visited Sukhumvit and Siam by metro. Going to some areas requires crossing the Chao Phraya river. For this, I took the Chao Phraya Express Boat to places like Khao San Road and Wat Arun. I would recommend taking the boat ride, as it had very good views. In Bangkok, I met a person from Pakistan staying in my hotel, and so here also I got some company. But by the time I met him, my days were almost over. So, we went to a random restaurant selling Indian food, where we ate a paneer dish with naan; the person running the restaurant was from Myanmar.
Wat Arun temple stamps your hand upon entry
Wat Arun temple
Khao San Road
A food stall at Khao San Road
Chao Phraya Express Boat
For eating, I mainly relied on fruits and convenience stores. Bananas were very tasty. This was the first time I saw banana flesh that was yellow. Mangoes were delicious and pineapples were smaller and flavorful. I also ate rose apple, which I had never had before. I had chhole kulche once in Sukhumvit. That was a little expensive, as it cost 164 baht. I also used to buy premix coffee packets from 7-Eleven convenience stores and prepare them inside the stores.
Banana with yellow flesh
Fruits at a stall in Bangkok
Trimmed pineapples from Thailand.
Corn in Bangkok.
A board showing coffee menu at a 7-Eleven store along with rates in Pattaya.
In this section of 7-Eleven, you can buy a premix coffee and mix it with hot water provided at the store to prepare.
My booking from Bangkok to Delhi was on an Air India flight, and they were serving alcohol on the flight. I chose red wine, and this was my first time having alcohol on a flight.
Red wine being served in Air India

Notes
  • In this whole trip spanning two weeks, I did not pay for drinking water (except once in Pattaya, where it was 9 baht) or for toilets. Bangkok and Kuala Lumpur have plenty of malls where you should find a free-of-cost toilet nearby. For drinking water, I relied mainly on my accommodation providing refillable water for my bottle.
  • Thailand seemed more expensive than Malaysia on average. Malaysia had discounted prices due to the Chinese New Year.
  • I liked Pattaya more than Bangkok, maybe because Pattaya has a beach and Bangkok doesn't. Pattaya seemed more lively, and I could meet and talk to a few people, as opposed to Bangkok.
  • The Chao Phraya Express Boat costs 150 baht for a one-day pass, with which you can hop on and off any boat.

20 March 2024

Dirk Eddelbuettel: ciw 0.0.2 on CRAN: Updates

A first revision of the still only one-week-old (at CRAN) package ciw has been released to CRAN! It provides a single (efficient) function, incoming() (now along with an alias ciw()), which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with a (still draft) wrapper ciw.r that will be part of the next littler release. For example, when I run it right now as I type this, I see (typically less than one second later)
edd@rob:~$ ciw.r 
    Folder                     Name                Time   Size         Age
    <char>                   <char>              <POSc> <char>  <difftime>
1: pretest instantiate_0.2.2.tar.gz 2024-03-20 13:29:00    17K  0.07 hours
2: recheck   tinytable_0.2.0.tar.gz 2024-03-20 12:50:00   565K  0.72 hours
3: pending      Matrix_1.7-0.tar.gz 2024-03-20 12:05:00   2.3M  1.47 hours
4: recheck      survey_4.4-2.tar.gz 2024-03-20 02:02:00   2.2M 11.52 hours
5: waiting   equateIRT_2.4.0.tar.gz 2024-03-19 17:00:00   895K 20.55 hours
6: pending   ravetools_0.1.5.tar.gz 2024-03-19 12:06:00   1.0M 25.45 hours
7: waiting     glmmTMB_1.1.9.tar.gz 2024-03-18 16:04:00   4.2M 45.48 hours
edd@rob:~$ 
See ciw.r --help or ciw.r --usage for more. Alternatively, in your R session, you can call ciw::incoming() (or now ciw::ciw()) for the same result (and/or load the package first). This release adds some packaging touches, brings the new alias ciw() as well as a state variable with all (known) folder names, and makes some internal improvements for dealing with error conditions. The NEWS entry follows.

Changes in version 0.0.2 (2024-03-20)
  • The package README and DESCRIPTION have been expanded
  • An alias ciw can now be used for incoming
  • Network error handling is now more robust
  • A state variable known_folders lists all CRAN folders below incoming

Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Dowland: aerc email client

my aerc
I started looking at aerc, a new terminal mail client, in around 2019. At that time it was promising, but ultimately not ready yet for me, so I put it away and went back to neomutt, which I have been using (in one form or another) all century. These days, I use neomutt as an IMAP client, which is perhaps what it's worst at: prior to that, and in common with most users (I think), I used it to read local mail, either fetched via offlineimap or directly on my mail server. I switched to using it as a (slow, blocking) IMAP client because I got sick of maintaining offlineimap (or mbsync), and I started to use neomutt to read my work mail, which was too large (and rate-limited) for local syncing. This year I noticed that aerc had a new maintainer who was presenting about it at FOSDEM, so I thought I'd take another look. It's come a long way: far enough to actually displace neomutt for my day-to-day mail use. In particular, it's a much better IMAP client. I still reach for neomutt for some tasks, but I'm now using aerc for most things. aerc is available in Debian, but I recommend building from upstream source at the moment, as the project is quite fast-moving.

9 March 2024

Reproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.

Reproducible Builds at FOSDEM 2024 Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. That wasn't the only talk related to Reproducible Builds, however; please see our comprehensive FOSDEM 2024 news post for the full details and links.

Maintainer Perspectives on Open Source Software Security Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security, written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation, sports an infographic which mentions that "56% of [polled] projects support reproducible builds".

Mailing list highlights From our mailing list this month:

Distribution work In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed, adding to Debian's knowledge about identified issues. A number of issue types were updated as well. [ ][ ][ ][ ] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that "all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)" and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in-depth to a query about a previous report.
Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the project's README file lists a number of fields that will "always or almost always vary", and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.
Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files leftover by the Sphinx documentation generator which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.
Elsewhere, Bernhard M. Wiedemann posted another monthly update for his work elsewhere in openSUSE.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 256, 257 and 258 to Debian and made the following additional changes:
  • Use a deterministic name instead of trusting gpg s use-embedded-filenames. Many thanks to Daniel Kahn Gillmor dkg@debian.org for reporting this issue and providing feedback. [ ][ ]
  • Don t error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). [ ]
  • Don t try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. [ ]
  • Fix compatibility with pytest 8.0. [ ]
  • Temporarily fix support for Python 3.11.8. [ ]
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). [ ]
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. [ ]
  • Expand an older changelog entry with a CVE reference. [ ]
  • Make test_zip black clean. [ ]
In addition, James Addison contributed a patch to parse the headers from the diff(1) correctly [ ][ ] thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to version 255, 256, and 258, and updated trydiffoscope to 67.0.6.

reprotest reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:
  • Create a (working) proof of concept for enabling a specific number of CPUs. [ ][ ]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [ ][ ]
  • Support a new --vary=build_path.path option. [ ][ ][ ][ ]

Website updates A number of improvements were made to our website this month.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [ ][ ]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. [ ]
    • Add an Erlang package set. [ ]
  • Other changes:
    • Grant Jan-Benedict Glaw shell access to the Jenkins node. [ ]
    • Enable debugging for NetBSD reproducibility testing. [ ]
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. [ ]
    • Revert reproducible nodes: mark osuosl2 as down . [ ]
    • Thanks again to Codethink, for they have doubled the RAM on our arm64 nodes. [ ]
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. [ ]
    • Add the opemwrt-target-tegra and jtx task to the list of zombie jobs. [ ][ ]
Vagrant Cascadian also made the following changes:
  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [ ][ ][ ]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [ ][ ] [ ][ ]
In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [ ], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [ ] and Zheng Junjie added a link to the GNU Guix tests [ ]. Lastly, node maintenance was performed by Holger Levsen [ ][ ][ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

20 February 2024

Jonathan Dowland: Propaganda A Secret Wish

How can I not have done one of these for Propaganda already?
Propaganda: A Secret Wish, and 12"s of Duel and p:Machinery
Propaganda/A Secret Wish is criminally underrated. There seem to be a zillion variants of each track, which keeps completionists busy. Of the variants of Jewel/Duel/etc., I'm fond of the 03:10, almost instrumental mix of Jewel, preferring the lyrics to be exclusive to the more radio-friendly Duel (04:42); I don't need them conflating (Jewel 06:21); but there are further depths I've yet to explore (the Do Well cassette mix, the 20:07 The First Cut / Duel / Jewel (Cut Rough) / Wonder / Bejewelled mega-mix...). I recently watched The Fall of the House of Usher, which I think has Poe lodged in my brain, which is how this album popped back into my consciousness this morning, with the opening lines of "Dream within a Dream". But are they Goth?

19 February 2024

Matthew Garrett: Debugging an odd inability to stream video

We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
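For the curious, this sort of MSS clamping is commonly done with iptables' TCPMSS target; a sketch (the hard-coded value assumes Wireguard's 1420-byte MTU minus 40 bytes of TCP/IP headers, and --clamp-mss-to-pmtu is the more general alternative):

# Rewrite the MSS on TCP SYN packets so payloads fit the VPN link
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380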

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)


7 February 2024

Reproducible Builds: Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.

How we executed a critical supply chain attack on PyTorch John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform "used by titans like Google, Meta, Boeing, and Lockheed Martin":
Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies; the list goes on. In short, it was bad. Quite bad.
The attack pivoted on PyTorch's use of "self-hosted runners", as well as submitting a pull request to address a trivial typo in the project's README file, to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.

New Arch Linux forensic filesystem tool On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images. Called archlinux-userland-fs-cmp, the tool is "supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt". Crucially, however, "at no point is any file from the mounted filesystem eval'd or otherwise executed. Parsers are written in a memory safe language." More information about the tool can be found in their announcement message, as well as on the tool's homepage. A GIF of the tool in action is also available.

Issues with our SOURCE_DATE_EPOCH code? Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:
I'm not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now. Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.
Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute with further info.
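For background, SOURCE_DATE_EPOCH is simply an environment variable holding a Unix timestamp; here's a hedged sketch of reading it, in Rust rather than the C snippet under discussion, where checked parsing sidesteps the overflow pitfalls the thread is about:

use std::env;

// Returns the SOURCE_DATE_EPOCH timestamp if set and valid.
// parse::<i64>() rejects overflow and trailing garbage, which is
// the class of bugs the C snippet has to handle by hand.
fn source_date_epoch() -> Option<i64> {
    env::var("SOURCE_DATE_EPOCH").ok()?.parse::<i64>().ok()
}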

Distribution updates In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that "all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)". Additionally, 7 of the 8 bookworm images from the official download link build reproducibly at any later time. In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month, adding to our knowledge about identified issues. Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Community updates A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term "nondeterministic" [ ] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds [ ].
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 254 and 255 to Debian, but focused on triaging and/or merging code from other contributors. This included adding support for comparing eXtensible ARchive (.XAR/.PKG) files courtesy of Seth Michael Larson [ ][ ], as well as considerable work from Vekhir in order to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [ ][ ][ ][ ]. Thanks!

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Reduce the number of arm64 architecture workers from 24 to 16. [ ]
    • Use diffoscope from the Debian release being tested again. [ ]
    • Improve the handling when killing unwanted processes [ ][ ][ ] and be more verbose about it, too [ ].
    • Don t mark a job as failed if process marked as to-be-killed is already gone. [ ]
    • Display the architecture of builds that have been running for more than 48 hours. [ ]
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. [ ]
  • Package rescheduling changes:
    • Reduce IRC notifications to 1 when rescheduling due to package status changes. [ ]
    • Correctly set SUDO_USER when rescheduling packages. [ ]
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). [ ]
  • OpenWrt-related changes:
    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [ ][ ]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. [ ]
  • Misc:
    • As it's January, set the real_year variable to 2024 [ ] and bump various copyright years as well [ ].
    • Fix a large (!) number of spelling mistakes in various scripts. [ ][ ][ ]
    • Prevent Squid and Systemd processes from being killed by the kernel's OOM killer. [ ]
    • Install the iptables tool everywhere, else our custom rc.local script fails. [ ]
    • Cleanup the /srv/workspace/pbuilder directory on boot. [ ]
    • Automatically restart Squid if it fails. [ ]
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [ ][ ]
Significant amounts of node maintenance was performed by Holger Levsen (eg. [ ][ ][ ][ ][ ][ ][ ] etc.) and Vagrant Cascadian (eg. [ ][ ][ ][ ][ ][ ][ ][ ]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. [ ] Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. [ ]

Upstream patches The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches. Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian; this was quickly fixed, however. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Jonathan Dowland: carbon

I got a new work laptop this year: a Thinkpad X1 Carbon (Gen 11). It wasn't the one I wanted: I'd ordered an X1 Nano, which had a footprint very reminiscent to me of my beloved x40. Never mind! The Carbon is lovely. Despite being ostensibly the same size as the T470s it's replacing, it's significantly more portable, and more capable. The two USB-A ports, as well as the full-size HDMI port, are welcome and useful (over the Nano). I used to keep notes on setting up Linux on different types of hardware, but I haven't really bothered now for years. Things Just Work. That's good! My old machine naming schemes are stretched beyond breaking point (and I've re-used my favourite hostname, qusp, one too many times), so I went for something new this time: riffing on Carbon, I settled (for now) on carbyne, a carbon allotrope which is of interest to nanotechnologists. (Seems appropriate.)
