
4 September 2024

Samuel Henrique: DebConf24 was fun!: Security, curl, wcurl, Debian's quality

A picture of a badger2040w with Samuel's badge and the curl manpage PCB on the side

tl;dr DebConf24 was fun! A playlist of all of my talks, with subtitles (en, pt-br) and chapters is available on YouTube.

Overview DebConf24 was held in Busan, South Korea, from Sunday, July 28th to Sunday, August 4th, 2024. As usual for DebConfs, I had a great time meeting my friends, but I also met new people and got to learn a bit about the interesting things they're working on. I ended up getting too excited during the talk submission stage of the conference and as a result I presented 5 different activities (3 talks, 1 BoF and 1 lightning talk). Since I was too busy with the presentations, I did not have a lot of time to actually hang out with folks, or even to go out in the city; I guess I've learned my lesson for next time. The main purpose of this post is to write about all of the things I presented at the conference. I wanted to list some of the interesting talks I watched, but I would not be able to be fair, as I'm sure I would miss some. You can get the schedule and the recordings of any talks from the conference's website: https://debconf24.debconf.org/schedule/

wcurl Lightning Talk The most fun of my presentations. During the second-to-last day of the conference, I asked Sergio Durigan Junior <sergiodj> for help setting up a URL containing whitespace and redirecting it to wcurl's manpage. I then did a little demo to showcase why I (and many others) struggle with downloading things with curl, and how wcurl solves that. https://www.youtube.com/watch?v=eM8M5qa4pPM

Fixing CVEs on Debian: Everything you probably know already I've always felt like DebConf was missing security-related talks, so I decided to do something about it and presented a few of the things I've learned when fixing CVEs for Debian. This is an area where we don't get a lot of new contributors; I'm trying to change that, and this talk can be used to introduce newcomers to it. https://www.youtube.com/watch?v=XzNVVILVyUM

The secret sauce of Debian Debian is not very vocal about all of the nice things it has regarding quality assurance, testing, and CI, even though it's state of the art in a lot of areas. This talk is an initial step towards making people aware of the cool things happening behind the scenes. Ideally we should have it all well documented somewhere. https://www.youtube.com/watch?v=x_X2IBnpjic

"I use Debian BTW": fzf, tmux, zoxide and friends One of my earliest good memories of Debian was when it started coming with a colored PS1 by default, I still remember the feeling of relief whenever I jumped into a Debian server and didn't have to deal with a black and white PS1. There's still a lot of room for Debian to ship better defaults, and I think some of them can actually happen. This talk is a bit of a silly one where I'm just making people aware of the existence of a few Golang/Rust CLI tools, and also some dotfiles configurations that should probably be the default. https://www.youtube.com/watch?v=tfto3Seokn4

curl The curl project does such a great job with their security advisories that it will likely never receive the amount of praise it deserves, but I did my best at mentioning it throughout my CVEs talk. Maybe I will write more extensively about this someday, but in case I don't:
There's no other project that so consistently mentions the exact range of commits affected by a given CVE. Forget about whether the versions are EOL; curl doesn't have LTS releases, yet they do such a great job at clearly documenting their CVEs that I would take that over having LTS releases any time (that's for curl at least, I acknowledge some types of projects have a different need for LTS releases). Not only that, but they are also always careful to explain alternative mitigations such as configuration changes, build flags that defuse the exploitation, or parameters you should not use.
Just like we tend to do every time we meet, the other Debian curl maintainers and I spent the first 2 or 3 days of the conference talking about how we wanted to eventually meet up to discuss the package. It was going to be informal, maybe during the Cheese and Wine party, but then I realized we should make it part of the official schedule, which would also give us the recordings for later. And so the "curl maintainers BoF" happened, where we spoke about HTTP3, GnuTLS, wcurl and other things. https://www.youtube.com/watch?v=fL7hSypUTdM

wcurl Right after that BoF, Daniel Stenberg asked if we were interested in having wcurl adopted into curl, which we definitely were, so wcurl is now part of the curl project. Daniel was also kind enough to design a logo for the project, which makes me especially happy because I can retire my own attempt at a logo (which I had to redo every few days): A laptop with a curl and a GoHorse sticker, there's a 'w' handwritten with a marker on the right side of the curl sticker, making it 'wcurl' And here is the new logo: 'wcurl' written with the same font and colors as the curl logo, with the 'w' being green instead of blue, and a download icon at the end Much better, I would say :)

curl Swag DebConf24 was my chance to forward some curl swag items to the other curl maintainers, so both Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles> got the curl-up t-shirt and the very cool curl PCB coaster, both gifted by Daniel Stenberg. Unfortunately I didn't have any of that for DebConf attendees, but I did drop loads of curl stickers at the stickers table; they were gone very quickly. A table full of different stickers, curl stickers can be seen over the whole table

For the future I used to think the most humbling experience you could have as someone who presented a talk was having to watch it yourself: you notice a lot of mistakes and you instantly think about things that should be done differently. It turns out the most humbling thing to do is actually to write subtitles for your talks; I noticed every single mistake, often multiple times. So after spending more than 30 hours writing the subtitles for both English and Brazilian Portuguese for my talks, I feel like it's going to be much easier to avoid committing the same mistakes again. After some time you stop feeling shame about those mistakes and you're just left with feelings of annoyance, and at that point it becomes easier to consciously avoid them. I am collecting a list of things I wish I had done differently in all of those talks, so if I end up presenting any one of them again, it will be an improved version. A picture from the top of a group of conference attendees, there's about 150 people in the picture

8 August 2024

Jonathan Carter: DebConf24 Busan, South Korea

I'm finishing typing up this blog entry hours before my last 13-hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf. So this is the first DebConf in a long time where I didn't give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected it would. In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it's nice to be excited about some ideas again that would otherwise be fading.

Recovering DPL It's a bit of a gear shift having been DPL for 4 years, and on the DebConf Committee for nearly 5 years before that, and then being at DebConf while some issue arises (as it always does during a conference). At first I jump into high alert mode, but then I have to remind myself it's not your problem anymore and let others deal with it. It was nice spending a little in-person time with Andreas Tille, our new DPL; we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas, I hope to finish all that up by the end of August. During the Bits from the DPL talk, the usual question came up whether Andreas will consider running for DPL again, to which he just responded with a slide saying "Maybe". I think it's a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything. Also, having been DPL for four years, I have a lot to say about it, and I think there's a lot we can fix in the role, or at least discuss. If I had the bandwidth for it I would have scheduled a BoF for it, but I'll very likely do that for the next DebConf instead!

Video team I set up the standby loop for the video streaming setup. We call it loopy; it's a bunch of OBS scenes that show announcements, sponsors, the schedule and some social content. I wrote about it back in 2020, but it's evolved quite a bit since then, so I'm probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I'll summarize everything before then.

It would've been great if we could have had some displays in social areas that could show talks, the loop and other content, but we were just too pressed for time for that. This year's DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see how, for the most part, most attendees were very sympathetic to some rough edges (but I digress). I added more of the OBS machine setup to the videoteam's ansible repository, so as of now it just needs an ansible setup and the OBS data and it's good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks. This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn't have much time for in recent DebConfs, and it's been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)
Baekyongee is the university mascot, who is visible throughout the university. So of course we included this four-legged whale creature on the loop too!

Packaging I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:
  • Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
  • Worked a bit on bcachefs-tools (updating to 1.9.4), but it has a build failure that I need to look into (we might need a newer bindgen). Update: I'm probably going to ROM this package soon, it doesn't seem suitable for packaging in Debian.
  • Calamares: Tested a fix for encrypted installs, and uploaded it.
  • Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it's still in backports-NEW).
  • Backported obs-gradient-source for bookworm.
  • Did some initial packaging on Cambalache, I'll upload to unstable once wlroots (0.18) hits unstable.
  • Pixelorama 1.0 I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few blockers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we're still on the 3 series in Debian, so I'll upload this once Godot 4.2 hits at least experimental. Godot software/games are otherwise quite easy to run, it's basically just source code / data that is installed and then run via godot-runner (the godot3-runner package in Debian).

BoFs Python Team BoF Link to the etherpad / pad archive link and video can be found on the talk page: https://debconf24.debconf.org/talks/31-python-bof/ The session ended up being extended to a second part, since all the issues didn't fit into the first session. I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition. There was a discussion whether Python 3.13 could still make it to testing in time for the freeze, and it seems that there is consensus that it can, although likely with the new experimental features (like the optional disabling of the global interpreter lock, and the just-in-time compiler) disabled. I learned for the first time about the dead batteries project, PEP-0594, which removes ancient, mostly superseded modules from the Python standard library. There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions, which imho made it the same as the SHOULD proposal). There's an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers. Louis-Philippe Véronneau was added as a new team admin and there was some discussion on various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)
    Debian.net BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/37-debiannet-team-bof Debian Developers can set up services on subdomains of debian.net, but a big problem we've had before was that developers were on their own for hosting those services. This meant that they either hosted them on their DSL/fiber connection at home, paid for the hosting themselves, or hosted them at different services, which became an accounting nightmare when claiming back the used funds. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, which is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian. We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don't want lots of spurious requests). Last year we did a census to check which of the instances were still used, whether they received system updates, and to ask whether they are performing backups. It went well and some issues were found along the way, so we'll be doing that again. We also gained two potential volunteers to help run things, which is great.
    Debian Social BoF Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/34-debiansocial-bof We discussed the services we run; you can view the current state of things at: https://wiki.debian.org/Teams/DebianSocial Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too, so we'll do some comparison of features and maintenance and find a way forward. At the time when Pleroma was installed, it was way ahead in terms of moderation features. Pixelfed is doing well and chugging along nicely, we should probably promote it more. Peertube is working well, although we learned that we still don't have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up. We're removing writefreely and plume. Nice concepts, but they didn't get much traction yet, and no one who signed up for these actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not. The WordPress multisite instance has some mild use, otherwise we haven't had any issues. Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It's very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding; this will also be a lot easier to manage once we have it on the new host. Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created this on debian.social during the initial height of COVID-19, when we didn't have the debian.net hosting yet), although in practice it doesn't really matter where it lives. Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we'll be doing that shortly.
    Debian Foundation Discussion BoF This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It's way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle.
The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it's DebConf again. I also introduced my intention to join the Debian Partners delegation. When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on top of the usual DPL roles, so I'm pursuing this with the Debian Partners team; more on that some other time. This session wasn't recorded, but if you feel like you missed something, don't worry, all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room though that we should push forward on this, and not reach another DebConf where we didn't make progress on formalising Debian's structure more.

    Social Conference Dinner
    Conference Dinner Photo from Santiago
    The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat. Cheese and Wine Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :) Since I'm running out of time, I'll add some more photos to this post some time after publishing it :P Group Photo As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian's GitLab instance.
    Debian annual conference Debconf 24, Busan, South Korea
    Photography: Aigars Mahinovs aigarius@debian.org
    License: CC-BYv3+ or GPLv2+
    Talking Ah yes, talking to people is a big part of DebConf, but I didn't keep track of it very well.
    • I mostly listened to Alper a bit about his ideas for his talk on the Debian Installer.
    • I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
    • Listened to Gunnar and Julian have a discussion about GPG and APT which was interesting.
    • I learned that you can learn Hangul, the Korean alphabet, in about an hour or so (I wish I knew that in all my years of playing StarCraft II).
    • We had the usual continuous keysigning party. Besides its intended function, this is always a good ice breaker and a way for shy people to meet other shy people.
    • and many other fly-by discussions.

    Stuff that didn t happen this DebConf
    • loo.py A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we'd ever replace OBS for loopy. I was hoping to have some time to hack on this, and try to recreate the current loopy in loo.py, but didn't have the time.
    • toetally This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would've been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on it further. I'll try to have something ready for the next mini conf though.
    • extrepo on debian live I think we should have extrepo installed by default on desktop systems. I meant to start a discussion on this, but perhaps it's just time I go ahead and do it and announce it.
    • Live stream to peertube server It would've been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
    • Desktop Egg There was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660). I have an idea that I meant to implement months ago, but too many things got in the way. It's based on Juliette Taka's Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn't hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.
    So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!

      Louis-Philippe Véronneau: A Selection of DebConf24 Talks

      DebConf24 is now over! I'm very happy I was able to attend this year. If you haven't had time to look at the schedule yet, here is a selection of talks I liked.
      What happens if I delete setup.py?: a live demo of upgrading to PEP-518 Python packaging A great talk by Weezel showcasing how easy it is to migrate to PEP-518 for existing Python projects. This is the kind of thing I've been doing a lot when packaging upstream projects that still use setup.py. I encourage you to send this kind of patch upstream, as it makes everyone's life much easier.
      Debian on Chromebooks: What's New and What's Next? A talk by Alper Nebi Yasak, who has done great work on running Debian and the Debian Installer on Chromebooks. With Chromebooks being very popular machines in schools, it's nice to see people working on a path to liberate them.
      Sequoia PGP, sq, gpg-from-sq, v6 OpenPGP, and Debian I had the chance to see Justus' talk on Sequoia, an OpenPGP implementation in Rust, at DebConf22 in Kosovo. Back then, the conclusion was that sq wasn't ready for production yet. Well, it seems it now is! This in-depth talk goes through the history of the project and its goals. There is also a very good section on the current OpenPGP/LibrePGP schism.
      Chameleon - the easy way to try out Sequoia - OpenPGP written in Rust A very short talk by Holger on Chameleon, a tool to make migration to Sequoia easier. TL;DW: apt install gpg-from-sq
      Protecting OpenPGP keyservers from certificate flooding Although I used to enjoy signing people's OpenPGP keys, I completely gave up on this practice around 2019 when dkg's key was flooded with bogus certifications and have been refusing to do so since. In this talk, Gunnar talks about his PhD work on fixing this issue and making sure we can eventually restore this important function on keyservers.
      Bits from the DPL Bits from the DPL! A DebConf classic.
      Linux live patching in Debian Having to reboot servers after kernel upgrades is a hassle, especially with machines that have encrypted disk drives. Although kernel live patching in Debian is still a work in progress, it is encouraging to see people trying to fix this issue.
      "I use Debian BTW": fzf, tmux, zoxide and friends A fun talk by Samuel Henrique on little changes and tricks one can make to their setup to make life easier.
      Ideas to Move Debian Installer Forward Another in-depth talk by Alper, this time on the Debian Installer and his ideas to try to make it better. I learned a lot about the d-i internals!
      Lightning Talks Lightning talks are always fun to watch! This year, the following talks happened:
      1. Customizing your Linux icons
      2. A Free Speech tracker by SFLC.IN
      3. Desktop computing is irrelevant
      4. An introduction to wcurl
      5. Aliasing in dpkg
      6. A DebConf art space
      7. Tiny Tapeout, Fomu, PiCI
      8. Data processing and visualisation in the shell

      Is there a role for Debian in the post-open source era? As an economist, I've been interested in Copyright and business models in the Free Software ecosystem for a while. In this talk, Hatta-san and Bruce Perens discuss the idea of alternative licences that are not DFSG-free, like Post-Open.

      30 July 2024

      Russell Coker: Links July 2024

      Interesting Scientific American article about the way that language shapes thought processes and how it was demonstrated in eye tracking experiments with people who have Aboriginal languages as their first language [1]. David Brin wrote an interesting article Do We Really Want Immortality [2]. I disagree with his conclusions about the politics though. Better manufacturing technology should allow decreasing the retirement age while funding schools well. Scientific American has a surprising article about the differences between Chimp and Bonobo parenting [3]. I'd never have expected Chimp moms to be protective. Sam Varghese wrote an insightful and informative article about the corruption in Indian politics and the attempts to silence Australian journalist Avani Dias [4]. WorksInProgress has an insightful article about the world's first around the world solo yacht race [5]. It has some interesting ideas about engineering. Htwo has an interesting video about adverts for fake games [6]. It's surprising how they apparently make money from advertising games that don't exist. Elena Hashman wrote an insightful blog post about Chronic Fatigue Syndrome [7]. I hope they make some progress on curing it soon. The fact that it seems similar to long Covid, which is quite common, suggests that a lot of research will be applied to that sort of thing. Bruce Schneier wrote an insightful blog post about the risks of MS Copilot [8]. Krebs has an interesting article about how Apple does Wifi AP based geo-location and how that can be abused for tracking APs in warzones etc. Bad Apple! [9]. Bruce Schneier wrote an insightful blog post on How AI Will Change Democracy [10]. Charles Stross wrote an amusing and insightful post about MS Recall titled Is Microsoft Trying to Commit Suicide [11]. Bruce Schneier wrote an insightful blog post about seeing the world as a data structure [12]. Luke Miani has an informative YouTube video about eBay scammers selling overpriced MacBooks [13]. The Yorkshire Ranter has an insightful article about Ronald Coase and the problems with outsourcing big development contracts as an array of contracts without any overall control [14].

      10 July 2024

      Russell Coker: Computer Advances in the Last Decade

      I wrote a comment on a social media post where someone claimed that there have been no computer advances in the last 12 years, which got long, so it's worth a blog post.

      In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging, so all recent laptops support USB-C docks, and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common, and higher than 4K is cheap for some use cases such as ultra wide. 4K TVs are cheap and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning media hard drives are obsolete; SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap.

      12 years ago smart phones were very limited and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. 5yo Android phones can run the latest apps and take high quality pics. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased their minimum security support for their phones, and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3 released in 2012, which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than the S3). The Samsung Galaxy S4 released in 2013 was one of the first phones to have FullHD resolution, which is high enough that most people can't easily recognise the benefits of higher resolution. It wasn't until 2015 that phones with 4G of RAM became common, which is enough to be adequate for most phone use today.

      Now that 16G of RAM is affordable in laptops, running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more but we still aren't getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer. In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production; I was using it in production then and while I didn't lose any data from bugs I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use. DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4, which wasn't particularly expensive, while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available to again increase performance and size while also making DDR4 cheap on the second hand market.

      This isn't a comprehensive list of all advances in the computer industry over the last 12 years or so, it's just some things that seem particularly noteworthy to me.
Please comment about what you think are the most noteworthy advances I didn't mention.

      4 July 2024

      Samuel Henrique: Debian's curl now supports HTTP3

      tl;dr Starting with curl 8.8.0-2, you can now use HTTP3.
      curl --http3-only https://example.com
      
      Or, if you would like to try it out in a container:
      podman run debian:unstable apt install --update -y curl && curl --http3-only https://example.com
      
      (in case you haven't noticed, apt now has the --update option for the upgrade and install commands, although it's not available on stable yet)
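
      If you want to double-check that HTTP/3 was actually negotiated, curl's --write-out variables can report the protocol version that was used. This is just an illustrative invocation (assuming a curl build with HTTP3 support, as in 8.8.0-2); it prints "3" when HTTP/3 was used:
      curl --http3-only -sS -o /dev/null -w '%{http_version}\n' https://example.com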

      Availability
      • Debian unstable - Since 2024-07-02
      • Debian testing - Since 2024-07-18
      • Debian 12/bookworm backports - Expected by the end of August 2024.
      • Debian 12/bookworm - Due to the mechanisms we have in place to make sure Debian stable is in fact stable, we will never be able to ship this in the regular repository. Users can make use of the backports repositories instead.
      • Debian derivatives - Rolling releases will get it by the time it's on Debian testing (e.g.: Kali Linux). Stable derivatives only in their next major release.

      The challenge HTTP3 is brand new, well... not really, but at least fresh enough that I'm not aware of any other Linux distribution supporting it on curl. The reason is likely two-fold:
      1. OpenSSL is not there yet OpenSSL still doesn't have proper HTTP3 support, and given that OpenSSL is so widely used, almost every curl distributor/packager will build curl with it, and thus changing the TLS backend to something else is risky. Unfortunately, proper support for the OpenSSL libcurl is unlikely to come anytime before the end of this year; the OpenSSL performance is not good enough yet as of version 3.3. Daniel Stenberg has written about the state of this multiple times, most recently in HTTP/3 in curl mid 2024; if you're interested, I suggest reading through his other posts as well. Some might have noticed that nginx does support HTTP3 through OpenSSL, although when you look closely, it's not exactly perfect:
        An SSL library that provides QUIC support is recommended to build nginx, such as BoringSSL, LibreSSL, or QuicTLS. Otherwise, the OpenSSL compatibility layer will be used that does not support early data.
        As you can see, they don't recommend using OpenSSL, and when doing so, you don't get complete support.

      2. HTTP3 support for GnuTLS/nghttp3/ngtcp2 is recent The non-experimental support arrived back in October 2023, and so that's when I started seriously planning for this. curl has been working on HTTP3 support for years, and so it did support other TLS backends before that, but out of them, the one most feasible for a distribution to ship would be GnuTLS, which gets HTTP3 support through ngtcp2 and nghttp3.

      How it was done The Debian curl package has historically shipped at least two variants of libcurl: an OpenSSL one and a GnuTLS one. The OpenSSL libcurl can't support HTTP3 for the reasons explained above, but the GnuTLS libcurl can (with ngtcp2 and nghttp3). Debian packages can choose which variant of libcurl to link against (without having to modify any upstream source code); Debian's "git" package is a famous example of a package that links against the GnuTLS libcurl. Enabling HTTP3 on curl was done in three steps:
      1. Make sure all required dependencies fulfill the minimum requirements.
      2. Enable HTTP3 for GnuTLS libcurl.
      3. Change the libcurl used by the curl CLI, from OpenSSL to GnuTLS.
      curl's HTTP3 support requires a somewhat recent version of nghttp3, and updating it required a transition (due to the SONAME bump), while we'd also had months of freeze for transitions due to the time_t transition. After the dependencies were in place, enabling HTTP3 for the GnuTLS libcurl was straightforward. Then, for the last part, we had to switch the TLS backend used by the curl CLI. Doing the swap is also quite easy on the packaging level, but we have to consider the chances of this change breaking our users' environments.
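
      To make the "choose which libcurl to link against" part concrete: the selection happens entirely through build dependencies, since both development packages ship the same headers and the resulting binary simply links against the matching runtime library. The snippet below is a hypothetical debian/control excerpt, not taken from any real package:
      Source: example-tool
      Build-Depends: debhelper-compat (= 13),
                     libcurl4-gnutls-dev
      Swapping libcurl4-gnutls-dev for libcurl4-openssl-dev (or vice versa) is usually the only change needed for a package that uses libcurl to switch TLS backends.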

      Ensuring there are no breakages The first thing to consider regarding breakages is that this change is not going to be pushed directly to the current Debian stable releases; it will be present in the next stable release (13/trixie), but the current one will stick to the version that's already shipped. Secondly, we have to consider the risk of losing the ability to use certain parameters from the curl CLI which could be limited to the OpenSSL backend. During curl-up 2024, the curl developers pointed out the existence of a page that lists the TLS related options and the backends they work with. Analysing that page, ignoring all of the options that are suffixed with "BLOB" (only pertinent to the library, not the CLI), the only one left that is worth attention is CURLOPT_ECH.
      This experimental feature requires a special build of OpenSSL, as ECH is not yet supported in OpenSSL releases. In contrast ECH is supported by the latest BoringSSL and wolfSSL releases.
      As it turns out, Encrypted Client Hello is experimental and it's not supported by vanilla OpenSSL. This was enough of an investigation for me to go ahead with the change, noting that even in the worst case scenario (we find a horrible regression), we can roll back without having affected a single stable release. Now that the package is on Debian unstable, the CI tests (autopkgtest) of every package that depends on curl are currently running, and the results are compared against the migration-reference (in this case, the curl CLI with OpenSSL, before the change). If everything goes right, curl with HTTP3 support will migrate to Debian testing in around 5 days. If we spot any issues, we'll have to solve them first, and it's going to be hard to predict how long that takes, although it's fair to expect less than a month.
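
      A quick way to confirm which flavour of libcurl and which features your installed curl CLI was built with (for example, once the package or backport reaches you) is the version output; the first line names the TLS backend (GnuTLS, in this case) and the Features line should list HTTP3:
      curl --version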

      Feedback Feel free to join the Matrix room for the Debian curl maintainers:
      https://matrix.to/#/#debian-curl-maintainers:matrix.org

      Acknowledgements It took us a bit longer than expected to be able to enable HTTP3, nonetheless it's still early enough to be excited about. A lot of people were crucial to make this happen. I should recognize in the first place, obviously, the curl developers and the developers of the supporting libraries: GnuTLS, nghttp3, ngtcp2. Participating in the curl-up 2024 conference helped me get motivated to push this through, besides becoming aware of the right documentation to research for impact. On the Debian side, Sakirnth Nagarasa <sakirnth> was responsible for updating and taking care of the transition for nghttp3 and ngtcp2. Also on the Debian side, I've got loads of help and support from the co-maintainers of the curl package: Sergio Durigan Junior <sergiodj> and Carlos Henrique Lima Melara <charles>.

      Changes since publication

      2024-07-18
      • Update date of availability for Debian testing and expected date for bookworm backports.
      • We have historically spoken Portuguese in the room but we'll switch to English in case anyone joins.

      3 July 2024

      Samuel Henrique: Announcing wcurl: a curl wrapper to download files

      tl;dr Whenever you need to download files through the terminal and don't feel like using wget:
      wcurl example.com/filename.txt
      
      Manpage:
      https://manpages.debian.org/unstable/curl/wcurl.1.en.html

      Availability (comes installed with the curl package):
      • Debian unstable - Since 2024-07-02
      • Debian testing - Since 2024-07-18
      • Debian 12/bookworm backports - Expected by the end of August 2024.
      • Debian 12/bookworm - Depends on whether Debian's release team will approve it, it could be available in the next point release.
      • Debian derivatives - Rolling releases will get it by the time it's on Debian testing (e.g.: Kali Linux). Stable derivatives only in their next major release.
      If you don't want to wait for the package update to arrive, you can always copy the script and place it in your /usr/bin, the code is here:
      https://github.com/Debian/wcurl/blob/main/wcurl
      https://salsa.debian.org/debian/wcurl/-/blob/main/wcurl?ref_type=heads

      Smoother CLI experience Starting with curl version 8.8.0-2, Debian's curl package now ships a wcurl executable. wcurl is the solution for those who just need to download files without having to remember curl's parameters for things like automatically naming the files. Some people, myself included, would fall back to using wget whenever there was a need to download a file, sometimes even installing wget just for that use case. After all, it's easier to remember "apt install wget" rather than "curl -L -O -C - ...". wcurl consists of a simple shell script that provides sane defaults for the curl invocation, for when the use case is to just download files. By default, wcurl will:
      • Encode whitespaces in URLs;
      • Download multiple URLs in parallel if the installed curl's version is >= 7.66.0;
      • Follow redirects;
      • Automatically choose a filename as output;
      • Avoid overwriting files if the installed curl's version is >= 7.83.0 (--no-clobber);
      • Perform retries;
      • Set the downloaded file timestamp to the value provided by the server, if available;
      • Default to https if the URL doesn't specify a protocol;
      • Disable curl's URL globbing parser so {} and [] characters in URLs are not treated specially.
      Example to download a single file:
      wcurl example.com/filename.txt
      
      If you ever need to set a custom flag, you can make use of the --curl-options wcurl option, anything set there will be passed to the curl invocation. Just beware that if you need to set any custom flags, it's likely you will be better served by calling curl directly. The --curl-options option is there to allow for some flexibility in unforeseen circumstances.
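
      To make those defaults concrete, here is a minimal, simplified sketch of what a wrapper like this boils down to. It is not the actual wcurl source (the real script also percent-encodes whitespace, checks the installed curl version before using --no-clobber and --parallel, and handles --curl-options); it just strings together real curl flags:
      #!/bin/sh
      # Simplified wcurl-style wrapper (illustration only, not the shipped script).
      # --location          follow redirects
      # --remote-name-all   name output files after the URL
      # --remote-time       use the server-provided timestamp
      # --retry 10          retry transient failures
      # --globoff           don't treat {} and [] specially
      # --proto-default     assume https when no protocol is given
      exec curl --location --remote-name-all --remote-time \
           --retry 10 --globoff --proto-default https "$@"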

      The need for wcurl I've always felt a bit ashamed of not remembering curl's parameters for downloading a file and automatically naming it, having resorted to wget most of the time this was needed (even installing wget when it wasn't there, just for this). I've spoken to a few other experienced people I know and confirmed what could be obvious to others: a lot of people struggle with this. Recently, the curl project released the results of 2024's curl survey, which also showed this is a much needed feature; just look at some of the answers:

      Q: Which curl command line option do you think needs improvement and how?
      -O, I really want wget like functionality where I don't have to specify the name
      Downloading a file (like wget) could be improved - with automatic naming of the file
      downloading files - wget is much cleaner
      I wish the default behaviour when GETting a binary was to drop it on disk. That's the only reason 'wget foo.tgz' is still ingrained in my muscle memory.
      Maybe have a way to download without specifying something in -o (the only reason i used wget still)
      --remote-time should be default
      --remote-name-all could really use a short flag

      Q: If you miss support for something, tell us what!
      "Write the data to the file named in the URL (or in redirects if I'm feeling daring), and timestamp the file to the last-modified-date". This is the main reason I'm still using wget.
      I can finally feel less bad about falling back to wget due to not remembering the parameters I want.

      Idealization vs. reality I don't believe curl will ever change its default behavior in such a way that would accommodate this need, as that would have a side-effect of breaking things which expect the current behavior (the blast radius is literally the solar system). This means a new executable needs to be shipped side-by-side with curl, an opportunity to start fresh and work with a more focused use case (to download files). Ideally, this new executable would be maintained by the curl project, make use of libcurl under the hood, and be available everywhere. Nobody wants to worry about whether their systems have the tool or not, it should always be there. Given I'm just a Debian Developer, with not as much free time as I wish, I've decided to write a simple shell script wrapper calling the curl CLI under the hood. wcurl will come installed with the curl package from now on, and I will check with the release team about shipping it on the current Debian stable as well. Shipping wcurl in other distros will be up to them (Debian-derivatives should pick it up automatically, though). We've tried to make it easy for anyone to ship this by using the curl license, keeping the script POSIX-compliant, and shipping a manpage. Maybe if there's enough interest across distributions, someone might sign up for implementing this in upstream curl and increase its reach. I would be happy with the curl project reusing the wcurl name when that happens. It's unlikely that wcurl would be shipped by curl upstream as it is, assuming they would prefer a solution that uses libcurl directly (more similar to curl the CLI, and easier to maintain). In the worst case, wcurl becomes a Debian-specific tool that only a few people are aware of; in the best case, it becomes the new go-to CLI tool for simply downloading files. I would be happy if at least someone other than me finds it useful.

      Naming is hard When I started working on it, I was calling the new executable "curld" (it stands for "curl download"), but then when discussing this in one of our weekly calls in the Debian Brasília community, it was mentioned that this could be confused for a daemon. We then settled on the name "wcurl", suggested by Carlos Henrique Lima Melara <charles>. It doesn't really stand for anything, but it's very easy to remember. You know... "it's that wget alternative for when you want to use curl instead" :)

      Feedback I'm hosting the code on Github and Debian's GitLab instance, feel free to open an issue to provide feedback.
      https://salsa.debian.org/debian/wcurl
      https://github.com/Debian/wcurl We also have a Matrix room for the Debian curl maintainers:
      https://matrix.to/#/#debian-curl-maintainers:matrix.org

      Acknowledgments The idea for wcurl came a few days before the curl-up conference 2024. I've been thinking a lot about developer productivity in the terminal lately, different tools and better defaults. Before curl-up, I was also thinking about packaging improvements for the curl package. I don't remember what exactly happened, but I likely had to download something and felt a bit ashamed of maintaining curl and not remembering the parameters to download files the way I wanted. I first discussed this idea in the conference, where I asked the participants about it and there were no concerns raised, and some people said I should give it a go. Participating in curl-up was a really great experience and I'm thankful for the interactions I've had there. On the Debian side, I've got reviews of the code and manpage by Sergio Durigan Junior <sergiodj>, Guilherme Puida Moreira <puida> and Carlos Henrique Lima Melara <charles>. Sergio ended up rewriting the tool to be POSIX-compliant (my version was written in bash), so he takes all the credit for the portability.

      Changes since publication

      2024-07-18
      • Update date of availability for Debian testing and expected date for bookworm backports.
      • Mention charles as the person who suggested "wcurl" as a name.
      • Update wcurl's -o/--opts options, it's now just --curl-options.
      • Remove mention of language spoken in the Matrix room, we are using English now.
      • Update list of features of wcurl.

      1 July 2024

      Sahil Dhiman: Personal ASNs From India

      The Internet and its workings are interesting and complex. We need an IP address to connect to the Internet. A group of IP addresses with a common routing policy is known as an Autonomous System (AS). Each AS has a globally unique Autonomous System Number (ASN) and is maintained by a single entity or individual(s). Your ISP would have an ASN. IP addresses/prefixes are advertised (announced) by an AS through the Border Gateway Protocol (BGP) to its peers (ASes which it connects to) to steer traffic in its direction or back. Take for example the Google DNS service at 8.8.8.8, owned and operated by AS15169 Google LLC. AS15169, through BGP announcements, lets all its peers know that traffic for the whole of the 8.8.8.0/24 prefix (including 8.8.8.8) should be sent to them. See the following screenshot response of mtr -zt 8.8.8.8 from my system. From my Internet Service Provider (ISP), AS133982 Excitel Broadband, traffic travels to AS15169 to reach 8.8.8.8 (dns.google) and returns via the same path. This inter-AS traffic makes the Internet tick. mtr from Excitel to Google ASes come in different sizes and serve different purposes, like AS749 DoD Network Information Center, which holds more than 200 million IPv4 addresses for historical reasons, or AS23860 Alliance Broadband Services, which has 68 thousand+ IPv4 addresses for the purpose of providing consumer Internet. Similarly, some individuals also run their own personal ASNs, including a bunch of Indians. Most of these Indian ASNs are IPv6 (primary or only) networks run for hobby and educational purposes. I was interested in this data, so I compiled a list of active ones (visible in the global routing table) from BGP.Tools. Let me know if I'm missing someone.
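
      If you want to check which AS originates a given IP address yourself, one convenient option (assuming the standard whois client and Team Cymru's public IP-to-ASN mapping service) is a query like the following, which reports the origin ASN (AS15169 for this address), the announced prefix and the AS name:
      whois -h whois.cymru.com " -v 8.8.8.8"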

      Russell Coker: Links June 2024

      Modos Labs have released the design of an e-ink display connected by USB-C [1]. They have provided a lot of background information on e-ink displays which isn't available elsewhere. Excellent work! Informative article about a company giving renters insecure locks while facilitating collusion to raise rents [2]. Insightful video by JimmyTheGiant about the destruction of housing estates in the UK [3]. I wonder how much of this was deliberate by the Tories. Insightful video by Modern Vintage Gamer about the way Nintendo is destroying history by preventing people playing old games [4]. Interesting video by Louis Rossmann about the low quality of products and reviews on Amazon [5]. We all know about Enshittification, but it seems that Amazon is getting to the stage of being unusable for some products. Amusing video by Folding Ideas about Decentraland, an attempt at a blockchain-based Second Life type thing which failed as you expect blockchain things to fail [6]. The top comment is a transcription of the actions of the speaker's pet cat. ;)

      27 June 2024

      Russ Allbery: Review: Lyorn

      Review: Lyorn, by Steven Brust
      Series: Vlad Taltos #17
      Publisher: Tor
      Copyright: 2024
      ISBN: 1-4668-8971-3
      Format: Kindle
      Pages: 274
      Lyorn is the 17th Vlad Taltos book and a direct sequel to 2014's Hawk. (Yes, actual main story progress!) You do not want to start reading here; you would be hopelessly confused. When this series is complete, I want to re-read the entire thing from the beginning and pick up more of the bits I missed the first time.

      Vlad is not, in fact, free to see his friends and get entangled in imperial politics again as I thought after Hawk. Despite the successes of that story, there is one remaining small problem: incredibly powerful magic users still want to kill him. His immediate solution is to shelter in a theater, since Dragaeran theaters are well-known for their excellent magical shielding. This works well enough at first, but the theater is rehearsing a play about Dragaeran politics that is highly offensive to the Lyorn and the theater may be shut down because of it. Vlad's enemies are also willing to lean on his friends to find him and kill him.

      This series continues to be thoroughly enjoyable. Lyorn is "just" more of Vlad being Vlad, meddling in everyone's business and coming up with elaborate plans with too many moving parts that he somehow manages to pull off, but I'll happily read lots of books like that. Vlad is both anxious and grumpy, both of which give the plot some needed tension without being overwhelming. There are no truly major world-building revelations here (or, if there are, I missed them), but there's a lot of processing of what the reader learned in Tsalmoth. It's increasingly looking like the payoff from those revelations is going to be the series finale.

      This is the first Vlad book that contains solid confirmation of where the series as a whole is headed. Brust mentioned some time ago that the last book is titled The Last Contract, and Lyorn comes close to stating explicitly what that contract will be. I am sure that it will be more complicated than it appears now and there are misdirections yet to come, but I am excited to see where Brust takes this idea. Vlad has been insistently apolitical for much of the series, meddling in politics only when he has to or to help his friends and otherwise treating it as a system that he has to navigate and survive. That was the root of the conflict in Teckla, all the way back at the start of the series. This may be starting to change, and when Brust ties it together with the Jenoine, the Great Weapon Godslayer, and the rest of the world-building he's cued up, the results are going to be explosive. Two books left: Chreotha and The Last Contract. I can hardly wait.

      In every Vlad book, Brust plays some sort of structural game. This time, befitting the setting, it's a musical. The action is interspersed with quotes from a fictional history about the play the theater is putting on, a work called Song of the Presses about political censorship during the reign of a Lyorn emperor in the 14th cycle, thousands of years before the time of this book. This was, at times, nearly as interesting as the main plot. The chapters are also numbered like the acts and scenes of a play, although this I didn't notice as much since books often have that structure anyway. Since this is a musical, there are also songs. Specifically, each chapter is introduced by a parody of songs from various musicals in our world, rewritten so that they fit within Brust's fictional musical. Brust is also a musician and a filker, so these songs are actually good, or at least they amused me a great deal.
I'm not much of a musical fan and I still could hear the tune playing when I read most of them. Lyorn is not so good that I would rave about it. It's one of those functional connective books of a series that advances the plot, tells a good story, and has some fun along the way. The guns on the mantelpiece of this world have not gone off yet, and Vlad is still maneuvering into position. But it's looking like we're going to get the conclusion, and it's going to be spectacular. If you have read this far, you will want to keep reading. Followed by Chreotha, which may be a bit of a wait because apparently Brust is going to write The Last Contract first to make sure he ties up loose ends properly. Rating: 7 out of 10

      27 May 2024

      Sahil Dhiman: A Late, Late Debconf23 Post

      After much procrastination, I have gotten around to completing my DebConf23 (DC23), Kochi blog post. I lost the original etherpad which was started before DebConf23 for jotting down things, so I have started afresh with whatever I can remember, months after the actual conference ended. So things might only be as accurate as my memory. DebConf23, the 24th annual Debian Conference, happened in Infopark, Kochi, India from 10th September to 17th September 2023. It was preceded by DebCamp from 3rd September to 9th September 2023. The first formal bid to host DebConf in India was made during DebConf18 in Hsinchu, Taiwan by Raju Dev, which didn't come our way. At the next DebConf, DebConf19 in Curitiba, Brazil, another bid was made by him with help and support from Sruthi, Utkarsh and the whole team. This time, India got the opportunity to host DebConf22, which eventually became DebConf23 for the reasons you all know. I initially met the local team on the sidelines of DebConf20, which was also my first DebConf. Having recently switched to Debian, DC20 introduced me to how things work in Debian. The video team's call for volunteers email pulled me in. Things stuck, and I kept hanging out and helping the local Indian DC team with various stuff. We did manage to organize multiple events leading up to DebConf23, including MiniDebConf India 2021 Online, MiniDebConf Palakkad 2022, MiniDebConf Tamil Nadu 2023 and DebUtsav Kochi 2023, which gave us quite a bit of experience and a workout. Many local organizers from these conferences later joined various DebConf teams during the conference to help out. For DebConf23, I was originally part of the publicity team because that was my usual thing. After a team redistribution exercise, Sruthi and Praveen moved me to the sponsorship team, as we didn't have to do much publicity anyhow and sponsorship was one of those things I could get involved in remotely. The sponsorship team had to take care of raising funds by reaching out to sponsors, managing invoices and fulfillment. Praveen joined the sponsorship team as well. We also had an international sponsorship team, Anisa, Daniel and various Debian Trusted Organizations (TOs), which took care of reaching out to international organizations, while we took care of reaching out to Indian organizations for sponsorship. It was a really proud moment when my present employer, Unmukti (makers of hopbox), came aboard as a Bronze sponsor, though fundraising seemed to be hit hard by the tech industry slowdown and layoffs. Many of our yesteryear sponsors couldn't sponsor. We had biweekly local team meetings, which turned weekly as we neared the event. This was done in addition to the biweekly global team meeting. Pathu
      Pathu, DebConf23 mascot
To describe the conference venue: it happened in Infopark, Kochi, with the main conference hall being Athulya Hall, and food, accommodation and two smaller halls at the Four Points hotel right outside Infopark. We got Athulya Hall as part of venue sponsorship from Infopark. The distance between the two was around 300 meters. The halls were named Anamudi, Kuthiran and Ponmudi after hills and mountain areas in the host state of Kerala. Other than Anamudi, which was the main hall, I couldn't remember the names of the halls at the time, and I still can't. Four Points was big and expensive, and we had, as expected, cost overruns. Due to how DebConf functions, an Indian university wasn't suitable to host a conference of this scale.
      Four Point's Infinity Pool at Night
I landed in Kochi on the first day of DebCamp, 3rd September. As usual, I met Abraham first, and the better part of the next hour was spent on meet and greet. It was my first in-person DebConf, so I met many old friends and new folks. I got a room to myself. Abraham lived nearby and hadn't taken the accommodation, so I asked him to join; he finally did from the second day onwards. All through the conference, room 928 became infamous for various reasons, and I had various roommates for company. During the DebCamp days, we would get up for breakfast, go back to sleep, and only get active past lunch, hacking and helping in the hack lab for the day, followed by fun late-night discussions and parties.
      Nilesh, Chirag and Apple at DC23
The team even managed to arrange a press conference, and we got an opportunity to go to the Press Club, Ernakulam. Sruthi and Jonathan spoke and answered questions from journalists, and the event got media coverage as a result.
      Ernakulam Press Club
During the conference, the team used to have 9 PM meetings every night for retrospection and planning for the next day, which were always dotted with new problems. Every day, we used to hijack the Silent Hacklab for the meeting and gently ask the only people there at the time to give us space. DebConf itself is a well-oiled machine. The network was brought up from scratch. The video team built the recording, audio mixing, live-streaming, editing and transcoding infrastructure on site. A gaming rig served as router and gateway. We got internet uplinks: a sponsored 1 Gbps leased line from Kerala Vision and a paid backup 100 Mbps connection from a different provider. IPv6 was added through HE's Tunnelbroker. Overall the network worked fine; we additionally had hotel Wi-Fi, so the conference network wasn't stretched much. I must highlight that DebConf is the only conference I know where almost everything, and every piece of software, is developed in-house for the conference and modified as needed on the fly. Even the event recording cameras, audio checks, direction, recording and editing are all done with in-house software by volunteer attendees (in some cases remote ones as well), all trained on the sidelines of the conference. The core recording and mixing equipment is owned by Debian and travels to each venue; the rest is sourced locally.
      Gaming Rig which served as DC23 gateway router
It was fun seeing how almost everything was coordinated over text on Internet Relay Chat (IRC). If a talk/event was missing a talkmeister, a director or a camera person, a quick message on the #debconf channel would be enough for someone to volunteer. The video team had a dedicated support channel for each conference venue and was quick to respond to and fix issues.
      Network information. Screengrab from closing ceremony
It rained during the initial days, which gave us cool weather. The swag team had decided to include umbrellas in the swag kit, which turned out to be quite useful. The swag kit was praised for its quality and selection - many thanks to Anupa, Sruthi and others. It was fun wearing different-colored T-shirts, all designed by Abraham: red for volunteers, light green for the video team, green for the core team (i.e. staff) and yellow for conference attendees.
      With highvoltage
We were already acclimatized by the time DebConf really started, as we had been talking, hacking and hanging out for the previous 7 days. The rush really began with the start of DebConf, and more people joined on the first and second day of the conference. As has been the tradition, an opening talk was prepared by Sruthi and the local team (which I highly recommend watching to get more insight into the process). DebConf day 1 also saw the job fair, where Canonical and FOSSEE, IIT Bombay had stalls for community interactions, which, judging by the crowd, turned out to be quite a hit. For me, the association with DebConf (and Debian) started through volunteering with the video team, so I was going to continue doing that at this conference as well. I usually volunteer for talks/events I'm interested in anyway. Handling the camera, talkmeister-ing and direction are fun activities, though I didn't do sound this time around; sound seemed difficult, and I didn't want to spoil someone's stream and recording. Talk attendance varied a lot: for the Bits from the DPL talk the hall was full, but for some talks there were barely enough people to handle the volunteering tasks. That's what usually happens, though; DebConf is more of a place to come together and collaborate, so talk attendance is sometimes an afterthought.
      Audience in highvoltage's Bits from DPL talk
I didn't submit any talk proposals this time around, as just being on the orga team was already too much work, and I knew the talk preparation would get delayed to the last moment and I would have to rush through it.
      Enrico's talk
From day 2 onward, more sponsor stalls appeared in the hallway area: Hopbox by Unmukti, MostlyHarmless and Deeproot (a joint stall), and FOSSEE. The MostlyHarmless stall had nice mechanical keyboards and other fun gadgets; whenever I got the time, I would go and do some typing races to enjoy the nice, clicky keyboards. As DebConf tradition dictates, we had a Cheese and Wine party, with everyone bringing cheese and other delicacies from their region. Then there was yummy Sadya, a traditional Malayali vegetarian lunch served on banana leaves. There were loads of different dishes served, the names of most of which I couldn't pronounce or recollect properly, but everything was super delicious. Day 4 was the day trip, and I chose to go to Athirappilly Waterfalls and the jungle safari. Pictures describe the beauty better than words, though the journey was a bit long.
      Athirappilly Falls

      Tea Gardens
Late that day, we heard the news that Abraham had gone missing. We lost Abraham. He had worked really hard through the years for Debian and for making this conference possible. Talks were cancelled for the next day and Jonathan addressed everyone. The team arranged buses, and we went to Abraham's home the next day to meet his family. It is unfortunate that I only got the opportunity to visit his place after he was gone. Days went by slowly after that. The last day was marked by a small conference dinner; some people had already left. All through that day and the next, we kept saying goodbye to friends with whom we had spent almost a fortnight.
      Group photo with all DebConf T-shirts chronologically
This was my 2nd trip to Kochi, and Vistara Airways' UK886 has become the default flight now. I have almost learned how to travel in and around Kochi by metro, water metro, airport shuttle and auto. Things are quite accessible in Kochi, but the metro is a bit expensive compared to Delhi. I left Kochi on the 19th. My flight was due to leave around 8 PM, so I had the whole day to myself. A direct option would have taken less than an hour, but as I had time, I chose to take the long way to the airport. I first took an auto rickshaw to Kakkanad Water Metro station, then sailed on the water metro to Vyttila Water Metro station. Vyttila serves as an intermobility hub which connects the water metro, metro and buses in one place. There I switched to the metro, from Vyttila Metro station to Aluva Metro station, where I had lunch and then boarded the airport feeder bus to reach Kochi Airport. All in all, I did auto rickshaw > water metro > metro > feeder bus to reach the airport. It was fun and scenic. I must say, public transport and intermodal integration are quite good, and one can transition seamlessly from one mode to the next.
      Kochi Water Metro

      Scenes from Kochi Water Metro
DebConf23 served its purpose of getting existing Debian people together, as well as getting new people interested in and contributing to Debian. People who came are still contributing to Debian, and that's amazing.
      Streaming video stats. Screengrab from closing ceremony
The conference wasn't without its fair share of troubles. There were multiple money-transfer woes, and being in India didn't help. Many thanks to the multiple organizations that were proactive in helping out. On top of this, there was conference visa uncertainty and other issues which troubled the visa team a lot. Kudos to everyone who made this possible; I'm surely going to miss some names, so thank you all, you know how much you have done to make this event happen. Now DebConf24 is scheduled for Busan, South Korea, and work is already in full swing. As usual, I'm helping with the fundraising part and plan to attend too. Let's see if I can make it or not.
      DebConf23 Group Photo. Click to enlarge.
      Credits - Aigars Mahinovs
In the end, we kept saying that no DebConf at this scale would come back to India for the next 10 or 20 years; to be frank, it's too much trouble. It was probably a peak that we might not reach again. I would be happy to be proven wrong though :)

      18 April 2024

      Samuel Henrique: Hello World

      This is my very first post, just to make sure everything is working as expected. Made with Zola and the Abridge theme.

      13 April 2024

      Simon Josefsson: Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8, the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn't enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball, and the resulting binaries are reproducible on several architectures. What does that even mean? Why should you care? How can you do the same for your project? What are the open issues? Read on, dear reader. This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work. Let's look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:
      git clone https://gitlab.com/gsasl/libntlm.git
      cd libntlm
      git checkout v1.8
      ./bootstrap
      ./configure
      make distcheck
      gpg -b libntlm-1.8.tar.gz
The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s, and that is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that has happened. The XZUtils incident illustrates that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs. The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not rebuilt through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs, although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more often than you would like. Publishing minimal source-only tarballs enables easier auditing of a project's code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc. offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when a hosting site like GitLab or GitHub has a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to backdoor scenarios similar to what we had for XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted against selected IP addresses that you want to target rather than everyone, thereby making it harder to discover. With all that discussion and rationale out of the way, let's return to the release process. I have added another step here:
      make srcdist
      gpg -b libntlm-1.8-src.tar.gz
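For completeness, the consumer-side counterpart is the usual detached-signature check; a minimal sketch, assuming the maintainer's public key is already in your GnuPG keyring:
gpg --verify libntlm-1.8-src.tar.gz.sig libntlm-1.8-src.tar.gz
gpg --verify libntlm-1.8.tar.gz.sig libntlm-1.8.tar.gz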
Now the release is ready. I publish these four files in Libntlm's Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:
      91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
      ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz
      So how can you reproduce my artifacts? Here is how to reproduce them in a Ubuntu 22.04 container:
      podman run -it --rm ubuntu:22.04
      apt-get update
      apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
      git clone https://gitlab.com/gsasl/libntlm.git
      cd libntlm
      git checkout v1.8
      ./bootstrap
      ./configure
      make dist srcdist
      sha256sum libntlm-*.tar.gz
You should see the exact same SHA256 checksum values. Hooray! This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content across versions, similar to how GNU GCC does not generate the same binary output across versions, so there is still some delicate version pairing needed. Ideally, it should be possible to reproduce the artifacts from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container (replace almalinux:8 with rockylinux:8 if you prefer RockyLinux):
      podman run -it --rm almalinux:8
      dnf update -y
      dnf install -y make wget gcc
      wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
      tar xfa libntlm-1.8.tar.gz
      cd libntlm-1.8
      ./configure
      make dist
      sha256sum libntlm-1.8.tar.gz
      The source-only minimal tarball can be regenerated on Debian 11:
      podman run -it --rm debian:11
      apt-get update
      apt-get install -y --no-install-recommends make git ca-certificates
      git clone https://gitlab.com/gsasl/libntlm.git
      cd libntlm
      git checkout v1.8
      make -f cfg.mk srcdist
      sha256sum libntlm-1.8-src.tar.gz 
As the magnum opus or chef-d'œuvre, let's recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 (replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer):
      podman run -it --rm docker.io/kpengboy/trisquel:11.0
      apt-get update
      apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
      wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
      tar xfa libntlm-1.8-src.tar.gz
      cd libntlm-v1.8
      ./bootstrap
      ./configure
      make dist
      sha256sum libntlm-1.8.tar.gz
      Yay! You should now have great confidence in that the release artifacts correspond to what s in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts. I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot. Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e.., PROJECT-TAG/) and I ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab s exports and you can verify this with tools like diffoscope. GitLab name the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE) which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the v in the pathname so you will get a tarball with pathname libntlm-1.8/ instead of libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive differs. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab, but the pathname inside the archive is libntlm/, otherwise the produced archive is bit-by-bit identical including timestamps. Savannah s CGIT interface uses archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise file content is identical. Savannah s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to get SHA256 checksum to match, but fail on pathname within the archive. I ve chosen to be compatible with GitLab regarding the content of tarballs but not on archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion. Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The version of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behave the same. The version of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and upcoming Ubuntu 24.04 behave in another way. Hopefully this will not change that often, but this would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appears to be using modern git so the download tarballs from them would not match my tarballs even though the content would. Side note on ChangeLog: ChangeLog files were traditionally manually curated files with version history for a package. In recent years, several projects moved to dynamically generate them from git history (using tools like git2cl or gitlog-to-changelog). 
This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which is arguably a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm's ChangeLog file died on the surgery table here. So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from, which means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depends on Debian's gnulib package. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit; there is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm's ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa, and there is continuous integration testing of it as well, thanks to the Salsa CI team. Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get git bundles to be reproducible but never got it to work; see my notes in gnulib's debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately. One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don't see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We've done it for a long time through vendored code in non-minimal tarballs. Libntlm isn't the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar or bison (all of which use gnulib): they would all Build-Depends on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real, so I don't see that going away, and gnulib is an Architecture: all package.
However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don't require the git tool to extract the necessary files, but none that I found practical; ideas welcome! Finally, some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib's ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used, so the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with the GNULIB_URL pointing to a git bundle or the GNULIB_REVISION in bootstrap.conf. The srcdist make rule is simple:
      git archive --prefix=libntlm-v1.8/ -o libntlm-1.8-src.tar.gz HEAD
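As a quick sanity check that this output agrees with what the hosting sites generate, one can diff it against a forge export; the GitLab archive URL pattern below is an assumption on my part, and per the side note on naming above the two archives should agree on content even when the filenames (and, with newer git versions, the compressed bytes) differ:
wget https://gitlab.com/gsasl/libntlm/-/archive/v1.8/libntlm-v1.8.tar.gz
diffoscope libntlm-v1.8.tar.gz libntlm-1.8-src.tar.gz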
      Making the make dist generated tarball reproducible can be more complicated, however for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly there seems to be a couple of different ways to accomplish this, Guix doesn t support minimal source-only tarballs but rely on a .tarball-timestamp file inside the tarball. Paul Eggert explained what TZDB is using some time ago. The approach I m using now is fairly similar to the one I suggested over a year ago. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented. Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories uses SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub does not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well, e.g., tar.gz can be a deterministically reproducible persistent stable offline storage format but .git sub-directory trees or git bundles do not offer this property. Doing continous testing of all this is critical to make sure things don t regress. Libntlm s pipeline definition now produce the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insists that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa. If this way of working plays out well, I hope to implement it in other projects too. What do you think? Happy Hacking!

      Paul Tagliamonte: Domo Arigato, Mr. debugfs

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to Debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I've always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service and the find-dbgsym-packages helper, which uses these same headers, I don't think I'm the only one. At work I've been using a lot of Rust, specifically async Rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I've decided to hack up a project that I've wanted to do ever since 2015: write a debug filesystem. Let's get to it.

      Back to the Future Time to admit something. I really love Plan 9. It s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good like, correct. The bit that I ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server you mount the ftp server and access files like any other files. It s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic. The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption 9p is in all sorts of places folks don t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-p9) to share files between guest and host. If you re noticing a pattern here, you d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except maybe due to being designed well, simple to implement, and it s a lot easier to validate the data being shared and validate security boundaries. Simplicity has its value. As a result, there s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine unlike fuse, where you d need client-specific software to run in order to mount the directory. For instance, let s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name mountpointname to /mnt.
      $ mount -t 9p \
      -o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
      127.0.0.1 \
      /mnt
      
      Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT Since I wanted to push myself a bit more with Rust and tokio specifically, I opted to implement the whole stack myself, without third-party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It's a series of client-to-server requests, each of which receives a server-to-client response. These are, respectively, T messages, which transmit a request to the server, and the R messages (Reply messages) they trigger in response. These messages are TLV payloads with a very straightforward structure: so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages. Later on, after the basics worked, I found a more complete spec page that contains more information about the unix-specific variant that I opted to use (9P2000.u rather than 9P2000), due to the level of Linux-specific support for the 9P2000.u variant over the 9P2000 protocol.

      MR ROBOTO The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there s an escape hatch allowing someone to implement any of the layers if required. I called this framework arigato which can be found over on docs.rs and crates.io.
      /// Simplified version of the arigato File trait; this isn't actually
      /// the same trait; there's some small cosmetic differences. The
      /// actual trait can be found at:
      ///
      /// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
      /// OpenFile is the type returned by this File via an Open call.
       type OpenFile: OpenFile;
      /// Return the 9p Qid for this file. A file is the same if the Qid is
       /// the same. A Qid contains information about the mode of the file,
       /// version of the file, and a unique 64 bit identifier.
       fn qid(&self) -> Qid;
      /// Construct the 9p Stat struct with metadata about a file.
       async fn stat(&self) -> FileResult<Stat>;
      /// Attempt to update the file metadata.
       async fn wstat(&mut self, s: &Stat) -> FileResult<()>;
      /// Traverse the filesystem tree.
       async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;
      /// Request that a file's reference be removed from the file tree.
       async fn unlink(&mut self) -> FileResult<()>;
      /// Create a file at a specific location in the file tree.
       async fn create(
      &mut self,
      name: &str,
      perm: u16,
      ty: FileType,
      mode: OpenMode,
      extension: &str,
      ) -> FileResult<Self>;
      /// Open the File, returning a handle to the open file, which handles
       /// file i/o. This is split into a second type since it is genuinely
       /// unrelated -- and the fact that a file is Open or Closed can be
       /// handled by the  arigato  server for us.
 async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}
      /// Simplified version of the arigato OpenFile trait; this isn't actually
      /// the same trait; there's some small cosmetic differences. The
      /// actual trait can be found at:
      ///
      /// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
      /// iounit to report for this file. The iounit reported is used for Read
       /// or Write operations to signal, if non-zero, the maximum size that is
       /// guaranteed to be transferred atomically.
       fn iounit(&self) -> u32;
      /// Read some number of bytes up to  buf.len()  from the provided
       ///  offset  of the underlying file. The number of bytes read is
       /// returned.
       async fn read_at(
      &mut self,
      buf: &mut [u8],
      offset: u64,
      ) -> FileResult<u32>;
      /// Write some number of bytes up to  buf.len()  from the provided
       ///  offset  of the underlying file. The number of bytes written
       /// is returned.
       fn write_at(
      &mut self,
      buf: &mut [u8],
      offset: u64,
) -> FileResult<u32>;
}
      

      Thanks, decade ago paultag! Let s do it! Let s use arigato to implement a 9p filesystem we ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don t know much about how an apt repo works, here s the 2-second crash course on what we re doing. The first is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.
      $ curl \
      https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | unxz
      
      This will return the Debian-style rfc2822-like headers, which is an export of the metadata contained inside each .deb file which apt (or other tools that can use the apt repo format) use to fetch information about debs. Let s take a look at the debug headers for the netlabel-tools package in unstable which is a package named netlabel-tools-dbgsym in unstable-debug.
      Package: netlabel-tools-dbgsym
      Source: netlabel-tools (0.30.0-1)
      Version: 0.30.0-1+b1
      Installed-Size: 79
      Maintainer: Paul Tagliamonte <paultag@debian.org>
      Architecture: amd64
      Depends: netlabel-tools (= 0.30.0-1+b1)
      Description: debug symbols for netlabel-tools
      Auto-Built-Package: debug-symbols
      Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
      Description-md5: a0e587a0cf730c88a4010f78562e6db7
      Section: debug
      Priority: optional
      Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
      Size: 62776
      SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60
      
So here, we can parse the package headers in the Packages.xz file, and store, for each Build-Id, the Filename where we can fetch the .deb from. Each .deb contains a number of files, but we're only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/. The parsing lives in debugfs under rfc822.rs. It's crude, and very single-purpose, but I'm feeling a bit lazy.
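To get a feel for the mapping being built here, a throwaway shell version might look like the following; this is just an illustrative sketch (it assumes, as in the stanza above, that Build-Ids precedes Filename within each paragraph), not the parser rfc822.rs actually implements:
$ curl -s https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz | unxz \
  | awk '$1 == "Build-Ids:" { sub(/^Build-Ids: */, ""); ids = $0 } $1 == "Filename:" { print ids " " $2 }' \
  | head -n 3
Each output line pairs one or more build-ids with the pool path of the .deb that carries them.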

Who needs dpkg?! For folks who haven't seen it yet, a .deb file is a special type of .ar file that (usually) contains three files inside: debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed-size (60 byte) entry header, followed by the specified number of bytes of data.
      [8 byte .ar file magic]
      [60 byte entry header]
      [N bytes of data]
      [60 byte entry header]
      [N bytes of data]
      [60 byte entry header]
      [N bytes of data]
      ...
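As a tiny sanity check of the fixed layout above, the first eight bytes of any .deb are just the ar magic (shown here against the dbgsym package from earlier; a trivial illustration, not part of the parser):
$ head -c 8 netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
!<arch>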
      
First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let's break apart a .deb file by hand, something that is a bit of a rite of passage (or at least it used to be? I'm getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.
      $ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
      $ ls
      control.tar.xz debian-binary
      data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
      ./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      
Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entry headers from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file), at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely, without downloading and unpacking the .deb at all. After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response, is not the same as std::io::Read, is not the same as an async (or sync) Iterator, is not the same as what the xz2 crate expects; leading me to read blocks of data into a buffer and stuff them through the decoder by looping over the buffer for each lzma2 packet), and a tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can't seek, but gdb needs to, we'll pull it out of the stream into an in-memory Cursor<Vec<u8>> and pass a handle to it back to the user. From here on out it's a matter of gluing together a File-traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!
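To make the range-request trick concrete, here is a rough curl equivalent of the first step; the pool URL is assembled from the Filename field shown earlier, the debian-debug base path is an inference on my part, and hrange.rs does all of this programmatically rather than via curl:
$ deb=https://deb.debian.org/debian-debug/pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ curl -sI "$deb" | grep -i accept-ranges        # server advertises byte-range support
$ curl -s -r 8-67 "$deb" -o first-header.bin     # only the 60-byte header of the first ar member
Walking from header to header like this is what lets the parser jump straight to data.tar.xz without pulling down the rest of the archive.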

      A quick diversion about compression I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues with figuring out a way around seeking around an xz file. What s interesting is xz has a great primitive to solve this specific problem (specifically, use a block size that allows you to seek to the block as close to your desired seek position just before it, only discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don t know why I would have expected any different, in retrospect. That means that this now devolves into the base case of How do I seek around an lzma2 compressed data stream ; which is a lot more complex of a question. Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name targz on his GitHub, which has been a ton of fun to read through. The gist is that, by dumping the decompressor s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific checkpoints along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off creating a similar block mechanism against the wishes of gzip. It means you d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you ve taken. Given the complexity of xz and lzma2, I don t think this is possible for me at the moment especially given most of the files I ll be requesting will not be loaded from again especially when I can just cache the debug header by Build-Id. I want to implement this (because I m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I d ever say out loud), but for now I m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.gz members are so small.

      The Good First, the good news right? It works! That s pretty cool. I m positive my younger self would be amused and happy to see this working; as is current day paultag. Let s take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge. Let s take it for a spin:
      $ mount \
      -t 9p \
      -o trans=tcp,version=9p2000.u,aname=unstable-debug \
      192.168.0.2 \
      /usr/lib/debug/.build-id/
      
      And, let s prove to ourselves that this actually mounted before we go trying to use it:
$ mount | grep build-id
      192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)
      
      Slick. We ve got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let s take a look at it.
      $ ls /usr/lib/debug/.build-id/
      00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
      01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
      02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
      03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
      04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
      05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
      06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
      07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
      08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
      09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
      0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
      0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
      0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff
      
      Outstanding. Let s try using gdb to debug a binary that was provided by the Debian archive, and see if it ll load the ELF by build-id from the right .deb in the unstable-debug suite:
      $ gdb -q /usr/sbin/netlabelctl
      Reading symbols from /usr/sbin/netlabelctl...
      Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
      (gdb)
      
      Yes! Yes it will!
      $ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped
      

The Bad Linux's support for 9p is mainline, which is great, but it's not robust. Network issues or server restarts will wedge the mountpoint (Linux can't reconnect when the tcp connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter: for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just received a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there's not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which maybe is worth it).

      The Ugly Unfortunately, we don t know the file size(s) until we ve actually opened the underlying tar file and found the correct member, so for most files, we don t know the real size to report when getting a stat. We can t parse the tarfiles for every stat call, since that d make ls even slower (bummer). Only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let s try with a size of 0 to start:
      $ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      -r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      $ gdb -q /usr/sbin/netlabelctl
      Reading symbols from /usr/sbin/netlabelctl...
      Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
      warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
      [...]
      
This obviously won't work, since gdb will throw away all our hard work because of stat's output, and loading the real size of the underlying file won't work either. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let's try it again:
      $ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      -r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
      $ gdb -q /usr/sbin/netlabelctl
      Reading symbols from /usr/sbin/netlabelctl...
      Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
      (gdb)
      
      Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here Do I think this is a particularly good idea? I mean; kinda. I'm probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don't think I'll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and fix the i/o performance. I think it was a useful exercise and is a pretty great hack, but I don't think this'll be shipping anywhere anytime soon. Along with publishing this post, I've pushed up all my repos, so you should be able to play along at home! There's a lot more work to be done on arigato, but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my GitHub at arigato and debugfs, and also on crates.io and docs.rs. At least I can say I was here and I got it working after all these years.

      3 April 2024

      Arnaud Rebillout: Firefox: Moving from the Debian package to the Flatpak app (long-term?)

First, thanks to Samuel Henrique for giving notice of recent Firefox CVEs in Debian testing/unstable. At the time I didn't want to upgrade my system (Debian Sid) due to the ongoing t64 transition, so I decided I could install the Firefox Flatpak app instead, and why not stick to it long-term? This blog post details all the steps, in case others want to go down the same road. Flatpak Installation Disclaimer: this section is hardly anything more than a copy/paste of the official documentation, and with time it will get outdated, so you'd better follow the official doc. First things first, let's install Flatpak:
      $ sudo apt update
      $ sudo apt install flatpak
      
      Then the next step is to add the Flathub remote repository, from where we'll get our Flatpak applications:
      $ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
      
      And that's all there is to it! Now come the optional steps. For GNOME and KDE users, you might want to install a plugin for the software manager specific to your desktop, so that it can support and manage Flatpak apps:
      $ which -s gnome-software  && sudo apt install gnome-software-plugin-flatpak
      $ which -s plasma-discover && sudo apt install plasma-discover-backend-flatpak
      
And here's an additional check you can do, as it's something that did bite me in the past: missing xdg-desktop-portal-* packages, which are required for Flatpak applications to communicate with the desktop environment. Just to be sure, you can check the output of apt search '^xdg-desktop-portal' to see what's available, and compare it with the output of dpkg -l | grep xdg-desktop-portal. As you can see, if you're a GNOME or KDE user, there's a portal backend for you, and it should be installed. For reference, this is what I have on my GNOME desktop at the moment:
$ dpkg -l | grep xdg-desktop-portal | awk '{ print $2 }'
      xdg-desktop-portal
      xdg-desktop-portal-gnome
      xdg-desktop-portal-gtk
      
Install the Firefox Flatpak app This is trivial, but still, there's a question I've always asked myself: should I install applications system-wide (aka flatpak --system, the default) or per-user (aka flatpak --user)? Turns out, this question is answered in the Flatpak documentation:
      Flatpak commands are run system-wide by default. If you are installing applications for day-to-day usage, it is recommended to stick with this default behavior.
      Armed with this new knowledge, let's install the Firefox app:
      $ flatpak install flathub org.mozilla.firefox
      
      And that's about it! We can give it a go already:
      $ flatpak run org.mozilla.firefox
      
Data migration At this point, running Firefox via Flatpak gives me an "empty" Firefox. That's not what I want; instead I want my usual Firefox, with a gazillion tabs already opened, a few extensions, bookmarks and so on. As it turns out, Mozilla provides a brief doc for data migration, and it's as simple as moving the Firefox data directory around! To clarify, we'll be copying the data rather than moving it. Make sure that all Firefox instances are closed, then proceed:
      # BEWARE! Below I'm erasing data!
      $ rm -fr ~/.var/app/org.mozilla.firefox/.mozilla/firefox/
      $ cp -a ~/.mozilla/firefox/ ~/.var/app/org.mozilla.firefox/.mozilla/
      
      To avoid confusing myself, it's also a good idea to rename the local data directory:
      $ mv ~/.mozilla/firefox ~/.mozilla/firefox.old.$(date --iso-8601=date)
      
      At this point, flatpak run org.mozilla.firefox takes me to my "usual" everyday Firefox, with all its tabs opened, pinned, bookmarked, etc. More integration? After following all the steps above, I must say that I'm 99% happy. So far, everything works as before, I didn't hit any issue, and I don't even notice that Firefox is running via Flatpak, it's completely transparent. So where's the 1% of unhappiness? The Run a Command dialog from GNOME, the one that shows up via the keyboard shortcut <Alt+F2>. This is how I start my GUI applications, and I usually run two Firefox instances in parallel (one for work, one for personal), using the firefox -p <profile> command. Given that I ran apt purge firefox before (to avoid confusing myself with two installations of Firefox), now the right (and only) way to start Firefox from a command-line is to type flatpak run org.mozilla.firefox -p <profile>. Typing that every time is way too cumbersome, so I need something quicker. Seems like the most straightforward is to create a wrapper script:
      $ cat /usr/local/bin/firefox 
      #!/bin/sh
      exec flatpak run org.mozilla.firefox "$@"
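One detail the listing above doesn't show: for this to work from <Alt+F2>, the wrapper has to be executable, so a step along these lines is needed after creating the file:
$ sudo chmod +x /usr/local/bin/firefox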
      
      And now I can just hit <Alt+F2> and type firefox -p <profile> to start Firefox with the profile I want, just as before. Neat! Looking forward: system updates I usually update my system manually every now and then, via the well-known pair of commands:
      $ sudo apt update
      $ sudo apt full-upgrade
      
The downside of introducing Flatpak, i.e. introducing another package manager, is that I'll need to learn new commands to update the software that comes via this channel. Fortunately, there's really not much to learn. From flatpak-update(1):
      flatpak update [OPTION...] [REF...] Updates applications and runtimes. [...] If no REF is given, everything is updated, as well as appstream info for all remotes.
      Could it be that simple? Apparently yes, the Flatpak equivalent of the two apt commands above is just:
      $ flatpak update
      
      Going forward, my options are:
1. Teach myself to run flatpak update in addition to apt update, manually, every time I update my system (a small helper for this is sketched after this list).
2. Go crazy: let something automatically update my Flatpak apps, behind my back and without my consent.
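For option 1, a small shell alias can at least make the extra step harder to forget. This is just a sketch, assuming bash or zsh, and the alias name "up" is an arbitrary choice:
# In ~/.bashrc or ~/.zshrc
alias up='sudo apt update && sudo apt full-upgrade && flatpak update'
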
I'm actually tempted to go for option 2 here, and I wonder if GNOME Software will do that for me, provided that I installed gnome-software-plugin-flatpak, and that I checked Software Updates -> Automatic in the Settings (which I did). However, I didn't find any documentation regarding what this setting really does, so I can't say whether it will only download updates, or whether it will also install them. I'd be happy if it automatically installed new versions of Flatpak apps, but at the same time I'd be very unhappy if it automatically upgraded my Debian system... So we'll see. Enough for today, hope this blog post was useful!

      29 March 2024

      Ravi Dwivedi: A visit to the Taj Mahal

Note: The currency used in this post is Indian Rupees, which was around 83 INR for 1 US Dollar at that time. My friend Badri and I visited the Taj Mahal this month. The Taj Mahal is one of the main tourist destinations in India and does not need an introduction, I guess. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit the Taj Mahal. Feel free to ask me questions about visiting the Taj Mahal.
      Our retiring room at the Old Delhi Railway Station.
We had booked a train from Delhi to Agra. The name of the train was Taj Express, and its scheduled departure time from Hazrat Nizamuddin station in Delhi is 07:08 in the morning, with arrival at Agra Cantt station at 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but anyway, the train was not yet at the station; it was late. We reached Agra at 10:30 and checked into our retiring room, took rest and went out for the Taj Mahal at 13:00 in the afternoon. The Taj Mahal's outer gate is 5 km away from the Agra Cantt station. As we were going out of the railway station, we were chased by an autorickshaw driver who offered to take us to the Taj Mahal for 150 INR for both of us. I asked him to bring it down to 60 INR, and after some back and forth, he agreed to drop us off at the Taj Mahal for 80 INR. But I said we won't pay anything above 60 INR. He agreed with that amount but said that he would need to fill up with more passengers. When we saw that he wasn't making any effort to bring in more passengers, we walked away. As soon as we got out of the railway station complex, another autorickshaw driver came to us and offered to drop us off at the Taj Mahal for 20 INR each if we shared with other passengers, or 100 INR if we reserved the auto for ourselves. We agreed to go with 20 INR per person, but he started the autorickshaw as soon as we hopped in. I thought that the third person in the auto was another passenger sharing a ride with us, but later we got to know he was with the driver. Upon reaching the outer gate of the Taj Mahal, I gave him 40 INR (for both of us), and he asked us to instead give 100 INR, as he said we had reserved the auto, even though I had clearly stated before taking the auto that we wanted to share it, not reserve it. I think this was a scam. We walked away, and he didn't insist further. The Taj Mahal entrance was about 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for going inside the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.
      Security outside the Taj Mahal complex.
This red-colored building is the entrance to where you can see the Taj Mahal.
      Taj Mahal.
      Shoe covers for going inside the mausoleum.
      Taj Mahal from side angle.
We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train for Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn't find any that looked interesting, so we just ate at the railway station. During the return trip, we noticed there was a bus stand near the station, which we didn't know about. It turns out you can catch a bus to the Taj Mahal from there. You can click here to check out the location of that bus stand on OpenStreetMap.

Expenses These were our expenses per person (in INR):
• Retiring room at Delhi Railway Station for 12 hours: 131
• Train ticket from Delhi to Agra (Taj Express): 110
• Retiring room at Agra Cantt station for 12 hours: 450
• Auto-rickshaw to Taj Mahal: 20
• Taj Mahal ticket (including going inside the mausoleum): 250
• Food: 350

      Important information for visitors
      • Taj Mahal is closed on Friday.
      • There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.
• Ticket price for Indians is 50, for foreigners and NRIs it is 1100, and for people from SAARC/BIMSTEC countries it is 540. The mausoleum costs 200 extra for everyone.
• A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers cost 10 INR per person inside the complex, but are probably included free of charge with foreigner tickets. We could not find a place to keep our shoes, but some people managed to enter barefoot, indicating there must be some place to keep your shoes.
      • Mobile phones and cameras are allowed inside the Taj Mahal, but not eatables.
      • We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.
      • Regarding the timings, I found this written near the ticket counter: Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days, so the timings are vague. But we came out of the complex at 18:00 hours. I would interpret that to mean the Taj Mahal is open from 07:00 to 18:00, and the ticket counter closes at around 17:00. During the winter, the timings might differ.
• The cheapest way to reach the Taj Mahal is by bus, and the bus stop is here.
      Bye for now. See you in the next post :)

      25 February 2024

      Russ Allbery: Review: The Fund

      Review: The Fund, by Rob Copeland
      Publisher: St. Martin's Press
      Copyright: 2023
      ISBN: 1-250-27694-2
      Format: Kindle
      Pages: 310
      I first became aware of Ray Dalio when either he or his publisher plastered advertisements for The Principles all over the San Francisco 4th and King Caltrain station. If I recall correctly, there were also constant radio commercials; it was a whole thing in 2017. My brain is very good at tuning out advertisements, so my only thought at the time was "some business guy wrote a self-help book." I think I vaguely assumed he was a CEO of some traditional business, since that's usually who writes heavily marketed books like this. I did not connect him with hedge funds or Bridgewater, which I have a bad habit of confusing with Blackwater. The Principles turns out to be more of a laundered cult manual than a self-help book. And therein lies a story. Rob Copeland is currently with The New York Times, but for many years he was the hedge fund reporter for The Wall Street Journal. He covered, among other things, Bridgewater Associates, the enormous hedge fund founded by Ray Dalio. The Fund is a biography of Ray Dalio and a history of Bridgewater from its founding as a vehicle for Dalio's advising business until 2022 when Dalio, after multiple false starts and title shuffles, finally retired from running the company. (Maybe. Based on the history recounted here, it wouldn't surprise me if he was back at the helm by the time you read this.) It is one of the wildest, creepiest, and most abusive business histories that I have ever read. It's probably worth mentioning, as Copeland does explicitly, that Ray Dalio and Bridgewater hate this book and claim it's a pack of lies. Copeland includes some of their denials (and many non-denials that sound as good as confirmations to me) in footnotes that I found increasingly amusing.
      A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks."
Uh-huh. Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later. At the center of this book are the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better." If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:
• In one of the common and all-too-disturbing connections between Wall Street finance and the United States' dysfunctional government, James Comey (yes, that James Comey) ran internal security for Bridgewater for three years, meaning that he was the one who pulled evidence from surveillance cameras for Dalio to use to confront employees during his trials.
• In case the cult vibes weren't strong enough already, Bridgewater developed its own idiosyncratic language worthy of Scientology. The trials were called "probings," firing someone was called "sorting" them, and rating them was called "dotting," among many other Bridgewater-specific terms. Needless to say, no one ever probed Dalio himself.
• You will also be completely unsurprised to learn that Copeland documents instances of sexual harassment and discrimination at Bridgewater, including some by Dalio himself, although that seems to be a relatively small part of the overall dysfunction. Dalio was happy to publicly humiliate anyone regardless of gender.
If you're like me, at this point you're probably wondering how Bridgewater continued operating for so long in this environment. (Per Copeland, since Dalio's retirement in 2022, Bridgewater has drastically reduced the cult-like behaviors, deleted its archive of probings, and de-emphasized the Principles.) It was not actually a religious cult; it was a hedge fund that has to provide investment services to huge, sophisticated clients, and by all accounts it's a very successful one. Why did this bizarre nightmare of a workplace not interfere with Bridgewater's business? This, I think, is the weakest part of this book.
Copeland makes a few gestures at answering this question, but none of them are very satisfying. First, it's clear from Copeland's account that almost none of the employees of Bridgewater had any control over Bridgewater's investments. Nearly everyone was working on other parts of the business (sales, investor relations) or on cult-related obsessions. Investment decisions (largely incorporated into algorithms) were made by a tiny core of people and often by Dalio himself. Bridgewater also appears to not trade frequently, unlike some other hedge funds, meaning that they probably stay clear of the more labor-intensive high-frequency parts of the business. Second, Bridgewater took off as a hedge fund just before the hedge fund boom in the 1990s. It transformed from Dalio's personal consulting business and investment newsletter to a hedge fund in 1990 (with an earlier investment from the World Bank in 1987), and the 1990s were a very good decade for hedge funds. Bridgewater, in part due to Dalio's connections and effective marketing via his newsletter, became one of the largest hedge funds in the world, which gave it a sort of institutional momentum. No one was questioned for putting money into Bridgewater even in years when it did poorly compared to its rivals. Third, Dalio used the tried and true method of getting free publicity from the financial press: constantly predict an upcoming downturn, and aggressively take credit whenever you were right. From nearly the start of his career, Dalio predicted economic downturns year after year. Bridgewater did very well in the 2000 to 2003 downturn, and again during the 2008 financial crisis. Dalio aggressively takes credit for predicting both of those downturns and positioning Bridgewater correctly going into them. This is correct; what he avoids mentioning is that he also predicted downturns in every other year, the majority of which never happened. These points together create a bit of an answer, but they don't feel like the whole picture and Copeland doesn't connect the pieces. It seems possible that Dalio may simply be good at investing; he reads obsessively and clearly enjoys thinking about markets, and being an abusive cult leader doesn't take up all of his time. It's also true that to some extent hedge funds are semi-free money machines, in that once you have a sufficient quantity of money and political connections you gain access to investment opportunities and mechanisms that are very likely to make money and that the typical investor simply cannot access. Dalio is clearly good at making personal connections, and invested a lot of effort into forming close ties with tricky clients such as pools of Chinese money. Perhaps the most compelling explanation isn't mentioned directly in this book but instead comes from Matt Levine. Bridgewater touts its algorithmic trading over humans making individual trades, and there is some reason to believe that consistently applying an algorithm without regard to human emotion is a solid trading strategy in at least some investment areas. Levine has asked in his newsletter, tongue firmly in cheek, whether the bizarre cult-like behavior and constant infighting is a strategy to distract all the humans and keep them from messing with the algorithm and thus making bad decisions. Copeland leaves this question unsettled. Instead, one comes away from this book with a clear vision of the most dysfunctional workplace I have ever heard of, and an endless litany of bizarre events each more astonishing than the last. 
If you like watching train wrecks, this is the book for you. The only drawback is that, unlike other entries in this genre such as Bad Blood or Billion Dollar Loser, Bridgewater is a wildly successful company, so you don't get the schadenfreude of seeing a house of cards collapse. You do, however, get a helpful mental model to apply to the next person who tries to talk to you about "radical honesty" and "idea meritocracy." The flaw in this book is that the existence of an organization like Bridgewater is pointing to systematic flaws in how our society works, which Copeland is largely uninterested in interrogating. "How could this have happened?" is a rather large question to leave unanswered. The sheer outrageousness of Dalio's behavior also gets a bit tiring by the end of the book, when you've seen the patterns and are hearing about the fourth variation. But this is still an astonishing book, and a worthy entry in the genre of capitalism disasters. Rating: 7 out of 10

      28 January 2024

      Russell Coker: Links January 2024

Long Now has an insightful article about domestication that considers whether humans have evolved to want to control nature [1]. The OMG Elite hacker cable is an interesting device [2]. A Wifi device in a USB cable to allow remote control and monitoring of data transfer, including remote keyboard control and sniffing. Pity that USB-C cables have chips in them so you can't use a spark to remove unwanted chips from modern cables. David Brin's blog post The core goal of tyrants: The Red-Caesar Cult and a restored era of The Great Man has some insightful points about authoritarianism [3]. Ron Garret wrote an interesting argument against Christianity [4], and a follow-up titled Why I Don't Believe in Jesus [5]. He has a link to a well written article about the different theologies of Jesus and Paul [6]. Dimitri John Ledkov wrote an interesting blog post about how they reduced disk space for Ubuntu kernel packages and RAM for the initramfs phase of boot [7]. I hope this gets copied to Debian soon. Joey Hess wrote an interesting blog post about trying to make LLM systems produce bad code if trained on his code without permission [8]. Arstechnica has an interesting summary of research into the security of fingerprint sensors [9]. Not surprising that the products of the 3 vendors that supply almost all PC fingerprint readers are easy to compromise. Bruce Schneier wrote an insightful blog post about how AI will allow mass spying (as opposed to mass surveillance) [10]. ZDnet has an informative article How to Write Better ChatGPT Prompts in 5 Steps [11]. I sent this to a bunch of my relatives. AbortRetryFail has an interesting article about the Itanic Saga [12]. Erberus sounds interesting, maybe VLIW designs could give a good ratio of instructions to power unlike the Itanium which was notorious for being power hungry. Bruce Schneier wrote an insightful article about AI and Trust [13]. We really need laws controlling these things! David Brin wrote an interesting blog post on the obsession with historical cycles [14].

      24 January 2024

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2023 edition

For the fifth year in a row, I've asked Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic. Licences

      29 December 2023

      Russ Allbery: Review: The Afterward

      Review: The Afterward, by E.K. Johnston
      Publisher: Dutton Books
      Copyright: February 2019
      Printing: 2020
      ISBN: 0-7352-3190-7
      Format: Kindle
      Pages: 339
      The Afterward is a standalone young adult high fantasy with a substantial romance component. The title is not misspelled. Sir Erris and her six companions, matching the number of the new gods, were successful in their quest for the godsgem. They defeated the Old God and destroyed Him forever, freeing King Dorrenta from his ensorcellment, and returned in triumph to Cadrium to live happily ever after. Or so the story goes. Sir Erris and three of the companions are knights. Another companion is the best mage in the kingdom. Kalanthe Ironheart, who distracted the Old God at a critical moment and allowed Sir Erris to strike, is only an apprentice due to her age, but surely will become a great knight. And then there is Olsa Rhetsdaughter, the lowborn thief, now somewhat mockingly called Thief of the Realm for all the good that does her. The reward was enough for her to buy her freedom from the Thief's Court. It was not enough to pay for food after that, or enough for her to change her profession, and the Thief's Court no longer has any incentive to give her easy (or survivable) assignments. Kalanthe is in a considerably better position, but she still needs a good marriage. Her reward paid off half of her debt, which broadens her options, but she's still a debt-knight, liable for the full cost of her training once she reaches the age of nineteen. She's mostly made her peace with the decisions she made given her family's modest means, but marriages of that type are usually for heirs, and Kalanthe is not looking forward to bearing a child. Or, for that matter, sleeping with a man. Olsa and Kalanthe fell in love during the Quest. Given Kalanthe's debt and the way it must be paid, and her iron-willed determination to keep vows, neither of them expected their relationship to survive the end of the Quest. Both of them wish that it had. The hook is that this novel picks up after the epic fantasy quest is over and everyone went home. This is not an entirely correct synopsis; chapters of The Afterward alternate between "After" and "Before" (and one chapter delightfully titled "More or less the exact moment of"), and by the end of the book we get much of the story of the Quest. It's not told from the perspective of the lead heroes, though; it's told by following Kalanthe and Olsa, who would be firmly relegated to supporting characters in a typical high fantasy. And it's largely told through the lens of their romance. This is not the best fantasy novel I've read, but I had a fun time with it. I am now curious about the intended audience and marketing, though. It was published by a YA imprint, and both the ages of the main characters and the general theme of late teenagers trying to chart a course in an adult world match that niche. But it's also clearly intended for readers who have read enough epic fantasy quests that they will both be amused by the homage and not care that the story elides a lot of the typical details. Anyone who read David Eddings at an impressionable age will enjoy the way Johnston pokes gentle fun at The Belgariad (this book is dedicated to David and Leigh Eddings), but surely the typical reader of YA fantasy these days isn't also reading Eddings. I'm therefore not quite sure who this book was for, but apparently that group included me. Johnston thankfully is not on board with the less savory parts of Eddings's writing, as you might have guessed from the sapphic romance. 
There is no obnoxious gender essentialism here, although there do appear to be gender roles that I never quite figured out. Knights are referred to as sir, but all of the knights in this story are women. Men still seem to run a lot of things (kingdoms, estates, mage colleges), but apart from the mage, everyone on the Quest was female, and there seems to be an expectation that women go out into the world and have adventures while men stay home. I'm not sure if there was an underlying system that escaped me, or if Johnston just mixed things up for the hell of it. (If the latter, I approve.) This book does suffer a bit from addressing some current-day representation issues without managing to fold them naturally into the story or setting. One of the Quest knights is transgender, something that's revealed in an awkward couple of paragraphs and then never mentioned again. Two of the characters have a painfully earnest conversation about the word "bisexual," complete with a strained attempt at in-universe etymology. Racial diversity (Olsa is black, and Kalanthe is also not white) seemed to be handled a bit better, although I am not the reader to notice if the discussions of hair maintenance were similarly awkward. This is way better than no representation and default-white characters, to be clear, but it felt a bit shoehorned in at times and could have used some more polish. These are quibbles, though. Olsa was the heart of the book for me, and is exactly the sort of character I like to read about. Kalanthe is pure stubborn paladin, but I liked her more and more as the story continued. She provides a good counterbalance to Olsa's natural chaos. I do wish Olsa had more opportunities to show her own competence (she's not a very good thief, she's just the thief that Sir Erris happened to know), but the climax of the story was satisfying. My main grumble is that I badly wanted to dwell on the happily-ever-after for at least another chapter, ideally two. Johnston was done with the story before I was. The writing was serviceable but not great and there are some bits that I don't think would stand up to a strong poke, but the characters carried the story for me. Recommended if you'd like some sapphic romance and lightweight class analysis complicating your Eddings-style quest fantasy. Rating: 7 out of 10
