Before reaching Vietnam
Continuing from the last post, Badri and I took a flight from the Brunei International Airport to Kuala Lumpur on the 12th of December 2024. We reached Kuala Lumpur in the evening.
After arriving at the airport, we went through immigration. In a previous post, I mentioned that we had put our stuff in lockers at the TBS bus terminal in Kuala Lumpur. Therefore, we had to go there.
The locker was automated and required us to enter the PIN we had set. But upon entering the PIN, the locker wasn't unlocking. After trying for 10-15 minutes without any luck, we went looking for help, as the lockers weren't under supervision.
So, I roamed around, found a staff member, and reported that our locker wasn't unlocking. They called the person in charge of the lockers, who came to us in a few minutes and used his admin access to open the locker. We were supposed to pay for the lockers by inserting banknotes through a slot. However, as the machine wasn't working, we paid that person directly instead.
We soon went back to the KL airport to catch our morning flight to Ho Chi Minh City in Vietnam. At the check-in counter, we were afraid we would have to pay extra, as our luggage exceeded the allowed weight limit. This was also a budget airline (AirAsia), and our tickets didn't include a check-in bag.
Generally, passengers from countries requiring a visa to visit Vietnam (such as India) have to show their visa at the airline counter to get their boarding pass. However, when we went to the AirAsia counter at the Kuala Lumpur airport, they didn't weigh our bags and asked us to get our boarding passes from an automated kiosk. So, we got our boarding passes printed and proceeded to airport security.
While clearing airport security, a lotion I had bought in Singapore was confiscated because it was 200 mL, exceeding the limit of 100 mL per bottle. Had that 200 mL of liquid been in two different bottles of 100 mL each, I would have been allowed to take it in my carry-on bag, but a single 200 mL bottle wasn't! I would have been allowed to keep it in a check-in bag, but my ticket didn't include one. Huh, airports and their weird rules :( The lotion was an expensive one, so having it thrown away did ruin my mood.
Overview
We started our Vietnam trip in Ho Chi Minh City in the south on the 13th of December 2024 and finished it in Hanoi in the north on the 20th of December. We traveled from Ho Chi Minh City to Hanoi in chunks, mostly by train, except for a hundred or so kilometers by bus. On the way, we visited Nha Trang, Hoi An, and Hue. The distance between Ho Chi Minh City and Hanoi is 1700 km.
For your reference, here are those places labeled on Vietnam's map.
A map of Vietnam with the places we went to labeled. Map data: CARTO, MapTiler, OpenStreetMap.
Ho Chi Minh City
We landed in Ho Chi Minh City early in the morning on the 13th of December 2024. I was tired and sleepy, as I hadn't gotten a good night's sleep. After going through immigration, we went to a currency exchange counter to get Vietnamese dong. Unlike in the other countries on this trip, money exchange counters in Vietnam didn't accept Indian rupees. Therefore, we exchanged euros for Vietnamese dong at the airport.
After getting out of the airport, we took a bus to the city center. The fare was 15,000 dong (approximately 50 Indian rupees). Our plan was to meet Badri's friend and stay the night at his apartment.
So we went to a café nearby and bought a coffee for each of us for 75,000 dong. We went upstairs and sat for a while. The Wi-Fi password was mentioned on our bill. During the trip, I found out about the café culture of Vietnam. They have their own coffee brands (such as Highlands Coffee), and you can sit down at any of the cafés to work or to wait for the rain to stop. It rained a lot while we were there, so we did use these cafés for that purpose.
Badri's friend met us there, and we roamed around the area a bit, which included roaming inside a beautiful park. Then Badri's friend took us to a restaurant. Because I do not eat meat, he took us to a vegan restaurant. Having been to four Southeast Asian countries at this point (excluding Vietnam), I was under the impression that there wouldn't be a lot of things for my diet in Vietnam.
A picture of the park we roamed around in Ho Chi Minh City. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
However, I was pleasantly surprised at the restaurant. I found all the dishes to be tasty, especially their signature noodles, called Pho. I liked another dish so much that I tracked down the restaurant again with Badri, using the geotagged image of the bill I had taken earlier, to have it again. As a tip for vegans coming to Vietnam: places with the word Chay (without any accented letters) in their name are vegan-only.
This is the restaurant Badri s friend took us to. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
One of the dishes we had in the restaurant. This one was especially tasty. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
One of the dishes we had in the restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
These noodles are called Pho and are very popular in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
At night, we went to a supermarket, where I got myself some oranges and guavas. Then, we went to a Japanese restaurant, where I didn't have anything, as there was no vegetarian option available for me. Then we took a free bus to Badri's friend's apartment. The construction company that built the apartment complex also runs this free bus service from their residential area to different parts of the city as a way of promoting their apartments. Anyone can take the bus, not just residents.
The next day, we took the free bus back to the city center and checked in to a hostel for a night. We took two beds in dormitories, which were 88,000 dongs (270 rupees) for each bed for a night. In Vietnam, if you can spend around 300 rupees per night, you can get a bed in a decent hostel.
Train from Ho Chi Minh City to Nha Trang
On the night of the 15th of December 2024, we boarded a train from Ho Chi Minh City to Nha Trang. The ticket for each of us was 519,000 dongs (1600 Indian rupees). The train name was SNT2. When we reached the Ho Chi Minh City train station, we noticed that the station was rather small by Indian standards.
After entering the train station, we went to the first platform, where tickets were checked by a staff member. Ho Chi Minh City was the originating station for our train, so it was already standing at the station. We had to cross the railway tracks on foot to reach the platform our train was on. Then we located our coach, where a ticket inspector was standing at the gate. He let us in after checking our tickets. In all these instances, we just had to show the digital ticket we had received by email.
Unlike Indian trains, this train didn't have side berths. Additionally, I liked the fact that it had a dedicated space to put our bags in, which was very convenient. The train departed from Ho Chi Minh City at 21:05 and arrived in Nha Trang at 05:30 in the morning.
Interior of our train coach. Trains in Vietnam don't have side berths, unlike in India. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A picture of the berths from our coach. It had three tiers, similar to a 3 AC coach in Indian trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The train had a cabin to put the bags in. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Nha Trang train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Nha Trang
Nha Trang is a coastal place, and we planned to go to a beach. We figured out that the bus to the airport could drop us near the beach. Therefore, we went to the bus station to catch the airport bus. The bus station was within walking distance of the railway station, so we decided to walk.
On the way, we stopped at a small shop for a coffee. The shop also gave a complimentary cup of green tea along with the coffee. I found out later that it is common for local shops to give a cup of complimentary green tea in Vietnam.
I got a complimentary cup of green tea along with my coffee in Nha Trang. On this trip, Badri and I found out that this is customary at local places in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Soon we reached the bus station and took a bus to the beach. It was 65,000 dong (₹200). After getting down from the bus, I had coconut water and some eggs at a small local place.
Eggs being cooked on a pan for my order. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we went to the beach, but nobody else was there. We spent some time there and, as it started raining, went back to the place where the bus had dropped us. We couldn't find a bus for some time. A taxi driver approached us and agreed to take us to the city center for 200,000 dong (₹650). For reference, the place where he dropped us was 35 km from where we took the taxi. Taxi fares in Vietnam were also cheap!
The beach we went to in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Nha Trang was a beautiful place, and so we roamed around for a while. Then we stopped at a Highlands Coffee branch for a while. Since Christmas was coming up, the café had a Christmas tree, and I liked the Christmas vibes. They were playing Mariah Carey's All I Want for Christmas Is You.
This one was shot in the city center. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Inside a Highlands Coffee cafe in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A coffee I got from Highlands Coffee in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
In the evening, we went to a local place to eat. The place had Chay in its name, and you know what that means: it was a vegan place. There was just one man there, and no other customers. I don't remember the names of the dishes we ordered, but it was a bowl of soupy noodles and a bowl of dry noodles. They were very tasty. To top that off, the meal was a total of 55,000 dong (₹180) for both of us.
The host was welcoming and friendly, and we had a nice conversation with him. In Vietnam, restaurants give you chopsticks to eat noodles with. While Badri was good at using them, I wasn't. So, the host of this restaurant helped me learn to use chopsticks. Although my technique was not perfect and I took a bit of time, I could now eat solely with chopsticks.
The restaurant we went to in Nha Trang. The word Chay in the name means it was a vegan restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Soupy noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Dry noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Our plan was to take a night bus to Hoi An, and we were hoping to find a bus stand. However, we couldn't find one. Asking around about the pickup location of the Hoi An bus led us to many different locations. Finally, we ended up at a bus booking agency's office, where we found out that there were no tickets available for Hoi An.
At this point, we gave up on booking the bus and searched for trains instead. As we didn't have a local SIM, we asked the agency to let us connect to their Wi-Fi so that we could look for trains. They were kind enough to let us do that, even though it seemed like they were going to close the office in like 10 minutes.
Unfortunately, all the sleeper berths on the next train were booked from Nha Trang to Hoi An, with only seating berths available. The journey takes around 10 hours, so I wasn't comfortable traveling in a seating berth.
Here I came up with the idea of looking for sleeper berths from an intermediate stop. Fortunately, there were sleeper berths available from the next stop, Ninh Hòa. Therefore, we booked a seating berth from Nha Trang to Ninh Hòa and a sleeper berth from Ninh Hòa to Trà Kiệu (the nearest railway station to Hoi An). The train name was SE6, and it was a total of 500,000 dong (₹1600) per person.
So, we went to the Nha Trang railway station and boarded the train. We had to spend 40 minutes seated until the train reached the next stop, after which we could move to our sleeper berths. Badri had some friendly co-passengers on that trip, who gave him Saigon beer and some crispy papad-like thing. They offered me some as well, but I thought it was non-veg, so I declined.
Hoi An
On the morning of the 17th of December 2024, we got down at Trà Kiệu station at around 09:30. Our hostel was in Hoi An, which was around 22 km from the station. There was no public transport to get there.
Instead, there was a taxi driver on the train platform. We told him the name of our hostel, and he quoted 270,000 dong (around ₹850). We said it was too expensive for us, so he came down to 250,000 dong. At this point, we told him that we could give him no more than 200,000 dong, but he didn't agree.
Badri tried a trick. He asked the driver to show us prices in the Grab app (a popular taxi booking app in Southeast Asia). Unfortunately, the Grab app showed 258,000 dongs, which was more than the fare the driver agreed to.
So we walked away as if we had so many options (we didn't!) to reach the hostel. We got out of the station and stopped at a small shop outside to have some coffee. As is customary in Vietnam, we got a complimentary green tea here as well.
This was the place we had our coffee in Tra Kieu. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
That taxi driver also joined us and sat in that shop. He started talking with the locals in the shop in the local language. The taxi driver was insistent on taking us to Hoi An for 250,000 dong. At this point, Badri told the taxi driver (using translation software) that we usually use public transport during our trips and aren't used to paying high prices to get around, so he could drop us somewhere in Hoi An for 200,000 dong, as we didn't mind walking a bit to reach our hotel.
After reading this, the taxi driver agreed to take us to our hostel for 200,000 dong (₹660). He also had me take a picture of him with Badri after this. I think such a bargaining tactic would not work in India.
Photo of Badri with taxi driver. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A nice thing we noticed in Vietnam is that once the bargaining is done and the deal is settled, people don't try to bargain further or keep talking about the subject. Before the deal, the driver was somewhat insistent and argumentative, but after the deal was done, it was as if no argument had happened at all.
A picture of Tra Kieu area near the train station we got down at. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
We were treated to some beautiful scenery on the way to our hostel. Soon we reached the place and completed the check-in formalities. While our room was being prepared, we had an egg sandwich with coffee at the hostel. I found the egg sandwich very tasty. The bread looked like a French baguette. The hostel was ₹240 per night for each of us.
The name of the hostel was Bana Spa. It is operated by a family, and we liked staying there; we can recommend it if you find yourself in Hoi An.
Our breakfast in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A photo of the hostel we stayed at in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
We also rented a bicycle each for 25,000 dong per day (₹80) and explored the old town during the evening. Hoi An is famous for Vietnamese silk. Tourists come here to buy fabric and get it tailored. The buildings here looked old, and they were painted yellow with gabled roofs.
Typical yellow house with gabled roof in Hoi An old town. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Here, I also had egg coffee for the first time, and I liked it. Egg coffee is a delicacy of Hanoi, but you can get it in other parts of Vietnam. If you find yourself in Vietnam, then I recommend you try egg coffee. We also bought some cool T-shirts and other souvenirs, such as a Vietnamese hat, from here.
Egg coffee I had in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Hue
The next day, the 18th of December 2024, we went to Hue by bus. As we could not book a bus on our own in Nha Trang, we asked the hostel to book this one for us. We booked it a day in advance, and they told us to be ready by 07:00 in the morning. At 07:00, a minibus arrived, which took us to a bus agency's office. There we waited for a few minutes and got on the bus to Hue.
The bus had sleeper seats, so I took the opportunity to catch some sleep. The ride was comfortable, so I am assuming the roads were good. In a couple of hours, we reached Hue. Again, we went to Highlands Coffee to have some coffee, charge our phones, and use the internet, not to mention using the bathrooms.
During the afternoon, we went to a local restaurant named Quán Chay Thanh Liễu. It was a vegan restaurant (remember what I mentioned earlier about Chay being in the name?). On the way, we had a steamed dumpling shaped like a momo, called banh bao, from a street vendor. It wasn't very good, but I found it worth trying.
Banh bao in Hue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
At the restaurant, we ordered a hot pot. First, they brought noodles and a gas stove. Then came the stock, and our gas stove was turned on. The stock was kept simmering on the stove, and we had it bit by bit with the noodles. A big hot pot at this place costs 50,000 dong (₹170). Then we had bánh cuốn, steamed rolls made of rice flour, for 10,000 dong (₹33).
Hot Pot. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Added soup to the noodles. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Steamed rolls made of rice flour. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Restaurants in Vietnam usually include photos of the dishes in their menus or write descriptions in English. So, even though the dish names were Vietnamese, we had no problem ordering food. In addition, all the places we went to provided free Wi-Fi: they either mention the Wi-Fi password on the bill or the menu, or paste it on a wall. This made our trip smoother without getting a local SIM.
Menu from a restaurant in Ho Chi Minh City with detailed description of the food. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we slowly walked towards the railway station, as we had a night train to Hanoi. We had egg coffee in a café. Near the railway station, we had a bánh mì (egg sandwich). As for sightseeing, we had plans to visit a couple of places in Hue, but we ended up spending all our time inside sheltered spaces due to heavy rain.
We had booked train SE20 to Hanoi, which had a departure time of 20:41 from Hue. This one was 948,000 dong (₹3100) for me and 870,000 dong (₹2900) for Badri. My ticket was pricier than Badri's because I got a lower berth. Our train was late by half an hour, so we waited in the common area of the station. After the train arrived, we got in and took our seats.
The cabin had four berths, two upper and two lower, similar to India's First AC class. The ticket inspector came to us and offered us the whole cabin (the two additional berths) for 300,000 dong (₹1,000), which we declined. However, this hinted that the other two berths were not reserved. Eventually, we had the whole cabin to ourselves, as nobody else showed up. It was a 14-hour journey, and I got a good sleep.
Our berths in the train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Hanoi
On the morning of the 19th of December 2024, we reached Vietnam's capital, Hanoi. We had booked a private hotel room for ₹800. It was 1 km from the Hanoi airport but pretty far from the railway station. So, we roamed around the city and went to the hotel in the evening.
First, we walked to a place and had egg coffee with egg sandwiches. Then we went to Hanoi Train Street, which was walking distance from the train station. After clicking some pictures at the train street, we went to a museum nearby. Upon reaching there, we found out that it was closed.
Egg coffee in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Hanoi train street is a tourist attraction in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we went shopping for jackets, as Hanoi was cold compared to the other parts of Vietnam we had been to, and since many jackets are manufactured in Vietnam, we thought they would be cheaper here. I liked some of them, but they were not available in my size. Eventually, we didn't buy anything at the clothes shop.
In the evening, I bought a Vietnamese-styled phin coffee filter and coffee powder from Highlands Coffee. We spent a lot of time in their cafes, so it made sense to buy some souvenirs from there. Badri bought a few coffee filters for his family at Trung Nguyen, where I also bought another filter.
We had dinner at a local place where we had pho and banh it. The banh it was served wrapped in banana leaves and was made of sticky rice.
A picture of pho we had in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Banh it is served wrapped in banana leaves. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Banh it. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Next, we went to the Hanoi railway station to catch a bus to the airport, since our hotel was 1 km from the airport. The locals there helped us find the bus. It took like an hour to get to the airport. We saw on OpenStreetMap that we could take a bus from there to the hotel, but we could not find it. So we walked to our hotel instead.
It was a decent hotel room for ₹800 for a night. We went outside to explore the area and had egg sandwiches and egg coffee at a local place. Again, we were given complimentary green tea. We went to this place like three times; we had practically become regulars by the time we left.
The next day, the 20th of December 2024, we took a bus to the airport and boarded our flight to Delhi.
Credits: Thanks to Badri, Kishy, and Richard for proofreading.
Between July and November 2025, the Debian pt_BR translation team received
five students for an online mentoring program. The initiative was carried
out in partnership with the Federal University of ABC through the
extension project "Immersion in Free Software", coordinated by
professors Suzana Santos and Miguel Vieira.
During the mentorship, the mentees worked on several of the team's translation
efforts and joined presentations about the Debian Project and its
community given by the mentors. We thank Ana Parra, Bruno Freitas,
Henrique Barbosa, Raul Banzatto, and Vitoria Cordeiro for their dedication
and contributions. We also thank the members of the team who reviewed the
work of the mentees, especially the ones who were designated as official
mentors, namely Allythy Rennan, Daniel Lenharo, Thiago Pezzo, and Victor Marinho.
Results:
Package descriptions, translations: 27
Package descriptions, revisions: 190
Web pages: 11
Revisions to the Debian Administrator's Handbook
Revisions to the Debian Edu documentation
We hope that this experience will inspire new paths and that you continue to
contribute to Free Software, especially to Debian.
I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month.
Unlike most other wikis, which start their lives in the Wikimedia Incubator
before the full wiki is created, in this case the community had been using a
completely external MediaWiki site to build the wiki before it was approved as
a "proper" Wikipedia wiki,[1] and now that external wiki needed to be imported
into the newly created Wikimedia-hosted wiki. (As far as I'm aware, the only
previous time an external wiki has been imported to a Wikimedia project was in
2013, when Wikitravel was forked as Wikivoyage.)
Creating a Wikimedia wiki these days is actually pretty straightforward, at least
compared to what it was like a couple of years ago. Today the process
mostly involves using a script to generate two configuration changes, one to add
the basic configuration for the wiki to operate and another to add the wiki to the
list of all wikis that exist, and then running a script to create the wiki database
in between deploying those two configuration changes. And then you wait half an
hour while the script that tells all Wikidata client wikis about the new wiki runs
on one wiki at a time.
The primary technical challenge in importing a third-party wiki is that there's
no unified login (SUL) making sure that a given username maps to the same account
on both wikis. This means that the usual strategy of using the functionality I
wrote in CentralAuth to manually create local accounts can't be used as-is, and
so we needed to come up with a new way of matching everyone's contributions to
their existing Wikimedia accounts.
(Side note: While the user-facing interface tries to present a single "global"
user account that can be used on all public Wikimedia wikis, in reality the
account management layer in CentralAuth is mostly just a glue layer to link
together individual "local" accounts on each wiki that the user has ever
visited. These local accounts have independent user ID numbers (for example, I
am user #35938993 on the English Wikipedia but #4 on the new Toki Pona
Wikipedia) and are what most of MediaWiki code interacts with, except for a few
features specifically designed with cross-wiki usage in mind. This distinction
is also still very much present and visible in the various administrative and
anti-abuse workflows.)
The approach we ended up choosing was to rewrite the dump file before importing,
so that a hypothetical account called $Name would be turned into
$Name~wikipesija.org after the import.[2] We also created empty user accounts
that would take
ownership of the edits to be imported so that we could use the standard account
management tools on them later on. MediaWiki supports importing contributions
without a local account to attribute them to, but it doesn't seem to be possible
to convert an imported actor[3] into a regular user later on, which we wanted
to keep as a possibility, even with the minor downside of creating a few hundred
users that will likely never be touched again.
We also made deliberate decisions to add the username suffix to everyone, not
just to those names that conflicted with existing SUL accounts, and to deal with
renaming users who wanted their contributions linked to an existing SUL account
only after the import. This reduced complexity, and thus risk, in the import
phase, which already had many more unknowns than the rest of the process, and it
was also the better option ethically: suffixing all names meant we would not
imply that those people had chosen to be Wikimedians with those specific
usernames (when in reality it was us choosing to import those edits into the
Wikimedia universe), and doing renames using the standard MediaWiki account
management tooling meant that it produced the normal public log entries that
all other MediaWiki administrative actions create.
With all of the edits imported, the only major thing remaining was doing those
merges I mentioned earlier to attribute imported edits to people's existing SUL
accounts. Thankfully, the local-account-based system makes this actually pretty
simple. Usually CentralAuth prevents renaming individual local accounts that are
attached to a global account, but that check can be bypassed with a maintenance
script or a sufficiently privileged account. Renaming the user automatically
detached it from the previous global account, after which another maintenance
script could be used to attach the user to the correct global account.
[1] That external site was a fork of a fork of the original Toki Pona Wikipedia,
which was closed in 2005. And because cool URIs don't change,
we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org.
[2] wikipesija.org was the domain where the old third-party wiki was hosted, and
~ was used as a separator character in usernames during the
SUL finalization in the early
2010s, so using it here felt appropriate as well.
[3] An actor is a MediaWiki term and a
database table referring to anything that can perform edits or logged actions.
Usually an actor is a user account or an IP address, but an imported username in
a specific format can also be represented as an actor.
Welcome to the report for November 2025 from the Reproducible Builds project!
These monthly reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
10 years of Reproducible Builds at SeaGL 2025
On Friday the 8th of November, Chris Lamb gave a talk called 10 Years of Reproducible Builds at SeaGL in Seattle, WA.
Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free software, hardware and culture. Chris's talk:
[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren't all the builds reproducible now?
Distribution work
In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian's Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that "only one extra build [is] needed"; it "uses the same sbuild and ccache tooling as the normal build"; and it "works for any Debian release". The merge request was merged by Emmanuel Arias and is now active.
kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport enforcing a threshold of the form "at least X of my N trusted rebuilders need to confirm they reproduced the binary" before Debian packages are installed. Configuration can be done through a config file or through a curses-like user interface.
Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.
Elsewhere, Roland Clobus performed some analysis of the live Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reported that the issues have since been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues.
Lastly, Jochen Sprickerhof filed a bug announcing their intention to binary NMU a very large number of packages of the R programming language after a reproducibility-related toolchain bug was fixed.
Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Julien Malka and Arnout Engelen launched the new hash collection
server for NixOS. Aside from improved reporting to help focus reproducible builds
efforts within NixOS, it collects build hashes as individually-signed attestations
from independent builders, laying the groundwork for further tooling.
Tool development
diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further work on automatically deploying to PyPI by liaising with the PyPI developers/maintainers (this remains an experimental feature). [][][]
In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:
Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
Drop the debian/watch file, as Lintian now flags this as error for native Debian packages. [][]
Bump Standards-Version to 4.7.2, with no changes needed. []
Drop the Rules-Requires-Root header as it is no longer required. []
In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []
Website updates
Once again, there were a number of improvements made to our website this month including:
Bernhard M. Wiedemann updated the SOURCE_DATE_EPOCH page to fix the Lisp example syntax. []
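For background, honouring the SOURCE_DATE_EPOCH convention documented on that page takes only a few lines in a build tool; a minimal Python sketch (generic, not tied to any particular package):

```python
import datetime
import os
import time

def build_timestamp():
    """Return the timestamp to embed in build artifacts.

    Honours the SOURCE_DATE_EPOCH environment variable when set, so
    that repeated builds embed the same, deterministic date; falls
    back to the current time otherwise.
    """
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)
```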
Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.
Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
The Raven Scholar is an epic fantasy and the first book of a
projected trilogy. It is Antonia Hodgson's first published fantasy novel;
her previous published novels are historical mysteries. I would classify
this as adult fantasy (the main character is thirty-four with a stable
court position), but it has strong YA vibes because of the generational
turnover feel of the main plot.
Eight years before the start of this book, Andren Valit attempted to
assassinate the emperor and failed. Since then, his widow and three
children (twins Yana and Ruko, and infant Nisthala) have been living in
disgrace in a cramped apartment, subject to constant inspections and
suspicion. As the story opens, they have been summoned to appear before
the emperor, escorted by a young and earnest Hound (essentially the state
security services) named Shal Worthy. The resulting interrogation is full
of dangerous traps. Not all of them will be avoided.
The formalization of the consequences of that imperial summons falls to an
unpopular Junior Archivist (Third Class) whose one notable skill is her
penmanship. A meeting that was disastrous for the Valits becomes
unexpectedly fortunate for the archivist, albeit with a poisonous core.
Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's
twenty-four years of permitted reign is coming to an end. The Festival is
about to begin. One representative from each of the empire's eight anats
(religious schools) will compete in seven days of Trials, save for the
Dragons who do not want the throne and will send a proxy. The victor
according to the Trials scoring system will become emperor and reign
unquestioned for twenty-four years or until resignation. This is the
system that put an end to the era of chaos and has been in place for over
a thousand years.
On the eve of the Trials, the Raven contender is found murdered. Neema is
immediately a suspect; she even has reasons to suspect herself. She
volunteers to lead the investigation because she has to know what
happened. She is also volunteered to be the replacement Raven contender.
There is no chance that she will become emperor; she doesn't even know how
to fight. But agnostic Neema has a rather unexpected ally.
As the last chime fades we drop neatly on to the balcony's rusting
hand rail, folding our wings with a soft shuffle. Noon, on the ninth
day of the eighth month, 1531. Neema Kraa's lodgings. We are here,
exactly where we should be, at exactly the right moment, because we
are the Raven, and we are magnificent.
The Raven Scholar is a rather good epic fantasy, with some caveats
that I'll get to in a moment, but I found it even more fascinating as a
genre artifact.
I've read my share of epic fantasy over the years, although most of my
familiarity with the current wave of new adult fairy epics comes from
reviews rather than personal experience. The Raven Scholar is epic
fantasy, through and through. There is court intrigue, a main character
who is a court functionary unexpectedly thrown into the middle of some
problem, civilization-wide stakes, dramatic political alliances, detailed
magic and mythological systems, and gods. There were moments that reminded
me of a Guy Gavriel Kay novel, although Hodgson's characters tend more
towards disarming moments of humanization instead of Kay's operatic scenes
of emotional intensity.
But The Raven Scholar is also a murder mystery, complete with a
crime scene, clues, suspects, evidence, an investigation, a possibly
compromised detective, and a morass of possible motives and red herrings.
I'm not much of a mystery reader, but this didn't feel like the sort of
ancillary mystery that might crop up in the course of a typical epic
fantasy. It felt like a full-fledged investigation with an amateur
detective; one can tell that Hodgson's previous four books were historical
mysteries.
And then there's the Trials, which are the centerpiece of the book.
This book helped me notice that people (okay, me, I'm the people) have
been sleeping on the influence of The
Hunger Games, Battle Royale, and reality TV (specifically
Survivor) on genre fiction, possibly because the more obvious riffs
on the idea (Powerless, The Selection) have been young adult
or new adult. Once I started looking, I realized this idea is everywhere
now: Throne of Glass, Fourth Wing, even The Night
Circus to some extent. Competitions with consequences are having a
moment.
I suspect having a competition to decide the next emperor is going to
strike some traditional fantasy readers as sufficiently absurd and
unbelievable that it will kick them out of the book. I had a moment of
"okay, this is weird, why would anyone stick with this system for so long"
myself. But I would encourage such readers to interrogate whether that's
only a response from unfamiliarity; after all, strange women lying in
ponds distributing swords is no basis for a system of government either.
This is hardly the most unrealistic epic fantasy trope, and it has the
advantage of being a hell of a plot generator when handled well.
Hodgson handles it well. Society in this novel is structured around the
anats and the eight Guardians, gods who, according to myth, had returned
seven times previously to save the world, but who will destroy the world
when they return again. Each Guardian represents a group of
characteristics and useful societal functions: the Ox is trustworthy,
competent and hard-working; the Fox is a trickster and a rule-bender; the
Raven is shrewd and careful and is the Guardian of scholars and lawyers.
Each Trial is organized by one of the anats and tests the contenders for
the skills most valued by that Guardian, often in subtle and rather
ingenious ways. There are flaws here that you could poke at if you wanted
to, but I was charmed and thoroughly entertained by how well Hodgson
weaves the story around the Trials and uses the conflicting values to
create character conflict, unexpected alliances, and engrossing plot.
Most importantly for a book of this sort, I liked Neema. She has a
charming combination of competence, quirks (she is almost physically
unable to not correct people's factual errors), insecurity, imposter
syndrome, and determination. She is way out of her depth and knows it, but
she has an ethical core and an insatiable curiosity that won't let her
leave the central mysteries of the book alone. And the character dynamics
are great; there are a lot of characters, including the competition
problem of having to juggle eight contenders and give them all sufficient
characterization to be meaningful, but this book uses its length to give
each character some room to breathe. This is a long book, well over 600
pages, but it felt packed with events and plot twists. After every chapter
I had to fight the urge to read just one more.
The biggest drawback of this book is that it is very much the first book
of a trilogy, none of the other volumes are out yet, and the ending is
rather nasty. This is the sort of trilogy that opens with a whole lot of
bad things happening, and while I am thoroughly hooked and will purchase
the next volume as soon as it's available, I wish Hodgson had found a way
to end the book on a somewhat more positive or hopeful note. The middle of
the book was great; the end was a bit of an emotional slog, alas. The
writing is good enough here that I'm fairly sure the depression will be
worth it, but if you need your endings to be triumphant (and who could
blame you in this moment in history), you may want to wait on this one
until more volumes are out.
Apart from that, though, this was a lot of fun. The Guardians felt like
they came from a different strand of fantasy than you usually see in epic,
more of a traditional folk tale vibe, which adds an intriguing twist to
the epic fantasy setting. The characters all work, and Hodgson even pulls
off some Game of Thrones style twists that make you sympathetic to
characters you previously hated. The magic system apart from the Guardians
felt underbaked, but the politics had more depth than a lot of fantasy
novels. If you want the truly complex and twisty politics you would get
from one of Guy Gavriel Kay's historical rewrites, you will come away
disappointed, but it was good enough for me. And I did enjoy the Raven.
Respect, that's all we demand. Recognition of our magnificence.
Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood
sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask
for so little.
Followed by an as-yet untitled sequel that I hope will materialize.
Rating: 7 out of 10
(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)
When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted - technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted message platform built on Rust with (Bitcoin style) encryption, whole new architecture. Maybe this time they've got it right?
tl;dr - no. Use Signal. Twitter can probably obtain your private keys, and admit that they can MITM you and have full access to your metadata.
The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].
That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.
Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.
But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts do I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. You aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.
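To make that arithmetic concrete, here is the back-of-envelope version of the attack cost. The per-guess time is the figure measured above; the core count is an arbitrary illustration, not a measured number:

```python
# Back-of-envelope cost of brute-forcing the PIN once an attacker
# holds the encrypted key material and the Argon2id parameters.
pin_space = 10 ** 4          # 10,000 possible 4-digit PINs
seconds_per_guess = 0.2      # Argon2id derivation time measured above
cores = 16                   # hypothetical parallelism, for illustration

worst_case_seconds = pin_space * seconds_per_guess / cores
print(worst_case_seconds)    # 125.0 seconds, i.e. about two minutes
```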
Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does seem to be making use of this: it uses three backends and requires data from at least two, but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)
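The sharding idea is threshold secret sharing: any k of n shares recover the key, fewer reveal nothing. As a generic illustration of the underlying concept (textbook Shamir sharing, not Juicebox's actual implementation):

```python
import random

# Textbook Shamir k-of-n secret sharing over a prime field -- a sketch
# of the threshold-recovery idea only, NOT Juicebox's real protocol.
P = 2**127 - 1  # a Mersenne prime, large enough for a wrapped key

def split(secret, n=3, k=2):
    """Evaluate a random degree-(k-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With n=3 and k=2 this mirrors the "three backends, any two suffice" arrangement: any two shares reconstruct the secret, while a single compromised backend learns nothing.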
On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.
But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party and there's no out of band mechanism to do that or verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who and when you're messaging even if they haven't bothered to break your private key, so there's a lot of metadata leakage.
Signal doesn't have these shortcomings. Use Signal.
[1] I'll respect their name change once Elon respects his daughter
[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings
[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys
[Photo: Dietmar Rabich, Cape Town (ZA), Sea Point, Nachtansicht, CC BY-SA 4.0]
This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.
Wikimedia Cloud VPS is a service offered by the Wikimedia
Foundation, built using OpenStack and managed by the Wikimedia Cloud Services
team. It provides cloud computing resources for projects related to the
Wikimedia movement, including virtual machines, databases, storage,
Kubernetes, and DNS.
A few weeks ago, in April 2025,
we were finally able to introduce IPv6 to
the cloud virtual network, enhancing the platform's scalability, security, and future-readiness. This is a major
milestone, many years in the making, and serves as an excellent point to take a moment to reflect on the road that got
us here.
There were definitely a number of challenges that needed to be addressed before we could get into IPv6. This post covers the journey to this
implementation.
The Wikimedia Foundation was an early adopter of the OpenStack technology, and the original OpenStack deployment in the
organization dates back to 2011. At that time, IPv6 support was still nascent and had limited implementation across
various OpenStack components.
In 2012, the Wikimedia cloud users formally requested IPv6 support.
When Cloud VPS was originally deployed, we had set up the network following some of the upstream-recommended patterns:
nova-networks as the engine in charge of the software-defined virtual network
using a flat network topology: all virtual machines would share the same network
using a physical VLAN in the datacenter
using Linux bridges to make this physical datacenter VLAN available to virtual machines
using a single virtual router as the edge network gateway, also executing a global egress NAT (barring some
exceptions, using what was called the dmz_cidr mechanism)
In order for us to be able to implement IPv6 in a way that aligned with our architectural goals and operational
requirements, pretty much all the elements in this list would need to change. First of all, we needed to migrate from
nova-networks into Neutron,
a migration effort that started in 2017.
Neutron was the more modern component to implement software-defined networks in OpenStack. To facilitate this
transition, we made the strategic decision to backport certain functionalities from nova-networks into Neutron,
specifically the dmz_cidr mechanism and some egress NAT capabilities.
Once in Neutron, we started to think about IPv6. In 2018 there was an initial attempt to decide on the network CIDR
allocations that Wikimedia Cloud Services would have. This initiative encountered unforeseen challenges
and was subsequently put on hold. We focused on removing the previously
backported nova-networks patches from Neutron.
Between 2020 and 2021, we initiated another
significant network refresh.
We were able to introduce the cloudgw project, as part of a larger effort to rework the Cloud VPS edge network. The new
edge routers allowed us to drop all the custom backported patches we had in Neutron from the nova-networks era,
unblocking further progress. It is worth mentioning that the cloudgw router uses nftables as its firewalling and NAT engine.
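An egress NAT of the kind described can be expressed in a few lines of nftables. The ruleset below is entirely hypothetical (the addresses, table name and exception range are invented for illustration and are not the real Wikimedia configuration):

```
table ip cloudgw {
	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		# NAT egress traffic from the cloud's private range, with a
		# dmz_cidr-style exception for internal destinations.
		ip saddr 172.16.0.0/21 ip daddr != 10.0.0.0/8 snat to 185.15.56.1
	}
}
```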
A pivotal decision in 2022 was to
expose the OpenStack APIs to the internet, which
crucially enabled infrastructure management via OpenTofu. This was key in the IPv6 rollout as will be explained later.
Before this, management was limited to Horizon (the OpenStack graphical interface) or the command-line interface
accessible only from internal control servers.
Later, in 2023, following the OpenStack project's announcement of the deprecation of the neutron-linuxbridge-agent, we
began to seriously consider migrating to the neutron-openvswitch-agent.
This transition would, in turn, simplify the enablement of tenant networks, a feature allowing each OpenStack project
to define its own isolated network, rather than all virtual machines sharing a single flat network.
Once we replaced neutron-linuxbridge-agent with neutron-openvswitch-agent, we were ready to migrate virtual machines to
VXLAN. Demonstrating perseverance, we decided to execute the VXLAN migration in conjunction with the IPv6 rollout.
We prepared and tested several things, including the rework of the edge
routing to be based on BGP/OSPF instead of static routing. In 2024 we were ready for the initial attempt to deploy
IPv6, which failed for unknown reasons. There was a full network outage and
we immediately reverted the changes. This quick rollback was feasible due to
our adoption of OpenTofu: deploying IPv6 had
been reduced to a single code change within our repository.
We started an investigation, corrected a few issues, and
increased our network functional testing coverage before trying again. One
of the problems we discovered was that Neutron would enable the enable_snat configuration flag for our main router
when adding the new external IPv6 address.
Finally, in April 2025,
after many years in the making,
IPv6 was successfully deployed.
Compared to the network from 2011, we would have:
Neutron as the engine in charge of the software-defined virtual network
Ready to use tenant-networks
Using a VXLAN-based overlay network
Using neutron-openvswitch-agent to provide networking to virtual machines
A modern and robust edge network setup
Over time, the WMCS team has skillfully navigated numerous challenges to ensure our service offerings consistently meet
high standards of quality and operational efficiency. Often engaging in multi-year planning strategies, we have enabled
ourselves to set and achieve significant milestones.
The successful IPv6 deployment stands as further testament to the team's dedication and hard work over the years. I
believe we can confidently say that the 2025 Cloud VPS represents its most advanced and capable iteration to date.
This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.
DeepSeek R1, the new entrant to the Large Language Model wars, has created quite a splash over the last few weeks. Its entrance into a space dominated by the Big Corps, while pursuing asymmetric and novel strategies, has been a refreshing eye-opener.

GPT AI improvement was starting to show signs of slowing down, and has been observed to be reaching a point of diminishing returns as it runs out of the data and compute required to train and fine-tune increasingly large models. This has turned the focus towards building "reasoning" models that are post-trained through reinforcement learning, using techniques such as inference-time and test-time scaling and search algorithms to make the models appear to think and reason better. OpenAI's o1-series models were the first to achieve this successfully with inference-time scaling and Chain-of-Thought reasoning.
Intelligence as an emergent property of Reinforcement Learning (RL)

Reinforcement Learning (RL) has been successfully used in the past by Google's DeepMind team to build highly intelligent and specialized systems where intelligence is observed as an emergent property of a rewards-based training approach, yielding achievements like AlphaGo (see my post on it here - AlphaGo: a journey to machine intuition). DeepMind went on to build a series of Alpha* projects that achieved many notable feats using RL:
AlphaGo, which defeated the world champion Lee Sedol in the game of Go
AlphaZero, a generalized system that learned to play games such as Chess, Shogi and Go without human input
AlphaStar, achieved high performance in the complex real-time strategy game StarCraft II.
AlphaFold, a tool for predicting protein structures which significantly advanced computational biology.
AlphaCode, a model designed to generate computer programs, performing competitively in coding challenges.
AlphaDev, a system developed to discover novel algorithms, notably optimizing sorting algorithms beyond human-derived methods.
All of these systems achieved mastery in their own areas through self-training/self-play, optimizing and maximizing the cumulative reward over time by interacting with their environment, where intelligence was observed as an emergent property of the system.

The RL feedback loop

RL mimics the process through which a baby learns to walk: through trial, error and first principles.
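That feedback loop can be illustrated with a toy example. The sketch below (a generic epsilon-greedy bandit, nothing to do with DeepSeek's actual training code) shows an agent that knows nothing about its three actions discovering the best one purely from reward signals:

```python
import random

random.seed(0)

# Toy RL loop: three actions with hidden reward probabilities; the
# agent learns which pays best purely from trial-and-error reward.
TRUE_MEANS = [0.2, 0.5, 0.9]   # hidden reward probability of each action
estimates = [0.0, 0.0, 0.0]    # agent's learned value estimates
counts = [0, 0, 0]

def pull(action):
    """Environment step: reward 1 with the action's hidden probability."""
    return 1.0 if random.random() < TRUE_MEANS[action] else 0.0

for _ in range(2000):
    # Explore 10% of the time, otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    reward = pull(action)
    counts[action] += 1
    # Incremental mean update of the action-value estimate.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(3), key=lambda a: estimates[a])
```

After 2000 interactions the agent settles on the highest-paying action without ever being told the reward probabilities, which is the emergent-from-feedback behaviour the post describes, in miniature.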
R1 model training pipeline

At a technical level, DeepSeek-R1 leverages a combination of Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT) for its training pipeline. Using RL and DeepSeek-v3, an interim reasoning model called DeepSeek-R1-Zero was built purely on RL without relying on SFT; it demonstrated superior reasoning capabilities that matched the performance of OpenAI's o1 on certain benchmarks such as AIME 2024. The model was, however, affected by poor readability and language-mixing, and is only an interim reasoning model built on RL principles and self-evolution. DeepSeek-R1-Zero was then used to generate SFT data, which was combined with supervised data from DeepSeek-v3 to re-train the DeepSeek-v3-Base model. The new DeepSeek-v3-Base model then underwent additional RL with prompts and scenarios to produce the DeepSeek-R1 model. The R1 model was then used to distill a number of smaller open-source models such as Llama-8b and Qwen-7b/14b, which outperformed bigger models by a large margin, effectively making the smaller models more accessible and usable.
Key contributions of DeepSeek-R1
RL without the need for SFT for emergent reasoning capabilities
R1 was the first open research project to validate the efficacy of RL directly on the base model without relying on SFT as a first step, which resulted in the model developing advanced reasoning capabilities purely through self-reflection and self-verification. Although its language capabilities degraded during the process, its Chain-of-Thought (CoT) capabilities for solving complex problems were later used for further RL on the DeepSeek-v3-Base model, which became R1. This is a significant contribution back to the research community. The analysis below of DeepSeek-R1-Zero and OpenAI o1-0912 shows that it is viable to attain robust reasoning capabilities purely through RL alone, which can be further augmented with other techniques to deliver even better reasoning performance. (Source: https://github.com/deepseek-ai/DeepSeek-R1) It is quite interesting that the application of RL gives rise to seemingly human capabilities of "reflection" and arriving at "aha" moments, causing it to pause, ponder and focus on a specific aspect of the problem, resulting in emergent capabilities to problem-solve as humans do.
Model distillation
DeepSeek-R1 also demonstrated that larger models can be distilled into smaller ones, which makes advanced capabilities accessible in resource-constrained environments, such as your laptop. While it's not possible to run a 671b model on a stock laptop, you can still run a distilled 14b model, which still performs better than most publicly available models. This brings intelligence closer to the edge, allowing faster inference at the point of experience (such as on a smartphone, or on a Raspberry Pi), which paves the way for more use cases and possibilities for innovation. (Source: https://github.com/deepseek-ai/DeepSeek-R1) Distilled models are very different from R1, which is a massive model with a completely different architecture than the distilled variants; they are not directly comparable in terms of capability, but are instead built to be smaller and more efficient for constrained environments. This technique of distilling a larger model's capabilities down to a smaller model for portability, accessibility, speed, and cost will open up many possibilities for applying artificial intelligence in places where it would otherwise not have been possible. This is another key contribution from DeepSeek, which I believe has even further potential for the democratization and accessibility of AI.
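The core of distillation is training the smaller model to match the larger model's output distribution. A minimal sketch of the classic knowledge-distillation objective (a generic textbook formulation, not DeepSeek's exact recipe):

```python
import math

def softmax(logits, temperature):
    """Temperature-softened probability distribution over logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions: the classic
    distillation loss. Minimizing it pushes the student's outputs
    towards the teacher's, including the "soft" relative preferences
    among wrong answers that hard labels throw away."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge; in practice this term is averaged over a large corpus of teacher-generated outputs.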
Why is this moment so significant?

DeepSeek-R1 was a pivotal contribution in many ways.
The contributions to the state-of-the-art and the open research help move the field forward, where everybody benefits, not just a few highly funded AI labs building the next billion-dollar model.
Open-sourcing and making the model freely available follows an asymmetric strategy to the prevailing closed nature of much of the model-sphere of the larger players. DeepSeek should be commended for making their contributions free and open.
It reminds us that it's not just a one-horse race, and it incentivizes competition, which has already resulted in OpenAI o3-mini, a cost-effective reasoning model which now shows the Chain-of-Thought reasoning. Competition is a good thing.
We stand at the cusp of an explosion of small models that are hyper-specialized and optimized for a specific use case, and that can be trained and deployed cheaply for solving problems at the edge. It raises a lot of exciting possibilities and is why DeepSeek-R1 is one of the most pivotal moments in tech history.
Moose Madness is a sapphic shifter romance novella (on the short
side for a novella) by the same author as Wolf Country. It was originally published in the anthology
Her Wild Soulmate, which appears to be very out of print.
Maggie (she hates the nickname Moose) grew up in Moose Point, a tiny
fictional highway town in (I think) Alaska. (There is, unsurprisingly, an
actual Moose Point in Alaska, but it's a geographic feature and not a
small town.) She stayed after graduation and is now a waitress in the
Moose Point Pub. She's also a shifter; specifically, she is a moose
shifter like her mother, the town mayor. (Her father is a fox shifter.) As
the story opens, the annual Moose Madness festival is about to turn the
entire town into a blizzard of moose kitsch.
Fiona Barton was Maggie's nemesis in high school. She was the cool,
popular girl, a red-headed wolf shifter whose friend group teased and
bullied awkward and uncoordinated Maggie mercilessly. She was also
Maggie's impossible crush, although the very idea seemed laughable. Fi
left town after graduation, and Maggie hadn't thought about her for years.
Then she walks into Moose Point Pub dressed in biker leathers, with
piercings and one side of her head shaved, back in town for a wedding in
her pack.
Much to the shock of both Maggie and Fi, they realize that they're
soulmates as soon as their eyes meet. Now what?
If you thought I wasn't going to read the moose and wolf shifter romance
once I knew it existed, you do not know me very well. I have been saving
it for when I needed something light and fun. It seemed like the right
palate cleanser after a very
disappointing book.
Moose Madness takes place in the same universe as Wolf
Country, which means there are secret shifters all over Alaska (and
presumably elsewhere) and they have the strong magical version of love at
first sight. If one is a shifter, one knows immediately as soon as one
locks eyes with one's soulmate and this feeling is never wrong. This is
not my favorite romance trope, but if I get moose shifter romance out of
it, I'll endure.
As you can tell from the setup, this is enemies-to-lovers, but the whole
soulmate thing shortcuts the enemies to lovers transition rather abruptly.
There's a bit of apologizing and air-clearing at the start, but most of
the novella covers the period right after enemies have become lovers and
are getting to know each other properly. If you like that part of the arc,
you will probably enjoy this, but be warned that it's slight and somewhat
obvious. There's a bit of tension from protective parents and annoying
pack mates, but it's sorted out quickly and easily. If you want the
characters to work for the relationship, this is not the novella for you.
It's essentially all vibes.
I liked the vibes, though! Maggie is easy to like, and Fi does a solid job
apologizing. I wish there was quite a bit more moose than we get, but
Delaney captures the combination of apparent awkwardness and raw power of
a moose and has a good eye for how beautiful large herbivores can be. This
is not the sort of book that gives a moment's thought to wolves being
predators and moose being, in at least some sense, prey animals, so if you
are expecting that to be a plot point, you will be disappointed. As with
Wolf Country, Delaney elides most of the messier and more ethically
questionable aspects of sometimes being an animal.
This is a sweet, short novella about two well-meaning and fundamentally
nice people who are figuring out that middle school and high school are
shitty and sometimes horrible but don't need to define the rest of one's
life. It's very forgettable, but it made me smile, and it was indeed a
good palate cleanser.
If you are, like me, the sort of person who immediately thought "oh, I
have to read that" as soon as you saw the moose shifter romance, keep your
expectations low, but I don't think this will disappoint. If you are not
that sort of person, you can safely miss this one.
Rating: 6 out of 10
Number Go Up is a cross between a history and a first-person
account of investigative journalism around the cryptocurrency bubble and
subsequent collapse in 2022. The edition I read has an afterword from June
2024 that brings the story up to date with Sam Bankman-Fried's trial and a
few other events. Zeke Faux is a reporter for Bloomberg News and a fellow
of New
America.
Last year, I read Michael Lewis's Going
Infinite, a somewhat-sympathetic book-length profile of Sam
Bankman-Fried that made a lot of people angry. One of the common refrains
at the time was that people should read Number Go Up instead, and
since I'm happy to read more about the absurdities of the cryptocurrency
world, I finally got around to reading the other big crypto book of 2023.
This is a good book, with some caveats that I am about to explain at
absurd length. If you want a skeptical history of the cryptocurrency
bubble, you should read it. People who think that it's somehow in
competition with Michael Lewis's book or who think the two books disagree
(including Faux himself) have profoundly missed the point of Going
Infinite. I agree with Matt Levine: Both of these books are worth your
time if this is the sort of thing you like reading about. But (much) more
on Faux's disagreements with Lewis later.
The frame of Number Go Up is Faux's quixotic quest to prove that
Tether is a fraud. To review this book, I therefore need to briefly
explain what Tether is. This is only the first of many extended
digressions.
One natural way to buy cryptocurrency would be to follow the same pattern
as a stock brokerage account. You would deposit some amount of money into
the account (or connect the brokerage account to your bank account), and
then exchange money for cryptocurrency or vice versa, using bank transfers
to put money in or take it out. However, there are several problems with
this. One is that swapping cryptocurrency for money is awkward and
sometimes expensive. Another is that holding people's investment money for
them is usually highly regulated, partly for customer safety but also to
prevent money laundering. These are often called KYC laws (Know Your
Customer), and the regulation-hostile world of cryptocurrency didn't want
to comply with them.
Tether is a stablecoin, which means that the company behind Tether
attempts to guarantee that one Tether is always worth exactly one US
dollar. It is not a speculative investment like Bitcoin; it's a
cryptocurrency substitute for dollars. People exchange dollars for Tether
to get their money into the system and then settle all of their subsequent
trades in Tether, only converting the Tether back to dollars when they
want to take their money out of cryptocurrency entirely. In essence,
Tether functions like the cash reserve in a brokerage account: Your Tether
holdings are supposedly guaranteed to be equivalent to US dollars, you can
withdraw them at any time, and because you can do so, you don't bother,
instead leaving your money in the reserve account while you contemplate
what new coin you want to buy.
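The fully-reserved model described above can be sketched with a toy ledger in Python (my own illustration; Tether's actual systems are not public): the issuer mints one token per dollar deposited and burns tokens on redemption, so tokens outstanding always equal dollars held in reserve.

```python
# Toy fully-reserved stablecoin ledger (illustrative sketch only).
class Stablecoin:
    def __init__(self):
        self.reserve_usd = 0.0  # dollars held by the issuer
        self.tokens = 0.0       # tokens in circulation

    def mint(self, usd):
        """Deposit dollars, receive an equal number of tokens."""
        self.reserve_usd += usd
        self.tokens += usd
        return usd

    def redeem(self, tokens):
        """Return tokens, receive an equal number of dollars."""
        if tokens > self.tokens:
            raise ValueError("cannot redeem more than outstanding tokens")
        self.tokens -= tokens
        self.reserve_usd -= tokens
        return tokens

s = Stablecoin()
s.mint(1000)
s.redeem(250)
print(s.tokens, s.reserve_usd)  # 750.0 750.0
```

As long as that invariant holds, every holder can redeem at par; Faux's suspicion is precisely that the real reserve side of this ledger contains illiquid or risky assets rather than dollars.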
As with a bank, this system rests on the assurance that one can always
exchange one Tether for one US dollar. The instant people stop believing
this is true, people will scramble to get their money out of Tether,
creating the equivalent of a bank run. Since Tether is not a regulated
bank or broker and has no deposit insurance or strong legal protections,
the primary defense against a run on Tether is Tether's promise that they
hold enough liquid assets to be able to hand out dollars to everyone who
wants to redeem Tether. (A secondary defense that I wish Faux had
mentioned is that Tether limits redemptions to registered accounts
redeeming more than $100,000, which is a tiny fraction of the people who
hold Tether, but for most purposes this doesn't matter because that
promise is sufficient to maintain the peg with the dollar.)
Faux's firmly-held belief throughout this book is that Tether is lying. He
believes they do not have enough money to redeem all existing Tether
coins, and that rather than backing every coin with very safe liquid
assets, they are using the dollars deposited in the system to make
illiquid and risky investments.
Faux never finds the evidence that he's looking for, which makes this
narrative choice feel strange. His theory was tested when there was a run
on Tether following the collapse of the Terra stablecoin. Tether passed
without apparent difficulty, redeeming $16B or about 20% of the
outstanding Tether coins. This doesn't mean Faux is wrong; being able to
redeem 20% of the outstanding tokens is very different from being able to
redeem 100%, and Tether has been fined for lying about its reserves. But
Tether is clearly more stable than Faux thought it was, which makes the
main narrative of the book weirdly unsatisfying. If he admitted he might
be wrong, I would give him credit for showing his work even if it didn't
lead where he expected, but instead he pivots to focusing on Tether's role
in money laundering without acknowledging that his original theory took a
serious blow.
In Faux's pursuit of Tether, he wanders through most of the other elements
of the cryptocurrency bubble, and that's the strength of this book. Rather
than write Number Go Up as a traditional history, Faux chooses to
closely follow his own thought processes and curiosity. This has the
advantage of giving Faux an easy and natural narrative, something that
non-fiction books of this type can struggle with, and it lets Faux show
how confusing and off-putting the cryptocurrency world is to an outsider.
The best parts of this book were the parts unrelated to Tether. Faux
provides an excellent summary of the
Axie Infinity
speculative bubble and even traveled to the Philippines to interview
people who were directly affected. He then wandered through the bizarre
world of NFTs, and his first-hand account of purchasing one (specifically
a Mutant Ape)
to get entrance to a party (which sounded like a miserable experience I
would pay money to get out of) really drives home how sketchy and weird
cryptocurrency-related software and markets can be. He also went to El
Salvador to talk to people directly about the country's supposed embrace
of Bitcoin, and there's no substitute for that type of reporting to show
how exaggerated and dishonest the claims of cryptocurrency adoption are.
The disadvantage of this personal focus on Faux himself is that it
sometimes feels tedious or sensationalized. I was much less interested in
his unsuccessful attempts to interview the founder of Tether than Faux
was, and while the digression into forced labor compounds in Cambodia
devoted to pig
butchering scams was informative (and horrific), I think Faux leaned too
heavily on an indirect link to Tether. His argument is that cryptocurrency
enables a type of money laundering that is particularly well-suited to
supporting scams, but both scams and this type of economic slavery existed
before cryptocurrency and will exist afterwards. He did not make a very
strong case that Tether was uniquely valuable as a money laundering
service, as opposed to a currently useful tool that would be replaced with
some other tool should it go away.
This part of the book is essentially an argument that money laundering is
bad because it enables crime, and sure, to an extent I agree. But if
you're going to put this much emphasis on the evils of money laundering, I
think you need to at least acknowledge that many people outside the United
States do not want to give the US government, which is often openly hostile to
them, veto power over their financial transactions. Faux does not.
The other big complaint I have with this book, and with a lot of other
reporting on cryptocurrency, is that Faux is sloppy with the term "Ponzi
scheme." This is going to sound like nit-picking, but I think this
sloppiness matters because it may obscure an ongoing shift in
cryptocurrency markets.
A Ponzi scheme is not any speculative bubble. It is a very specific type
of fraud in which investors are promised improbably high returns at very
low risk and with safe principal. These returns are paid out, not via
investment in some underlying enterprise, but by taking the money from new
investments and paying it to earlier investors. Ponzi schemes are doomed
because satisfying their promises requires a constantly increasing flow of
new investors. Since the population of the world is finite, all Ponzi
schemes are mathematically guaranteed to eventually fail, often in
a sudden death spiral of ever-increasing promises to lure new investors
when the investment stream starts to dry up.
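The doomed arithmetic can be made concrete in a few lines of Python (a toy model with made-up numbers, not anything from the book): every payout is owed again next period, so the fresh money required grows geometrically.

```python
# Toy Ponzi model: 10% promised per period, paid only from new deposits.
rate = 0.10
liabilities = 100.0   # money currently owed to investors
needed = []           # fresh inflow required each period just to pay out
for period in range(5):
    payout = liabilities * rate   # promised return due this period
    liabilities += payout         # the payout is "reinvested" and owed again
    needed.append(round(payout, 2))
print(needed)  # [10.0, 11.0, 12.1, 13.31, 14.64]
```

Since the required inflow compounds while the pool of potential investors is finite, collapse is only a matter of time.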
There are some Ponzi schemes in cryptocurrency, but most practices that
are called Ponzi schemes are not. For example, Faux calls Axie
Infinity a Ponzi scheme, but it was missing the critical elements of
promised safe returns and fraudulently paying returns from the investments
of later investors. It was simply a speculative bubble that people bought
into on the assumption that its price would increase, and like any
speculative bubble those who sold before the peak made money at the
expense of those who bought at the peak.
The reason why this matters is that Ponzi schemes are a self-correcting
problem. One can decry the damage caused when they collapse, but one can
also feel the reassuring certainty that they will inevitably collapse and
prove the skeptics correct. The same is not true of speculative assets in
general. You may think that the lack of an underlying economic
justification for prices means that a speculative bubble is guaranteed to
collapse eventually, but in the famous words of Gary Schilling, "markets
can remain irrational a lot longer than you and I can remain solvent."
One of the people Faux interviews explains this distinction to him
directly:
Rong explained that in a true Ponzi scheme, the organizer would have
to handle the "fraud money." Instead, he gave the sneakers away and
then only took a small cut of each trade. "The users are trading
between each other. They are not going through me, right?" Rong said.
Essentially, he was arguing that by downloading the Stepn app and
walking to earn tokens, crypto bros were Ponzi'ing themselves.
Faux is openly contemptuous of this response, but it is technically
correct. Stepn is not a Ponzi scheme; it's a speculative bubble. There are
no guaranteed returns being paid out of later investments and no promise
that your principal is safe. People are buying in at a price that you may
consider irrational, but Stepn never promised you would get your money
back, let alone make a profit, and therefore it doesn't have the
exponential progression of a Ponzi scheme. One can argue that this is a
distinction without a moral difference, and personally I would agree, but
it matters immensely if one is trying to analyze the future of
cryptocurrencies.
Schemes as transparently unstable as Stepn (which gives you coins for
exercise and then tries to claim those coins have value through some
vigorous hand-waving) are nearly as certain as Ponzi schemes to eventually
collapse. But it's also possible to create a stable business around
allowing large numbers of people to regularly lose money to small numbers
of sophisticated players who are collecting all of the winnings. It's
called a poker room at a casino, and no one thinks poker rooms are Ponzi
schemes or are doomed to collapse, even though nearly everyone who plays
poker will lose money.
This is the part of the story that I think Faux largely missed, and which
Michael Lewis highlights in Going Infinite. FTX was a legitimate
business that made money (a lot of money) off of trading fees, in much the
same way that a casino makes money off of poker rooms. Lots of people want
to bet on cryptocurrencies, similar to how lots of people want to play
poker. Some of those people will win; most of those people will lose. The
casino doesn't care. Its profit comes from taking a little bit of each
pot, regardless of who wins. Bankman-Fried also speculated with
customer funds, and therefore FTX collapsed, but there is no inherent
reason why the core exchange business cannot be stable if people continue
to want to speculate in cryptocurrencies. Perhaps people will get tired of
this method of gambling, but poker has been going strong for 200 years.
It's also important to note that although trading fees are the most
obvious way to be a profitable cryptocurrency casino, they're not the only
way. Wall Street firms specialize in finding creative ways to take a cut
of every financial transaction, and many of those methods are more
sophisticated than fees. They are so good at this that buying and selling
stock through trading apps like Robinhood is free. The money to run the
brokerage platform comes from companies that are delighted to pay for the
opportunity to handle stock trades by day traders with a phone app. This
is not, as some conspiracy theories would have you believe, due to some
sort of fraudulent price manipulation. It is because the average person
with a Robinhood phone app is sufficiently unsophisticated that companies
that have invested in complex financial modeling will make a steady profit
taking the other side of their trades, mostly because of the spread (the
difference between offered buy and sell prices).
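The spread mechanism can be illustrated with toy arithmetic (hypothetical numbers): the firm buys from retail sellers at the bid, sells to retail buyers at the ask, and keeps the difference on every matched share regardless of which way the price moves.

```python
# Toy bid/ask spread arithmetic (hypothetical numbers), in integer cents.
bid_cents, ask_cents = 9998, 10002    # prices the market maker quotes
spread_cents = ask_cents - bid_cents  # 4 cents captured per share
matched_shares = 1_000_000            # retail buys matched against sells
profit_dollars = spread_cents * matched_shares / 100
print(profit_dollars)  # 40000.0
```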
Faux is so caught up in looking for Ponzi schemes and fraud that I think
he misses this aspect of cryptocurrency's transformation. Wall Street
trading firms aren't piling into cryptocurrency because they want to do
securities fraud. They're entering this market because there seems to be
persistent demand for this form of gambling, cryptocurrency markets reward
complex financial engineering, and running a legal casino is a profitable
business model.
Michael Lewis appears as a character in this book, and Faux portrays him
quite negatively. The root of this animosity appears to stem from a
cryptocurrency conference in the Bahamas that Faux attended. Lewis
interviewed Bankman-Fried on stage, and, from Faux's account, his
questions were fawning and he praised cryptocurrencies in ways that Faux
is certain he knew were untrue. From that point on, Faux treats Lewis as
an apologist for the cryptocurrency industry and for Sam Bankman-Fried
specifically.
I think this is a legitimate criticism of Lewis's methods of getting close
to the people he wants to write about, but I think Faux also makes the
common mistake of assuming Lewis is a muckraking reporter like himself.
This has never been what Lewis is interested in. He writes about people he
finds interesting and that he thinks a reader will also find interesting.
One can legitimately accuse him of being credulous, but that's partly
because he's not even trying to do the same thing Faux is doing. He's not
trying to judge; he's trying to understand.
This shows when it comes to the parts of this book about Sam
Bankman-Fried. Faux's default assumption is that everyone involved in
cryptocurrency is knowingly doing fraud, and a lot of his research is
looking for evidence to support the conclusion he had already reached. I
don't think there's anything inherently wrong with that approach: Faux is
largely, although not entirely, correct, and this type of hostile
journalism is incredibly valuable for society at large. Upton Sinclair
didn't start writing The Jungle with an open mind about the
meat-packing industry. But where Faux and Lewis disagree on
Bankman-Fried's motivations and intentions, I think Lewis has the much
stronger argument.
Faux's position is that Bankman-Fried always intended to steal people's
money through fraud, perhaps to fund his effective altruism donations, and
his protestations that he made mistakes and misplaced funds are obvious
lies. This is an appealing narrative if one is looking for a simple
villain, but Faux's evidence in support of this is weak. He mostly argues
through stereotype: Bankman-Fried was a physics major and a Jane Street
trader and therefore could not possibly be the type of person to misplace
large amounts of money or miscalculate risk.
If he wants to understand how that could be possible, he could read
Going Infinite? I find it completely credible that someone with
what appears to be uncontrolled, severe ADHD could be adept at trading and
calculating probabilities and yet also misplace millions of dollars of
assets because he wasn't thinking about them and therefore they stopped
existing.
Lewis made a lot of people angry by being somewhat sympathetic to someone
few people wanted to be sympathetic towards, but Faux (and many others)
are also misrepresenting his position. Lewis agrees that Bankman-Fried
intentionally intermingled customer funds with his hedge fund and agrees
that he lied about doing this. His only contention is that Bankman-Fried
didn't do this to steal the money; instead, he invested customer money in
risky bets that he thought would pay off. In support of this, Lewis made a
prediction that was widely scoffed at, namely that much less of FTX's
money was missing than was claimed, and that likely most or all of it
would be found.
And, well, Lewis was basically correct? The FTX bankruptcy is now expected
to recover considerably more than the amount of money owed to creditors.
Faux argues that this is only because the bankruptcy clawed back assets
and cryptocurrencies have gone up considerably since the FTX bankruptcy,
and therefore that the lost money was just replaced by unexpected windfall
profits on other investments, but I don't think this point is as strong as
he thinks it is. Bankman-Fried lost money on some of what he did with
customer funds, made money on other things, and if he'd been able to
freeze withdrawals for the year that the bankruptcy froze them, it does
appear most of the money would have been recoverable. This does not make
what he did legal or morally right, but no one is arguing that, only that
he didn't intentionally steal money for his own personal gain or for
effective altruism donations. And on that point, I don't think Faux is
giving Lewis's argument enough credit.
I have a lot of complaints about this book because I know way more about this
topic than anyone probably should. I think Faux missed the
plot in a couple of places, and I wish someone would write a book about
where cryptocurrency markets are currently going. (Matt Levine's
Money Stuff newsletter is quite good, but it's about all sorts of
things other than cryptocurrency and isn't designed to tell a coherent
story.) But if you know less about cryptocurrency and just want to hear
the details of the run-up to the 2022 bubble, this is a great book for
that. Faux is writing for people who are already skeptical and is not
going to convince people who are cryptocurrency true believers, but that's
fine. The details are largely correct (and extensively footnoted) and will
satisfy most people's curiosity.
Lewis's Going Infinite is a better book, though. It's not the same
type of book at all, and it will not give you the broader overview of the
cryptocurrency world. But if you're curious about what was going through
the head of someone at the center of all of this chaos, I think Lewis's
analysis is much stronger than Faux's. I'm happy I read both books.
Rating: 8 out of 10
This is the story of an investigation conducted by Jochen Sprickerhof, Helmut
Grohne, and myself. It was true teamwork, and we would not have reached the
bottom of the issue working individually. We think you will find it as
interesting and fun as we did, so here is a brief writeup. A few of the steps
mentioned here took several days, others just a few minutes. What is described
as a natural progression of events did not always look very obvious at the
moment at all.
Let us go through the Six Stages of Debugging together.
Stage 1: That cannot happen
Official Debian GCC builds start failing on multiple architectures in late
November.
The build error happens on the build servers when running the testsuite, but we
know this cannot happen. GCC builds are not meant to fail in case of testsuite
failures! Return codes are not making the build fail, make is being called
with -k; it just cannot happen.
In fact, a lot of the GCC tests are always failing, and an extensive log of
the results is posted to the debian-gcc
mailing list, but the packages always build fine regardless.
Stage 2: That does not happen on my machine
Building on my machine running Bookworm is just fine. The Build Daemons run
Bookworm and use a Sid chroot for the build environment, just like I do. Same
kernel.
The only obvious difference between my setup and the Debian buildds is that I
am using sbuild 0.85.0 from bookworm, and the buildds have 0.86.3~bpo12+1
from bookworm-backports. Trying again with 0.86.3~bpo12+1, the build fails on
my system too. The build daemons were updated to the bookworm-backports version
of sbuild at some point in late November. Ha.
Stage 3: That should not happen
There are quite a few sbuild versions in between 0.85.0 and 0.86.3~bpo12+1,
but looking at recent sbuild bugs shows that
sbuild 0.86.0 was
breaking "quite a number of packages". Indeed, with 0.86.0 the build still
fails. Trying the version immediately before, 0.85.11, the build finishes
correctly. This took more time than it sounds: one run including the tests
takes several hours. We need a way to shorten this somehow.
The Debian packaging of GCC allows specifying which languages you may want to
skip, and by default it builds Ada, Go, C, C++, D, Fortran, Objective
C, Objective C++, M2, and Rust. When running the tests sequentially,
the build logs stop roughly around the tests of a runtime library for D,
libphobos. So can we still reproduce the failure by skipping everything except
for D? With
DEB_BUILD_OPTIONS=nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust
the build still fails, and it fails faster than before. Several minutes, not
hours. This is progress, and time to file a bug. The report contains massive
spoilers, so no link. :-)
Stage 4: Why does that happen?
Something is causing the build to end prematurely. It's not the OOM killer, and
the kernel does not have anything useful to say in the logs. Can it be that the
D language tests are sending signals to some process, and that is what's
killing make? We start tracing signals sent with bpftrace by writing the
following script, signals.bt:
tracepoint:signal:signal_generate
{
  printf("%s (PID: %d) sent signal %d to PID %d\n", comm, pid, args->sig, args->pid);
}
And executing it with sudo bpftrace signals.bt.
The build takes its sweet time, and it fails. Looking at the trace output
there's a suspicious process.exe terminating stuff.
process.exe (PID: 2868133) sent signal 15 to PID 711826
That looks interesting, but we have no clue what PID 711826 may be. Let's change
the script a bit, and trace signals received as well.
tracepoint:signal:signal_generate
{
  printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}
tracepoint:signal:signal_deliver
{
  printf("PID %d (%s) received signal %d\n", pid, comm, args->sig);
}
The working version of sbuild was using dumb-init, whereas the new one
features a little init in perl. We patch the current version of sbuild by
making it use dumb-init instead, and trace two builds: one with the perl init,
one with dumb-init.
Here are the signals observed when building with dumb-init.
PID 3590011 (process.exe) sent signal 2 to 3590014
PID 3590014 (sleep) received signal 9
PID 3590011 (process.exe) sent signal 15 to 3590063
PID 3590063 (std.process tem) received signal 9
PID 3590011 (process.exe) sent signal 9 to 3590065
PID 3590065 (std.process tem) received signal 9
And this is what happens with the new init in perl:
PID 3589274 (process.exe) sent signal 2 to 3589291
PID 3589291 (sleep) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589338
PID 3589338 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 9 to 3589340
PID 3589340 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589341
PID 3589274 (process.exe) sent signal 15 to 3589323
PID 3589274 (process.exe) sent signal 15 to 3589320
PID 3589274 (process.exe) sent signal 15 to 3589274
PID 3589274 (process.exe) received signal 9
PID 3589341 (sleep) received signal 9
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589320
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589323
There are a few additional SIGTERMs being sent when using the perl init; that's
helpful. At this point we are fairly convinced that process.exe is worth
additional inspection. The
source
code of process.d shows something interesting:
1221 @system unittest
1222 {
[...]
1247     auto pid = spawnProcess(["sleep", "10000"],
[...]
1260     // kill the spawned process with SIGINT
1261     // and send its return code
1262     spawn((shared Pid pid) {
1263         auto p = cast() pid;
1264         kill(p, SIGINT);
So yes, there's our sleep and the SIGINT (signal 2) right in the unit tests
of process.d, just like we have observed in the bpftrace output.
Can we study the behavior of process.exe in isolation, separately from the
build? Indeed we can. Let's take the executable from a failed build, and try
running it under /usr/libexec/sbuild-usernsexec.
First, we prepare a chroot inside a suitable user namespace:
Now we can run process.exe on its own using the perl init, and trace signals at will:
/usr/libexec/sbuild-usernsexec --pivotroot --nonet u:0:100000:65536 g:0:100000:65536 /tmp/rootfs ema /whatever -- /process.exe
We can compare the behavior of the perl init vis-a-vis the one using
dumb-init in milliseconds instead of minutes.
Stage 5: Oh, I see.
Why process.exe sends more SIGTERMs when using the perl init is now the big
question. We have a simple reproducer, so this is where using strace
becomes possible.
sudo strace --user ema --follow-forks -o sbuild-dumb-init.strace ./sbuild-usernsexec-dumb-init --pivotroot --nonet u:0:100000:65536 g:0:100000:65536 /tmp/dumbroot ema /whatever -- /process.exe
3593883 kill(-2, SIGTERM) = -1 ESRCH (No such process)
No such process. Under perl-init instead:
3593777 kill(-2, SIGTERM <unfinished ...>
The process is there under perl-init!
That is a kill with negative pid. From the kill(2) man page:
If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid.
It would have been very useful to see this kill with negative pid in the
output of bpftrace, so why didn't we? The tracepoint used,
tracepoint:signal:signal_generate, shows when signals are actually being
sent, not the syscall being called. To confirm, one can trace
tracepoint:syscalls:sys_enter_kill and see the negative PIDs, for example:
PID 312719 (bash) sent signal 2 to -312728
The obvious question at this point is: why is there no process group 2 when
using dumb-init?
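The negative-pid semantics from the man page are easy to demonstrate on their own. Here is a small Python sketch (my own illustration, separate from the investigation) that puts a child in its own process group and then signals the whole group with a negated group ID:

```python
import os
import signal
import subprocess

# Start a long-running child in its own session, and therefore in its
# own process group (pgid == child pid).
child = subprocess.Popen(["sleep", "60"], start_new_session=True)
pgid = os.getpgid(child.pid)

# kill(2) with pid < -1 delivers the signal to every process in the
# process group whose ID is -pid.
os.kill(-pgid, signal.SIGTERM)

# The child is terminated by SIGTERM: wait() reports -15.
print(child.wait())  # -15
```

With a stale sentinel like -2 standing in for a real pid, the same call pattern silently becomes kill(-2, SIGTERM), a SIGTERM to every process in group 2, which is exactly what the strace output showed.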
Stage 6: How did that ever work?
We know that process.exe sends a SIGTERM to every process in the process
group with ID 2. To find out what this process group may be, we spawn a shell
with dumb-init and observe under /proc PIDs 1, 16, and 17. With perl-init
we have 1, 2, and 17. When running dumb-init, there are a few forks before
launching the program, explaining the difference. Looking at /proc/2/cmdline
we see that it's bash, i.e. the program we are running under perl-init. When
building a package, that is dpkg-buildpackage itself.
The test is accidentally killing its own process group.
Now where does this -2 come from in the test?
2363 // Special values for _processID.
2364 enum invalid = -1, terminated = -2;
Oh. -2 is used as a special value for PID, meaning "terminated". And there's a
call to kill() later on:
2694 do s = tryWait(pid); while (!s.terminated);
[...]
2697 assertThrown!ProcessException(kill(pid));
What sets pid to terminated, you ask?
Here is tryWait:
2568 auto tryWait(Pid pid) @safe
2569 {
2570     import std.typecons : Tuple;
2571     assert(pid !is null, "Called tryWait on a null Pid.");
2572     auto code = pid.performWait(false);
Intro
I always find it amazing the opportunities I have thanks to my contributions to the Debian Project. I am happy to receive this recognition in the form of help with travel to attend events in other countries.
This year, two MiniDebConfs were scheduled for the second half of the year in Europe: the traditional edition in Cambridge in the UK, and a new edition in Toulouse in France. After weighing the difficulties and advantages of attending each of them, I decided to choose Toulouse, mainly because it was cheaper and because it was in November, giving me more time to plan the trip. I contacted the current DPL, Andreas Tille, explaining my desire to attend the event, and he kindly approved my request for Debian to pay for the tickets. Thanks again to Andreas!
MiniDebConf Toulouse 2024 was held on November 16th and 17th (Saturday and Sunday) and took place in one of the rooms of Capitole du Libre, a traditional Free Software event in the city. Before MiniDebConf, the team organized a MiniDebCamp on November 14th and 15th at a coworking space.
The whole experience promised to be incredible, and it was! From visiting a city in France for the first time, to attending a local Free Software event, and sharing four days with people from the Debian community from various countries.
Travel and the city
My plan was to leave Belo Horizonte on Monday, pass through São Paulo, and arrive in Toulouse on Tuesday night. I was going to spend the whole of Wednesday walking around the city and then participate in the MiniDebCamp on Thursday.
But the flight that was supposed to leave São Paulo in the early hours between Monday and Tuesday was cancelled due to a problem with the airplane, and I spent all of Tuesday waiting. I was rebooked on another flight that left in the evening and arrived in Toulouse on Wednesday afternoon. Even though I was very tired from the trip, I still took advantage of the end of the day to walk around the city. But it was a shame to have lost an entire day of sightseeing.
On Thursday I left early in the morning to walk around a little more before going to the MiniDebCamp venue. I walked around a lot and saw several tourist attractions. The city is really very beautiful, as they say, especially the houses and buildings made of pink bricks. I was impressed by the narrow and winding streets; at one point it seemed like I was walking through a maze. I would arrive at a corner and find five streets crossing in different directions.
The riverbank that runs through the city is very beautiful and people spend their evenings there just hanging out. There was a lot of history around there.
I stayed in an Airbnb a 25-minute walk from the coworking space and only 10 minutes from the event venue. It was a very spacious apartment that was much cheaper than a hotel.
MiniDebCamp
I arrived at the coworking space where the MiniDebCamp was being held and met up with several friends. I also met some new people, talked about the translation work we do in Brazil, and other topics.
We already knew that the organization would pay for lunch for everyone during the two days of MiniDebCamp, and at a certain point they told us that we could go to the room (which was downstairs from the coworking space) to have lunch. They set up a table with quiches, breads, charcuterie and LOTS of cheese :-) There were several types of cheese and they were all very good. I just found it a little strange because I'm not used to having cheese for lunch, but the experience was amazing anyway :-)
In the evening, we went as a group to dinner at a restaurant in front of the Capitolium, the city's main tourist attraction.
On the second day, in the morning, I walked around the city a bit more, then went to the coworking space and had another incredible cheese table for lunch.
Video Team
One of my ideas for going to Toulouse was to help the video team set up the equipment for broadcasting and recording the talks. I wanted to follow this work from the beginning and learn some details, something I can't do before the DebConfs because I always arrive after people have already set up the infrastructure. I would then like to reproduce this work at the MiniDebConfs in Brazil, such as the one in Maceió that is already scheduled for May 1-4, 2025.
As I had agreed with the people from the video team that I would help set up the equipment, on Friday night we went to the University and stayed in the room working. I asked several questions about what they were doing, about the equipment, and I was able to clear up several doubts. Over the next two days I was handling one of the cameras during the talks. And on Sunday night I helped put everything away.
Thanks to olasd, tumbleweed and ivodd for their guidance and patience.
The event in general
There was also a meeting with some members of the publicity team who were there with the DPL. We went to a cafeteria and talked mainly about areas that could be improved in the team.
The talks at MiniDebConf were very good and the recordings are also available here.
I ended up not watching any of the talks from the general schedule at Capitole du Libre because they were in French. It's always great to see free software events abroad to learn how they are done there and to bring some of those experiences to our events in Brazil.
I hope that MiniDebConf in Toulouse will continue to take place every year, or that the French community will hold the next edition in another city and I will be able to join again :-) If everything goes well, in July next year I will return to France to join DebConf25 in Brest.
More photos
Long Live Evil is a portal fantasy (or, arguably more precisely, a
western take on an isekai villainess fantasy) and the first book
of a series. If the author's name sounds familiar, it's possibly because
of In Other Lands, which got a bunch of award nominations in 2018.
She has also written a lot of other YA fantasy, but this is her first
adult epic fantasy novel.
Rae is in the hospital, dying of cancer. Everything about that
experience, from the obvious to the collapse of her friendships,
absolutely fucking sucks. One of the few bright points is her sister's
favorite fantasy series, Time of Iron, which her sister started
reading to her during chemo sessions. Rae mostly failed to pay attention
until the end of the first book and the rise of the Emperor. She fell in
love with the brooding, dangerous anti-hero and devoured the next two
books. The first book was still a bit hazy, though, even with the help of
a second dramatic reading after she was too sick to read on her own.
This will be important later.
After one of those reading sessions, Rae wakes up to a strange woman in
her hospital room who offers her an option. Rather than die a miserable
death that bankrupts her family, she can go through a door to Eyam, the
world of Time of Iron, and become the character who suits her best.
If she can steal the Flower of Life and Death from the imperial greenhouse
on the one day a year that it blooms, she will wake up, cured. If not,
she will die. Rae of course goes through, and wakes in the body of Lady
Rahela, the Beauty Dipped in Blood, the evil stepsister. One of the
villains, on the night before she is scheduled to be executed.
Rae's initial panic slowly turns to a desperate glee. She knows all of
these characters. She knows how the story will turn out. And she has a
healthy body that's not racked with pain. Maybe she's not the heroine,
but who cares, the villains are always more interesting anyway. If she's
going to be cast as the villain, she's going to play it to the hilt. It's
not like any of these characters are real.
Stories in which the protagonists are the villains are not new
(Nimona and Hench come to mind just among books I've reviewed), but they are
having a moment. Assistant to the Villain by Hannah Nicole Maehrer
came out last year, and this book and Django Wexler's How to Become
the Dark Lord and Die Trying both came out this year. This batch of
villain books all take different angles on the idea, but they lean heavily
on humor. In Long Live Evil, that takes the form of Rae's giddy
embrace of villainous scheming, flouncing, and blatant plot manipulation,
along with her running commentary on the various characters and their
in-story fates.
The setup here is great. Rae is not only aware that she's in a story, she
knows it's full of cliches and tropes. Some of them she loves, some of
them she thinks are ridiculous, and she isn't shy about expressing both of
those opinions. Rae is a naturally dramatic person, and it doesn't take
her long to lean into the opportunities for making dramatic monologues and
villainous quips, most of which involve modern language and pop culture
references that the story characters find baffling and disconcerting.
Unfortunately, the base Time of Iron story is, well, bad. It's
absurd grimdark epic fantasy with paper-thin characters and angst as a
central character trait. This is clearly intentional for both in-story
and structural reasons. Rae enjoys it precisely because it's full of
blood and battles and over-the-top brooding, malevolent anti-heroes, and
Rae's sister likes the impossibly pure heroes who suffer horrible fates
while refusing to compromise their ideals. Rae is also about to turn the
story on its head and start smashing its structure to try to get herself
into position to steal the Flower of Life and Death, and the story has to
have a simple enough structure that it doesn't get horribly confusing once
smashed. But the original story is such a grimdark parody, and so not my
style of fantasy, that I struggled with it at the start of the book.
This does get better eventually, as Rae introduces more and more
complications and discovers some surprising things about the other
characters. There are several delightful twists concerning the impossibly
pure heroine of the original story that I will not spoil but that I
thought retroactively made the story far more interesting. But that leads
to the other problem: Rae is both not very good at scheming, and is
flippant and dismissive of the characters around her. These are both
realistic; Rae is a young woman with cancer, not some sort of genius
mastermind, and her whole frame for interacting with the story is fandom
discussions and arguments with her sister. Early in the book, it's rather
funny. But as the characters around her start becoming more fleshed out
and complex, Rae's inability to take them seriously starts to grate. The
grand revelation to Rae that these people have their own independent
existence comes so late in the book that it's arguably a spoiler, but it
was painfully obvious to everyone except Rae for hundreds of pages
before it got through Rae's skull.
Those are my main complaints, but there was a lot about this book that I
liked. The Cobra, who starts off as a minor villain in the story, is by
far the best character of the book. He's not only more interesting than
Rae, he makes everyone else in the book, including Rae, more interesting
characters through their interactions. The twists around the putative
heroine, Lady Rahela's stepsister, are a bit too long in coming but are an
absolute delight. And Key, the palace guard that Rae befriends at the
start of the story, is the one place where Rae's character dynamic
unquestionably works. Key anchors a lot of Rae's scenes, giving them a
sense of emotional heft that Rae herself would otherwise undermine.
The narrator in this book does not stick with Rae. We also get viewpoint
chapters from the Cobra, the Last Hope, and Emer, Lady Rahela's maid. The
viewpoints from the Time of Iron characters can be a bit
eye-roll-inducing at the start because of how deeply they follow the
grimdark aesthetic of the original story, but by the middle of the book I
was really enjoying the viewpoint shifts. This story benefited immensely
from being seen from more angles than Rae's chaotic manipulation. By the
end of the book, I was fully invested in the plot line following Cobra and
the Last Hope, to the extent that I was a bit disappointed when the story
would switch back to Rae.
I'm not sure this was a great book, but it was fun. It's funny in places,
but I ended up preferring the heartfelt parts to the funny parts. It is a
fascinating merger of gleeful fandom chaos and rather heavy emotional
portrayals of both inequality and the experience of terminal illness.
Rees Brennan is a stage four cancer survivor and that really shows;
there's a depth, nuance, and internal complexity to Rae's reactions to
illness, health, and hope that feels very real. It is the kind of book
that can give you emotional whiplash; sometimes it doesn't work, but
sometimes it does.
One major warning: this book ends on a ridiculous cliffhanger and does not
in any sense resolve its main plot arc. I found this annoying, not so
much because of the wait for the second volume, but because I thought this
book was about the right length for the amount of time I wanted to spend
in this world and wish Rees Brennan had found a way to wrap up the story
in one book. Instead, it looks like there will be three books. I'm in
for at least one more, since the story was steadily getting better towards
the end of Long Live Evil, but I hope the narrative arc survives
being stretched out across that many words.
This one's hard to classify, since it's humorous fantasy on the cover and
in the marketing, and that element is definitely present, but I thought
the best parts of the book were when it finally started taking itself
seriously. It's metafictional, trope-subverting portal fantasy full of
intentional anachronisms that sometimes fall flat and sometimes work
brilliantly. I thought the main appeal of it would be watching Rae
embrace being a proper villain, but then the apparent side
characters stole the show. Recommended, but you may have to be in just
the right mood.
Content notes: Cancer, terminal illness, resurrected corpses, wasting
disease, lots of fantasy violence and gore, and a general grimdark
aesthetic.
Rating: 7 out of 10
Unexploded Remnants is a science fiction adventure novella. The
protagonist and world background would support an episodic series, but as
of this writing it stands alone. It is Elaine Gallagher's first
professional publication.
Alice is the last survivor of Earth: an explorer, information trader, and
occasional associate of the Archive. She scouts interesting places, looks
for inconsistencies in the stories the galactic civilizations tell
themselves, and pokes around ruins for treasure. As this story opens, she
finds a supposedly broken computer core in the Alta Sidoie bazaar that is
definitely not what the trader thinks it is. Very shortly thereafter,
she's being hunted by a clan of dangerous Delosi while trying to decide
what to do with a possibly malevolent AI with frightening intrusion
abilities.
This is one of those stories where all the individual pieces sounded
great, but the way they were assembled didn't click for me. Unusually,
I'm not entirely sure why. Often it's the characters, but I liked Alice
well enough. The Lewis Carroll allusions were there but not overdone, her
computer agent Bugs is a little too much of a Warner Brothers cartoon but
still interesting, and the world building has plenty of interesting hooks.
I certainly can't complain about the pacing: the plot moves briskly along
to a somewhat predictable but still adequate conclusion. The writing is
smooth and competent, and the world is memorable enough that I'm still
thinking about it.
And yet, I never connected with this story. I think it may be because
both Alice and the tight third-person narrator tend towards breezy
confidence and matter-of-fact descriptions. Alice does, at times, get
scared or angry, but I never felt those emotions. They were just events
that were described to me. There wasn't an emotional hook, a place where
the character grabbed me, and so it felt like everything was happening at
an odd remove. The advantage of this approach is that there are no
overwrought emotional meltdowns or brooding angstful protagonists, just an
adventure story about a competent and thoughtful character, but I think I
wanted a bit more emotional involvement than I got.
The world background is the best part and feels like it could be part of a
larger series. The Milky Way is connected by an old, vast, and only
partly understood network of teleportation portals, which had cut off
Earth for unknown reasons and then just as mysteriously reactivated when
Alice, then Andrew, drunkenly poked at a standing stone while muttering an
old prayer in Gaelic. The Archive spent a year sorting out her
intellectual diseases (capitalism was particularly alarming) and giving
her a fresh start with a new body. Humanity subsequently destroyed itself
in a paroxysm of reactionary violence, leaving Alice a free agent, one of
a kind in a galaxy of dizzying variety and forgotten history.
Gallagher makes great use of the weirdness of the portal network to create
a Star Wars style of universe: the focus is more on the diversity
of the planets and alien species than on a coherent unifying structure.
The settings of this book are not prone to
Planet of
the Hats problems. They instead have the contrasts that one would get if
one dropped portals near current or former Earth population centers and
then took a random walk through them (or, in other words, what playing
GeoGuessr on a world map
feels like). I liked this effect, but I have to admit that it also added
to that sense of sliding off the surface of the story. The place
descriptions were great bits of atmosphere, but I never cared about
them. There isn't enough emotional coherence to make them memorable.
One of the more notable quirks of this story is the description of
ideologies and prejudices as viral memes that can be cataloged, cured, and
deployed like weapons. This is a theme of the world-building as well:
this society, or at least the Archive-affiliated parts of it, classifies
some patterns of thought as potentially dangerous but treatable contagious
diseases. I'm not going to object too much to this as a bit of background
and characterization in a fairly short novella stuffed with a lot of other
world-building and plot, but there was something about treating ethical
systems like diseases that bugged me in much the same way that
medicalization of neurodiversity bugs me. I think some people will find
that sense of moral clarity relaxing and others will find it vaguely
irritating, and I seem to have ended up in the second group.
Overall, I would classify this as an interesting not-quite-success. It
felt like a side story in a larger universe, like a story that would work
better if I already knew Alice from other novels and had an established
emotional connection with her. As is, I would not really recommend it,
but there are enough good pieces here that I would be interested to see
what Gallagher does next.
Rating: 6 out of 10
Review: Delilah Green Doesn't Care, by Ashley Herring Blake
Series:
Bright Falls #1
Publisher:
Jove
Copyright:
February 2022
ISBN:
0-593-33641-0
Format:
Kindle
Pages:
374
Delilah Green Doesn't Care is a sapphic romance novel. It's the
first of a trilogy, although in the normal romance series fashion each
book follows a different protagonist and has its own happy ending. It is
apparently classified as romantic comedy, which did not occur to me while
reading but which I suppose I can see in retrospect.
Delilah Green got the hell out of Bright Falls as soon as she could and
tried not to look back. After her father died, her step-mother lavished
all of her perfectionist attention on her overachiever step-sister,
leaving Delilah feeling like an unwanted ghost. She escaped to New York
where there was space for a queer woman with an acerbic personality and a
burgeoning career in photography. Her estranged step-sister's upcoming
wedding was not a good enough reason to return to the stifling small town
of her childhood. The pay for photographing the wedding was, since it
amounted to three months of rent and trying to sell photographs in
galleries was not exactly a steady living. So back to Bright Falls
Delilah goes.
Claire never left Bright Falls. She got pregnant young and ended up with
a different life than she expected, although not a bad one. Now she's
raising her daughter as a single mom, running the town bookstore, and
dealing with her unreliable ex. She and Iris are Astrid Parker's best
friends and have been since fifth grade, which means she wants to be happy
for Astrid's upcoming wedding. There's only one problem: the groom. He's
a controlling, boorish ass, but worse, Astrid seems to turn into a
different person around him. Someone Claire doesn't like.
Then, to make life even more complicated, Claire tries to pick up Astrid's
estranged step-sister in Bright Falls's bar without recognizing her.
I have a lot of things to say about this novel, but here's the core of my
review: I started this book at 4pm on a Saturday because I hadn't read
anything so far that day and wanted to at least start a book. I finished
it at 11pm, having blown off everything else I had intended to do that
evening, completely unable to put it down.
It turns out there is a specific type of romance novel protagonist that I
absolutely adore: the sarcastic, confident, no-bullshit character who is
willing to pick the fights and say the things that the other overly polite
and anxious characters aren't able to get out. Astrid does not react well
to criticism, for reasons that are far more complicated than it may first
appear, and Claire and Iris have been dancing around the obvious problems
with her surprise engagement. As the title says, Delilah thinks she
doesn't care: she's here to do a job and get out, and maybe she'll get to
tweak her annoying step-sister a bit in the process. But that also means
that she is unwilling to play along with Astrid's obsessively controlling
mother or her obnoxious fiance, and thus, to the barely disguised glee of
Claire and Iris, is a direct threat to the tidy life that Astrid's mother
is trying to shoehorn her daughter into.
This book is a great example of why I prefer sapphic romances: I think
this character setup would not work, at least for me, in a heterosexual
romance. Delilah's role only works if she's a woman; if a male character
were the sarcastic conversational bulldozer, it would be almost impossible
to avoid falling into the gender stereotype of a male rescuer. If this
were a heterosexual romance trying to avoid that trap, the long-time
friend who doesn't know how to directly confront Astrid would have to be
the male protagonist. That could work, but it would be a tricky book to
write without turning it into a story focused primarily on the subversion
of gender roles. Making both protagonists women dodges the problem
entirely and gives them so much narrative and conceptual space to simply
be themselves, rather than characters obscured by the shadows of societal
gender rules.
This is also, at its core, a book about friendship. Claire, Astrid, and
Iris have the sort of close-knit friend group that looks exclusive and
unapproachable from the outside. Delilah was the stereotypical outsider,
mocked and excluded when they thought of her at all. This, at least, is
how the dynamics look at the start of the book, but Blake did an
impressive job of shifting my understanding of those relationships without
changing their essential nature. She fleshes out all of the characters,
not just the romantic leads, and adds complexity, nuance, and perspective.
And, yes, past misunderstanding, but it's mostly not the cheap sort that
sometimes drives romance plots. It's the misunderstanding rooted in
remembered teenage social dynamics, the sort of misunderstanding that
happens because communication is incredibly difficult, even more difficult
when one has no practice or life experience, and requires knowing oneself
well enough to even know what to communicate.
The encounter between Delilah and Claire in the bar near the start of the
book is the cornerstone of the plot, but the moment that grabbed me and pulled
me in was Delilah's first interaction with Claire's daughter Ruby. That
was the point when I knew these were characters I could trust, and Blake
never let me down. I love how Ruby is handled throughout this book, with
all of the messy complexity of a kid of divorced parents with her own life
and her own personality and complicated relationships with both parents
that are independent of the relationship their parents have with each
other.
This is not a perfect book. There's one prank scene that I thought was
excessively juvenile and should have been counter-productive, and there's
one tricky question of (nonsexual) consent that the book raises and then
later seems to ignore in a way that bugged me after I finished it. There
is a third-act breakup, which is not my favorite plot structure, but I
think Blake handles it reasonably well. I would probably find more
niggles and nitpicks if I re-read it more slowly. But it was utterly
engrossing reading that exactly matched my mood the day that I picked it
up, and that was a fantastic reading experience.
I'm not much of a romance reader and am not the traditional audience for
sapphic romance, so I'm probably not the person you should be looking to
for recommendations, but this is the sort of book that got me to
immediately buy all of the sequels and start thinking about a re-read.
It's also the sort of book that dragged me back in for several chapters
when I was fact-checking bits of my review. Take that recommendation for
whatever it's worth.
Content note: Reviews of Delilah Green Doesn't Care tend to call it
steamy or spicy. I have no calibration for this for romance novels. I
did not find it very sex-focused (I have read genre fantasy novels with
more sex), but there are several on-page sex scenes if that's something
you care about one way or the other.
Followed by Astrid Parker Doesn't Fail.
Rating: 9 out of 10
In September of this year, I visited Kenya to attend the State of the Map conference. I spent six nights in the capital Nairobi, two nights in Mombasa, and one night on a train. I was very happy with the visa process being smooth and quick. Furthermore, I stayed at the Nairobi Transit Hotel with other attendees, with Ibtehal from Bangladesh as my roommate. One of the memorable moments was the time I spent at a local coffee shop nearby. We used to go there at midnight, despite the gratings on the shops suggesting such adventures were unsafe. Fortunately, nothing bad happened, and we were rewarded with a fun time with the locals.
The coffee shop Ibtehal and I used to visit at midnight
Grating at a chemist shop in Mombasa, Kenya
The country lies on the equator, which might give the impression of extremely hot temperatures. However, Nairobi was on the cooler side (10 to 25 degrees Celsius), and I found myself needing a hoodie, which I bought the next day. It also served as a nice souvenir, as it had an outline of the map of Africa printed on it.
I also bought a Safaricom SIM card for 100 shillings and recharged it with 1000 shillings for 8 GB internet with 5G speeds and 400 minutes talk time.
A visit to Nairobi's Historic Cricket Ground
On this trip, I got a unique souvenir that can't be purchased from the market: a cricket jersey worn in an ODI match by a player. The story goes as follows: I was roaming around the market with my friend Benson from Nairobi to buy a Kenyan cricket jersey for myself, but we couldn't find any. So, Benson had the idea of visiting the Nairobi Gymkhana Club, which used to be Kenya's main cricket ground. It has hosted some historic matches, including the 2003 World Cup match in which Kenya beat the mighty Sri Lankans, and the 1996 match in which Shahid Afridi scored the then-fastest ODI century, off just 37 balls.
Although entry to the club was exclusively for members, I was warmly welcomed by the staff. Upon reaching the cricket ground, I met some Indian players who played in Kenyan leagues, as well as Lucas Oluoch and Dominic Wesonga, who have represented Kenya in ODIs. When I expressed interest in getting a jersey, Dominic agreed to send me pictures of his jersey. I liked his jersey and collected it from him. I gave him 2000 shillings, an amount suggested by those Indian players.
Me with players at the Nairobi Gymkhana Club
Cricket pitch at the Nairobi Gymkhana Club
A view of the cricket ground inside the Nairobi Gymkhana Club
Scoreboard at the Nairobi Gymkhana cricket ground
Giraffe Center in Nairobi
Kenya is known for its safaris and has no shortage of national parks. In fact, Nairobi is the only capital in the world with a national park. I decided not to visit one, as most of them were expensive and offered multi-day tours, and I didn't want to spend that much time on wildlife.
Instead, I went to the Giraffe Center in Nairobi with Pragya and Rabina. The ticket cost 1500 Kenyan shillings (1000 Indian rupees). In Kenya, matatus (shared vans, usually decorated with portraits of famous people and playing rap songs) are the most popular means of public transport. Reaching the Giraffe Center from our hotel required taking five matatus, which cost a total of 150 shillings, and a 2 km walk. The journey back was 90 shillings, suggesting that we hadn't found the most efficient route to get there. At the Giraffe Center, we fed giraffes and took photos.
A matatu with a Notorious BIG portrait.
Inside the Giraffe Center
Train ride from Nairobi to Mombasa
I took a train from Nairobi to Mombasa. The train is known as the SGR Train, where SGR refers to Standard Gauge Railway. The journey was around 500 km. M-Pesa was the only way to pay for pre-booking the train ticket, and I didn't have an M-Pesa account. Pragya's friend Mary helped facilitate the payment. I booked a second-class ticket, which cost 1500 shillings (1000 Indian rupees).
The train was scheduled to depart from Nairobi at 08:00 in the morning and arrive in Mombasa at 14:00. The security check at the station required scanning our bags and having them sniffed by sniffer dogs. I also fell victim to a scam by a security official who offered to help me get my ticket printed, only to later ask me to get him some coffee, which I politely declined.
Before boarding the train, I was treated to some stunning views at the Nairobi Terminus station. The train had only seated accommodation, but I wished it were a sleeper, as I was sleep-deprived. The train was neat and clean, with good toilets, and it reached Mombasa on time at around 14:00.
SGR train at Nairobi Terminus.
Interior of the SGR train
Arrival in Mombasa
Mombasa Terminus station.
Mombasa was a bit hotter than Nairobi, with temperatures reaching around 30 degrees Celsius. However, that's not too hot for me, as I am used to higher temperatures in India. I had booked a hostel in the Old Town and was looking for a hitchhike from the Mombasa Terminus station. After trying for more than half an hour, I took a matatu that dropped me 3 km from my hostel for 200 shillings (140 Indian rupees). I tried to hitchhike again but couldn't find a ride.
I think I know why I couldn't get a ride in both cases. In the first case, the Mombasa Terminus was in an isolated place, so most of the vehicles were taxis or matatus, while any noncommercial cars were there to pick up friends and family. If the station were in the middle of the city, there would be many more car and truck drivers passing by, increasing my chances of getting a ride. In the second case, my hostel was at the edge of the city, and nobody was going in that direction. In fact, many drivers told me they would love to give me a ride, but they were going some other way.
Finally, I took a tuk-tuk for 70 shillings to reach my hostel, Tulia Backpackers. It was 11 USD (1400 shillings) for one night. The balcony gave a nice view of the Indian Ocean. The rooms had fans, but there was no air conditioning. Each bed also had mosquito nets. The place was within walking distance of the famous Fort Jesus. Mombasa has had more Islamic influence than Nairobi and also has many Hindu temples.
The balcony at Tulia Backpackers Hostel had a nice view of the ocean.
A room inside the hostel with fans and mosquito nets on the beds
Visiting White Sandy Beaches and Getting a Hitchhike
Visiting Nyali Beach marked my first time ever at a white sand beach. It was about 10 km from the hostel. The next day, I visited Diani Beach, which was 30 km from the hostel. Going to Diani Beach required crossing a river, for which there's a free ferry service every few minutes, followed by taking a matatu to Ukunda and then a tuk-tuk. The journey gave me a glimpse of the beautiful countryside of Kenya.
Nyali beach is a white sand beach
This is the ferry service for crossing the river.
During my return from Diani Beach to the hostel, I was successful in hitchhiking. However, it was only a 4 km ride and not sufficient to reach Ukunda, so I tried to get another ride. When a truck stopped for me, I asked for a ride to Ukunda. Later, I learned that they were going in the same direction as me, so I got off within walking distance of my hostel. The ride was around 30 km. I also learned the difference between a truck ride and a matatu or car ride. For instance, matatus and cars are much faster and cooler due to air conditioning, while trucks tend to be warmer because they lack it. Furthermore, the truck was stopped at many checkpoints by the police for inspections as it carried goods, which is not the case with matatus. Anyway, it was a nice experience, and I am grateful for the ride. I had a nice conversation with the truck drivers about Indian movies and my experiences in Kenya.
Diani Beach is a popular white sand beach in Kenya.
Selfie with truck drivers who gave me the free ride
Back to Nairobi
I took the SGR train from Mombasa back to Nairobi. This time I took the night train, which departs at 22:00 and reaches Nairobi at around 04:00 in the morning. I could not sleep comfortably, since the train only has seats, no sleeper berths.
I had booked the Zarita Hotel in Nairobi and had confirmed in advance that they allowed early morning check-in. Usually, hotels have a fixed checkout time, say 11:00 in the morning, and you are not allowed to stay beyond that regardless of when you checked in. But this hotel checked me in for a full 24 hours. I paid in US dollars; the cost was 12 USD.
Almost Got Stuck in Kenya
Two days before my scheduled flight from Nairobi back to India, I heard the news that the airports in Kenya were closed due to a strike. Rabina and Pragya had their flight back to Nepal canceled that day, which left them stuck in Nairobi for two additional days. I called Sahil in India and learned during the conversation that the strike had been called off that evening. It was a big relief, and I was fortunate to be able to fly back to India without any changes to my plans.
Newspapers at a stand in Kenya covering news on the airport closure
Experience with locals
I had no problems communicating with Kenyans, as everyone I met knew English to an extent that easily surpasses what you would find in big cities in India. Additionally, I learned a few words of Swahili, Kenya's most widely spoken local language, such as Asante, meaning thank you, Jambo for hello, and Karibu for welcome. Knowing a few words in the local language went a long way.
I am not sure what's up with haggling in Kenya. It wasn't easy to bring down the price of souvenirs. I bought a fridge magnet for 200 shillings, which was the quoted price. On the other hand, it was much easier to bargain with taxis, tuk-tuks, and motorbikes.
I stayed at three hotels/hostels in Kenya. None of them had air conditioning. Two of the places were in Nairobi, and they didn't even have fans in the rooms, while the one in Mombasa had only fans. All of them had good Wi-Fi, except Tulia, where the internet was a bit shaky overall.
My experience with the hotel staff was great. For instance, we requested that the Nairobi Transit Hotel cancel the included breakfast in order to reduce the room cost, but later realized that it was not a good idea. The hotel allowed us to revert and even offered one of our missed breakfasts during dinner.
The staff at Tulia Backpackers in Mombasa facilitated the ticket payment for my train from Mombasa to Nairobi. One of the staff members also gave me a lift to the place where I could catch a matatu to Nyali Beach. They even added an extra tea bag to my tea when I requested it to be stronger.
Food
At the Nairobi Transit Hotel, a Spanish omelette with tea was served for breakfast. I noticed that Spanish omelette appeared on the menus of many restaurants, suggesting that it is popular in Kenya. This was my first time having the dish. The milk tea in Kenya, referred to by locals as white tea, is lighter than Indian tea (they don't use a lot of tea leaves).
Spanish Omelette served in breakfast at Nairobi Transit Hotel
I also sampled ugali with eggs. In Mombasa, I visited an Indian restaurant called New Chetna and had a buffet thali there twice.
Ugali with eggs.
Tips for Exchanging Money
In Kenya, I exchanged money at forex shops a couple of times. I received good exchange rates for 50 USD and larger bills. For instance, when 1 USD was 129 shillings on xe.com, I got 128.3 shillings per USD (12,830 shillings in total) for two 50 USD notes at an exchange in Nairobi, while 127 shillings was the highest rate offered at the banks. On the other hand, for smaller bills such as a one US dollar note, I would have got only 125 shillings. A passport was the only document required for the exchange, and they also provided a receipt.
A good piece of advice for travelers is to keep 50 USD or larger bills for exchanging into the local currency while saving the smaller US dollar bills for accommodation, as many hotels and hostels accept payment in US dollars (in addition to Kenyan shillings).
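To put those rates in perspective, here is a quick back-of-the-envelope comparison using the figures quoted above (illustrative only; actual rates vary day to day and between forex shops):

```python
# Compare exchanging 100 USD as two 50 USD notes vs. a hundred 1 USD notes,
# using the rates I was quoted in Nairobi (illustrative figures).
XE_RATE = 129.0          # xe.com mid-market rate, shillings per USD
LARGE_BILL_RATE = 128.3  # rate offered for 50 USD notes
SMALL_BILL_RATE = 125.0  # approximate rate for a 1 USD note

usd = 100
large = usd * LARGE_BILL_RATE
small = usd * SMALL_BILL_RATE
print(f"Large bills: {large:,.0f} shillings")        # 12,830 shillings
print(f"Small bills: {small:,.0f} shillings")        # 12,500 shillings
print(f"Difference:  {large - small:,.0f} shillings")  # 330 shillings
```

Over a hundred dollars, the small-bill penalty adds up to a couple of taxi rides' worth of shillings.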
Missed Malindi and Lamu
There were more places on my to-visit list in Kenya, but I simply didn't have time to cover them, as I don't like rushing through places, especially in a foreign country where there is a chance of me underestimating how long transit takes. I would have liked to visit at least one of the Kilifi, Watamu, or Malindi beaches. Further, Lamu seemed like a unique place to visit, as it has no cars or motorized transport; the only options are boats and donkeys.
That's it for now. Meet you in the next one :)
As DebConf22 was coming to an end in Kosovo, Eeveelweezel invited me to prepare a talk for the Chicago Python User Group. I replied that I'm not really that much of a Python guy… but that I would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea, and the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy?
On the other hand, I have long used a very simplistic and limited filesystem I designed as an implementation project in class: FIUnamFS (for Facultad de Ingeniería, Universidad Nacional Autónoma de México: the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for subdirectories, and is often limited to the size of a 1.44 MB floppy disk.
As I give this filesystem as a project to my students (and not as mere homework), I always ask them to try to provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be to support FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: use FUSE.
But, in the six semesters I've used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation.
Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon, and further explaining, Stavros', and several others) explaining how to provide basic functionality. I found particularly useful a presentation by Matteo Bertozzi given ~15 years ago at PyCon4… But none of those is, IMO, followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?).
And of course, there isn't a single interface to work from. In Python alone, we can find python-fuse, Pyfuse, Fusepy… Where to start?
So I set out to try and help.
Over the past couple of weeks, I have been slowly working on my own version, presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but maybe my documentation ends up obfuscating the intent? I hope not and, read on, I've provided some remediation).
I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to also add filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS).
So far, I have added five stepwise pieces, starting from the barest possible empty filesystem and adding system calls (and functionality) until reaching (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
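To give a flavor of what this progression builds toward, here is a minimal sketch of a read-only, single-file in-RAM filesystem in the style of fusepy's Operations interface. This is my own illustration, not code from the guide; the method names and signatures follow fusepy, the mount helper at the end assumes the fusepy package is installed, and a plain OSError stands in for fusepy's FuseOSError (which subclasses it):

```python
# A toy read-only filesystem exposing a single file, /hello.txt, held in RAM.
# The methods mirror the fusepy Operations callbacks FUSE would invoke.
import errno
import stat
import time


class HelloFS:
    """Single-file, read-only in-RAM filesystem (fusepy-style callbacks)."""

    def __init__(self):
        self.data = b"Hello from a toy FUSE filesystem!\n"
        self.created = time.time()

    def getattr(self, path, fh=None):
        # fusepy expects a dict of stat fields.
        common = {"st_atime": self.created, "st_mtime": self.created,
                  "st_ctime": self.created, "st_uid": 0, "st_gid": 0}
        if path == "/":
            return {**common, "st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello.txt":
            return {**common, "st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1, "st_size": len(self.data)}
        # Real fusepy code would raise FuseOSError(errno.ENOENT) here.
        raise OSError(errno.ENOENT, "no such file or directory")

    def readdir(self, path, fh):
        # Directory listing for "/": the two dot entries plus our one file.
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        # Serve the requested byte range out of the in-RAM buffer.
        return self.data[offset:offset + size]


def mount(mountpoint):
    """Mount the filesystem; requires fusepy (`pip install fusepy`)."""
    from fuse import FUSE  # imported lazily so the class is testable without it
    FUSE(HelloFS(), mountpoint, foreground=True, ro=True)
```

Once mounted (e.g. `mount("/tmp/hellofs")`), `cat /tmp/hellofs/hello.txt` would print the greeting; the callbacks themselves are plain Python methods and can be exercised directly without mounting anything.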
I think providing fun or useful examples is also a good way to get students to use what I'm teaching, so I've added some ideas I've had: a DNS filesystem, an on-the-fly Markdown-compiling filesystem, an unzip filesystem, and an uncomment filesystem.
They all provide something that could be seen as useful, in a way that's easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically!
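The core transform behind something like uncommentfs can be sketched in a few lines. This is a naive, line-based guess at the idea, not the project's actual code; notably, it would mangle a `#` inside a string literal, which a tokenizer-based version would handle properly:

```python
def strip_comments(source: str) -> str:
    """Naively drop # comments and blank lines from Python-style source.

    Toy implementation: a '#' inside a string literal is wrongly
    treated as a comment start.
    """
    kept = []
    for line in source.splitlines():
        code = line.split("#", 1)[0].rstrip()  # cut at first '#', trim right
        if code:                               # skip now-empty lines
            kept.append(code)
    return "\n".join(kept) + "\n"


example = "x = 1  # set x\n\n# a whole-line comment\ny = 2\n"
print(strip_comments(example))
```

In the filesystem version, this transform would simply be applied inside the read() callback, so every file appears pre-stripped to the reader.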
So I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk virtually at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week. And also in November, in person, at nerdear.la, which will be held in Mexico City for the first time.
Of course, I will also share this project with my students in the next couple of weeks… and hope it manages to lure them into implementing FUSE in Python. At some point, I shall report!
Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.