Search Results: "abi"

15 August 2024

Joerg Jaspert: Electric Car, Vacation trip

Electric Car A while ago I got my hands on an electric car - after not having owned a car for most of my life (there really is not much need here). It wasn't planned nor a goal of mine, but it kind of came out of talks with my boss, so now it's there. Due to some special rules in the German tax system it turns out really cheap for me - it comes from the company, which allows personal use. So I have to pay taxes on its value plus the number of kilometers I have to drive to work. And the latter is what is good for me - I have home office in my contract, so there is no drive to work, except maybe once a year for something. So there is no regular commute to calculate, only the occasional trip if I ever really have to go to the office. And it being an electric car, running costs are also cheap - way below the outdated tech that needs gas to run, is annoyingly loud and stinks.

The car In the past I used either car sharing or a rental car when I needed one, depending on what I actually needed. The last few times I rented, I already tried electric variants, so I had something to compare with. What I have now is a Citroen e-Berlingo XL from 2023 (so not the latest revision from 2024), which is huge in terms of space, but small in terms of battery. It has 7 seats (though I currently have the last 2 taken out, no daily need) and more storage space than I need even on a vacation trip. The engine (or rather, its battery) is on the small side - a capacity of 50 kWh means it only has a 278 km range according to WLTP. In reality it is a good bit less - as usual, those numbers flatter the manufacturer. It turns out that on highways, in a more realistic mode than WLTP (read: real driving), it is somewhere around 120 km before one wants a charger again. But, looking around, at least in Germany that is not a big problem; there are more than enough chargers available.

Driving The actual driving experience is good. It sure is a huge car (4.7 m long, 1.85 m high/wide) and feels more like driving a (small) bus, but it is easy to handle. Maximum speed is limited to 135 km/h (or the small battery would suck even more), but that is more than enough, even on German highways. I did my vacation trip with cruise control set to 115 km/h and only went above that manually a very few times. Really relaxed driving, that was.

Vacation trip So we had a vacation just recently, and instead of renting a car for the trip we, of course, wanted to take the e-Berlingo. The distance was about double what the car can (realistically!) do, so one charging stop somewhere in the middle was a must. Never having charged on a highway yet - and being entirely new to this car - that made for a bit of nervousness, but it all turned out really well. There are really nice tools like ABRP to plan your trip including charging, which can take live availability data of charging points into the planning. And it turned out nicely - we reached our planned charging point and found a long queue of cars waiting. But it turned out those were all the poor folks that need actual gasoline for their outdated combustion engines. The charging points for EVs still had enough free space, so we could bypass the queue and directly start charging. We also did not need to repark the car after just a few minutes; we could directly start our break. With a charging time of approx. 30 minutes using the fast charger, such a break is long enough to get enough energy for the next part of the trip and short enough not to be annoying.

Charging prices At home it depends. If one has a photovoltaic system to get power, charging is basically free. If not, it depends on whichever contract one has; costs will be somewhere between 0 and ~30 cents per kWh. Not much, and way below gasoline costs. Outside, using a fast charger, prices vary depending on where you charge - and with which charging card. Prices range between 40 and 70 cents per kWh, and the same charging point can vary, depending only on the card one uses. That is something the EU could actually regulate better, similar to the regulations it introduced for mobile phones. Still, the costs are way below gasoline.

Charging cards There is a huge number of different providers available, and all do their own thing in pricing and in how one can use them. They do have standards (say, the plugs are standardized, and by now the way to start charging is too), and that enables roaming (using a charging card of one provider at a charging point of another), but other than that, it seems to be random. That is - if you use card A at a charging point of provider B you may pay 49 cents per kWh; if you use card C at the same point, it may charge you 79 cents. And card D isn't accepted at all. At some (the newer ones) you can pay directly by credit card, at many you can't. Some may allow PayPal or Google/Apple Pay. So in the end you need more than just one charging card - I have collected 8 free ones by now - just to be sure you can find a combination that isn't hugely overpriced.

12 August 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, July 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In July, 13 contributors have been paid to work on Debian LTS, their reports are available:
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 5.0h (out of 4.0h assigned and 6.0h from previous period), thus carrying over 5.0h to the next month.
  • Guilhem Moulin did 8.75h (out of 4.5h assigned and 15.5h from previous period), thus carrying over 11.25h to the next month.
  • Lee Garrett did 51.5h (out of 10.5h assigned and 43.0h from previous period), thus carrying over 2.0h to the next month.
  • Lucas Kanashiro did 5.0h (out of 5.0h assigned and 15.0h from previous period), thus carrying over 15.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 4.0h (out of 10.0h assigned and 14.0h from previous period), thus carrying over 20.0h to the next month.
  • Roberto C. Sánchez did 5.0h (out of 5.25h assigned and 6.75h from previous period), thus carrying over 7.0h to the next month.
  • Santiago Ruano Rincón did 6.0h (out of 16.0h assigned), thus carrying over 10.0h to the next month.
  • Sean Whitton did 2.25h (out of 6.0h assigned), thus carrying over 3.75h to the next month.
  • Sylvain Beucler did 39.5h (out of 2.5h assigned and 51.0h from previous period), thus carrying over 14.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).

Evolution of the situation In July, we released 1 DLA. August will be the month that Debian 11 makes the transition to LTS. Our contributors have already been hard at work with preparatory tasks and also with making contributions to packages in Debian 11, in close collaboration with the Debian security team and package maintainers. As a result, users and sponsors should not observe any especially notable differences as the transition occurs. While only one DLA was released in July (as a result of the transitional state of Debian 11 bullseye), there were some notable highlights. LTS contributor Guilhem Moulin prepared an update of libvirt for Debian 11 (in collaboration with the Old-Stable Release Managers and the Debian Security Team) to fix a number of outstanding CVEs which did not rise to the level of a DSA from the Debian Security Team. The update prepared by Guilhem will be included in Debian 11 as part of the final point release at the end of August, one of the final transition steps by the Release Managers as Debian 11 moves entirely to the LTS Team's responsibility. Notable work was also undertaken by contributors Lee Garrett (fixes to the ansible test suite and a bullseye update), Lucas Kanashiro (Rust toolchain, utilized by the clamav, firefox-esr, and thunderbird packages), and Sylvain Beucler (fixes to the ruby2.5/2.7 test suites and CI infrastructure), which will help improve the quality of updates produced during the next LTS cycle. June was the final month of LTS for Debian 10 (as announced on the debian-lts-announce mailing list). No additional Debian 10 security updates will be made available on security.debian.org. However, Freexian and its team of paid Debian contributors will continue to maintain Debian 10 going forward for customers of the Extended LTS offer. Subscribe right away if you still have Debian 10 systems which must be kept secure (and which cannot yet be upgraded).

Thanks to our sponsors Sponsors that joined recently are in bold.

Freexian Collaborators: Debian Contributions: autopkgtest/incus builds, live-patching, Salsa CI, Python 3.13 (by Stefano Rivera)

Debian Contributions: 2024-07 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

autopkgtest/Incus build streamlining, by Colin Watson Colin contributed a change to allow maintaining Incus container and VM images in parallel. Both of these are useful (containers are faster, but some tests need full machine isolation), and the build tools previously didn't handle that very well. This isn't yet in unstable, but once it is, keeping both flavours of unstable images up to date will be a simple matter of running this regularly:
RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie
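Once such images exist, they are consumed through autopkgtest's incus backend. A minimal, hypothetical usage sketch - hello_2.10-3.dsc is only a placeholder source package, and the image name follows the usual autopkgtest naming scheme (check autopkgtest-virt-incus(1) and incus image list for the exact names on your system):
# run a package's tests in the freshly built container image
autopkgtest hello_2.10-3.dsc -- incus autopkgtest/debian/sid/amd64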

Linux live-patching, by Santiago Ruano Rincón In collaboration with Emmanuel Arias, Santiago continued the work on support for applying security fixes to the Linux kernel in Debian, without the need to reboot the machine. As mentioned in the previous month's report, kpatch 0.9.9-1 (and 0.9.9-2 afterwards) was uploaded to unstable in July, closing the Intent to Salvage (ITS) bug. With this upload, the remaining RC bugs were solved, and kpatch was able to transition to Debian testing recently. Kpatch is expected to be an important component of the live-patching support, since it makes it easy to build a patch as a kernel module. Emmanuel and Santiago continued to work on the design for Linux live-patching and presented the current status in a DebConf24 presentation.
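The kpatch workflow itself is not shown in the report, but as a rough illustration of what it enables, here is a hedged sketch (option names come from kpatch's upstream documentation; the paths, patch name and resulting module name are illustrative and may differ on Debian):
# build a live-patch kernel module from an ordinary source patch
kpatch-build --vmlinux /usr/lib/debug/boot/vmlinux-$(uname -r) fix-cve.patch
# load the module into the running kernel without a reboot
kpatch load livepatch-fix-cve.ko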

Salsa CI, by Santiago Ruano Rincón To be able to add RISC-V support and to avoid using tools not packaged in Debian (see #331), the Salsa CI pipeline first needed to move away from kaniko for building the images used by the pipeline. Santiago created a merge request to use buildah instead, and it was merged last month. Santiago also prepared a couple more MRs related to how the images are built: initial RISC-V support, which should be merged after improving how built images are tested. The switch to buildah introduced a regression in the work-in-progress MR that adds a new build image so the build job can run sbuild. Santiago hopes to address this regression and continue with the sbuild-related MRs in August. Additionally, Santiago also contributed to the install docker-cli instead of docker.io in the piuparts image MR, and reviewed others such as reprotest: Add append-build-command option, fix failure at manual pipeline run when leaving RELEASE variable empty, and Fix image not found error on image building stage.
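For readers unfamiliar with buildah: unlike kaniko, it is packaged in Debian and can build OCI images from a Containerfile without a Docker daemon. A minimal, hypothetical sketch of the kind of step such a pipeline performs (the tag and the GitLab CI_REGISTRY_IMAGE variable are illustrative, not the actual Salsa CI code):
# build an image from the Containerfile in the current directory
buildah build -t "$CI_REGISTRY_IMAGE/base:unstable" .
# push it to the project's container registry
buildah push "$CI_REGISTRY_IMAGE/base:unstable"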

Python 3.13 Betas, by Stefano Rivera As Python 3.13 approaches its first release, Stefano has been uploading the beta releases to Debian unstable. Most of these have uncovered small bugs that needed to be investigated and fixed. Stefano also took the time to review the current patch set against CPython in Debian. Python 3.13 isn't marked as a supported Python release in Debian's Python tooling yet, so nothing has been built against it yet. Now that the Python 3.12 transition has completed, the next task will be to start trying to build Debian's Python module packages against Python 3.13, to estimate the work required to transition to 3.13 in unstable.

Miscellaneous contributions
  • Carles Pina updated the packages python-asyncclick, python-pyaarlo and prepared updates for python-ring-doorbell and simplemonitor.
  • Carles Pina updated (reviewing or translating) Catalan translations for adduser, apt-listchanges, debconf and shadow.
  • Colin merged OpenSSH 9.8, and prepared a corresponding release note for DSA support now being disabled. This version included some substantial changes to split the server into a listener binary and a per-session binary, and those required some corresponding changes in the GSS-API key exchange patch. Sorting out the details of this and getting it to work again took some time.
  • Colin upgraded 11 Python packages to new upstream versions, and modernized the build process and/or added non-superficial autopkgtests to several more.
  • Raphaël Hertzog tweaked tracker.debian.org's debci task to work around changes in the JSON output. He also improved tracker.debian.org's ability to detect bounces due to spam to avoid unsubscribing emails that are not broken, but that are better than Debian at rejecting spam.
  • Helmut Grohne monitored the /usr-move transition with few incidents. A notable one is that some systems have ended up with aliasing links that don't match the ones installed by base-files which could lead to an unpack error from dpkg. This is now prevented by having base-files.preinst error out.
  • Helmut investigated toolchain bootstrap failures with gcc-14 in rebootstrap but would only discover the cause in August.
  • Helmut sent an MR for the cross-exe-wrapper requested by Simon McVittie for gobject-introspection. It is a way of conditionally requesting qemu-user when emulation is required for execution during cross compilation.
  • Helmut sent three patches for cross build failures.
  • Thorsten Alteholz uploaded packages lprint and magicfilter to fix RC-bugs that appeared due to the introduction of gcc-14.
  • Santiago continued to work on activities related to the DebConf24 Content Team, including reviewing the schedule and handling updates on it.
  • Santiago worked on preparations for DebConf25, to be held in Brest, France, next year. A video of the BoF presented during DebConf24 can be found here.
  • Stefano worked on preparations for DebConf24, and helped to run the event.

9 August 2024

Kalyani Kenekar: One Backpack, One Passport: My First Solo Trip

Planning A Self-Organized Solo Trip You know the movie Queen? In that movie the actress Kangana Ranaut plays the role of Rani Mehra, a 24-year-old Punjabi woman - a simple, homely girl who was always reliant on her family. Similar to Rani, I too rarely ventured out without my parents and often needed my younger sibling by my side. Inspired by her transformation, I decided it was time to take control of my own story and discover who I truly am. Queen movie picture of Kangana

Trip Requirements

My First Passport The journey began with a significant first step: obtaining my first passport. Never having had one before, I scheduled the nearest available interview date, June 29, 2022. This meant traveling to Solapur, a city 309 km from my hometown, accompanied by my father. After successfully completing the interview, I received my passport on July 14, 2022.

Select A Country, Booking Flights And Accommodation Excited and ready to embark on my adventure, I planned a trip to Albania and booked the flight tickets. Why? I had heard from friends that it was a beautiful European country with beaches and other attractions; importantly, it didn't require a visa for Indian citizens and was more affordable than other European destinations. Before heading to Albania, I planned an overnight stop in Abu Dhabi with a transit visa, thanks to a friend who knew the process for obtaining it. Some of my friends were also travelling to Europe at the same time, quite close to my own plans, but I only realized that after the trip.

Day 1, Starting The Experience On July 20, 2022, I started my journey by traveling from Pune, Maharashtra, to Delhi, where my brother lives. He came to see me off at the airport, adding a touch of warmth and support to the beginning of my solo adventure. Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed at a backpacker hostel named Zostel in Paharganj, Delhi, to rest. During my stay, I noticed that many travelers at the hostel carried rucksacks, which sparked a desire in me to get one for my own trip to Europe. Up until then, I had always shopped with my mom and had never bought anything on my own. Inspired by the travelers, I set out to find a suitable rucksack. I traveled alone by metro from Paharganj to Rohini to visit a Decathlon store, where I purchased a 50-liter rucksack. This was a significant step in preparing for my European adventure and marked a milestone in my journey of self-reliance. Rucksack description tag Kalyani's backpack

Day 2, Flying To Abu Dhabi The following day, July 21, 2022, I had a flight to Abu Dhabi. I spent the night at the hostel to rest before my journey. On the day of the flight, I needed to reach the airport by 3 PM, and a friend kindly came to drop me off. With my rucksack packed and excitement building, I was ready for the next leg of my adventure. When we arrived at the airport, my friend saw me off, marking the start of my international journey. With mom-made spices, chutneys, and chilli flakes packed for comfort, I completed my immigration process in about two and a half hours. I then settled at the gate for my flight, feeling a mix of excitement and anxiety as thoughts raced through my mind. Mom-made spices Passport and boarding pass To ease my nerves, I struck up a conversation with a man seated nearby who was also traveling to Abu Dhabi for work. He provided helpful information about safety and transportation in Abu Dhabi, which reassured me. With the boarding process complete and my anxiety somewhat eased, I found my window seat on the flight and settled in, excited for the journey ahead. Next to me was a young man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining factory. We had an engaging conversation about work culture in Abu Dhabi and recruitment from India. Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and began finding my way to the hotel, the Premier Inn Abu Dhabi, which was in the airport area. To my surprise, I ran into the same man from the flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly accepted, since navigating an unfamiliar city with a short acquaintance felt safer. At the hotel gate, he asked if I had local currency (dirhams) for payment, as sometimes online transactions can fail. That hadn't crossed my mind, and I realized I might be left stranded if a transaction failed. Recognizing his help as a godsend, I asked if he could lend me some dirhams, promising to transfer the amount later. He kindly assured me I could pay him back once I reached the hotel room. With that relief, I checked into the hotel, feeling deeply grateful for the unexpected assistance, and transferred the money to him after getting to my room. Dirham money hotel room Kalyani in hotel room

Day 3, Flying And Arriving In Tirana Once in the hotel room, I found it hard to sleep, anxious about waking up on time for my flight. I set an alarm to wake up early, but my subconscious mind kept me alert, and I woke up before the alarm went off. I freshened up and went down for breakfast, where I found some vegetarian options like idli-sambar and bread with butter, along with some morning tea. After breakfast, I headed back to the airport, ready to catch my flight to my final destination: Tirana, Albania. Breakfast at hotel Airport area I reached Tirana, Albania after a six-hour flight, feeling exhausted and suffering from a headache. The air pressure had blocked my ears, and jet lag added to my fatigue. After collecting my checked luggage, I headed to the first ATM at the airport. Struggling to insert my card, I asked a nearby gentleman for help. He tried his best, but my card got stuck inside the machine. Panic set in as I worried about how I would survive without money. Taking a deep breath, I found an airport employee and explained the situation. The gentleman stayed with me, offering support and repeatedly apologizing for his mistake. However, it wasn't his fault - the ATM was out of order, which I hadn't noticed. My focus was solely on retrieving my ATM card. The airport employee worked diligently, using a hairpin to carefully extract my card. Finally, the card was freed, and I felt an immense sense of relief, grateful for the help of these kind strangers. I used another ATM, successfully withdrew money, and then went to an airport mobile SIM shop to buy a new SIM card for local internet and connectivity. SIM plans

Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country I had booked a stay at a backpacker hostel near the city center of Tirana. After sorting out the ATM and SIM card issues, I searched for a bus or any transport to get there. It was quite late, around 8:30 PM, and being in a new city, I was in a hurry. I saw a bus about to leave the airport, stopped it, and asked if it went to the city center. They gave me the green flag, so I boarded the airport service bus and reached the city center. Feeling very tired, I discovered that the hostel was about an hour and a half away on foot. Deciding to take a cab, I faced a challenge as the driver couldn't understand my English or accent. Using a mobile translator to convert my address from English to Albanian, I finally communicated my destination to him. With that sorted out, I headed to the Blue Door Backpacker Hostel, arrived around 9 PM, relieved to have finally reached my destination, and checked in. Hostel gate Street in Tirana I found my top bunk bed, only to realize I had booked a mixed-gender dormitory. This detail had completely escaped my notice during the booking process. I felt unsure about how to handle the situation. Coincidentally, my experience mirrored what Kangana faced in the movie Queen. Feeling acidic due to an empty stomach and the exhaustion of heavy traveling, I wasn't up to cooking in the hostel's kitchen. I asked the front desk about the nearest restaurant. It was nearly 9:30 PM, and the streets were deserted. To avoid any mishaps like in the movie Queen, I kept my passport securely locked in my bag, ensuring it wouldn't be a victim of theft. Venturing out for dinner, I felt uneasy on the quiet streets. I eventually found a restaurant recommended by the hostel, but the menu was almost entirely non-vegetarian. I struggled to ask about vegetarian options and was uncertain if any dishes contained eggs, as some people consider eggs to be vegetarian. Feeling frustrated and unsure, I left the restaurant without eating. I noticed a nearby grocery store that was about to close and managed to get a few extra minutes to shop. I bought some snacks, wafers, milk, and tea bags (though I couldn't find tea powder to make Indian-style tea). Returning to the hostel, I made do with wafers, cookies, and milk for dinner. That day was incredibly tough for me; filled with exhaustion and struggling in a new country, I was on the verge of tears. I made a video call home before sleeping on the top bunk bed. It was a new experience for me, sharing a room with both unknown men and women. I kept my passport safe inside my purse and under my pillow while sleeping, staying very conscious about its security.

Day 5, Exploring Nearby Places I woke up the next day at noon. After I had some coffee, the girl managing the hostel asked if I wanted breakfast. She offered curd with cornflakes, which I declined because I don't like curd. Instead, I ordered a pizza from a vegetarian pizza place with her help, and I started feeling better. I met some people in the hostel, some from Syria and others from Italy. I struggled to understand their accents but kept pushing myself to get involved in their discussions. Despite the challenges, I felt more at ease and was slowly adapting to my new environment. I went out of the hostel in the evening to buy some vegetables to cook something. I searched for shops and found some potatoes, tomatoes, and rice. I decided to cook khichdi, an Indian dish made with rice, and added some chili flakes I had brought from home. After preparing my dinner, I ate and then went to sleep again. Vegetable shop cooking in kitchen Food

Day 6, Tirana's Recent History The next day, I planned to explore the city and visited Bunkart-1, a fascinating museum in a massive underground bunker from the communist era. Originally built as a shelter for Albania's political and military elite, it now offers a unique glimpse into the country's history under Enver Hoxha's oppressive regime. The museum's exhibits include historical artifacts, photographs, and multimedia displays that detail the lives of Albanians during that time. Walking through the dimly lit corridors, I felt the weight of history and gained a deeper understanding of Albania's past. Bunkart photos

Day 7-8, Meeting Friends From India The next day, I unexpectedly met Chirag, who was returning from the Debian Conference 2022 held in Prizren, Kosovo, and staying at the same hostel. When I encountered him, he was talking on the phone, and I recognized from his accent that he was Indian. I introduced myself, and we discovered we had some mutual friends. Chirag told me that our common friend, Raju, was also coming to stay at the hostel the next day. This news made me feel relaxed and happy to have people I knew around. When Raju arrived, the three of us - Chirag, Raju, and I - planned to have dinner at an Indian restaurant and explore Tirana city. I had a great time talking with them and enjoying their company. Friends on street

Day 9-10, Meeting More Friends Raju had a ticket to leave soon, so Chirag and I made a plan to visit Shkodër and the nearby Komani Lake for kayaking. We started our journey early in the morning by bus and reached Shkodër. There, we met new friends from the conference, Pavit and Abraham, who were already there. We had dinner together and enjoyed an ice cream treat from Chirag. Friends at dinner

Day 12, Kayaking And Saying Goodbye To Friends The next day, Pavit and Abraham had a flight back to India, so Chirag and I went to Komani Lake. We had an adventurous time kayaking, even though neither of us knew how to swim. We took a ferry through the backwaters to the island in Komani Lake and enjoyed a fantastic adventure together. After our trip, Chirag returned to Tirana for his flight back to India, leaving me to continue my journey alone. Lake with mountain Kayak

Day 13, Climbing Rozafa Castle Stopping in Shkodër, I visited Rozafa Castle. Despite the language barrier, as most locals only spoke Albanian, people around me guided me correctly on how to get there. At times, I used applications like Google Translate to communicate. To read signs or hotel menus, I used Google Photos' language converter. I even used the audio converter to understand and speak some basic Albanian phrases. View from the top of the castle Rozafa Castle I took a bus from Shkodër to the southern part of Albania, heading to Sarandë. The journey lasted about five to six hours, and I had booked a stay at Mona's Hostel. Upon arrival, I met Eliza from America, and we went together to Ksamil Beach, spending a wonderful day there.

Day 14, Vlora Beach: Beachside Cycling Next, I traveled to Vlorë, where I stayed for one day. During my time there, I enjoyed beachside cycling on a bicycle provided by the hostel owner and spent some time feeding fish. I also met a fellow traveler from Delhi who had brought along some preserved Indian curry. He kindly shared it with me, which was a welcome change after nearly 15 days without authentic Indian cuisine, except for what I had cooked myself in various hostels. Sunset on beach Kalyani on beach Beach with street Beachside cycling

Day 15-16, Visiting Durrës, Travelling Back To Tirana I then visited Durrës, exploring its beautiful beaches, before heading back to Tirana one day before my flight home. On the day of my flight, my alarm didn't go off, and I woke up late at the hostel. In a frantic rush, I packed everything in just five minutes and dashed toward the city center to catch the bus to the airport. If I had been just five minutes later, I would have missed the bus. Thankfully, I managed to stop it just in time and began my journey back home, reflecting on the incredible adventure I had experienced. Fortunately, I wasn't late; I arrived at the airport just in time. After clearing immigration, I boarded my flight, which had a layover in Warsaw, Poland. The journey from Tirana to Warsaw took about two and a half hours, followed by a seven to eight-hour flight from Poland back to India. Once I arrived in Delhi, I returned to Zostel and booked a train ticket to Aurangabad for the next three days.

Backview This trip was an incredible adventure for me. I never imagined I could accomplish something like this, but I did. Meeting diverse people, experiencing different cultures, and learning so much made this journey truly unforgettable. Looking back, I realize how much I've grown from this experience. Although I may have more opportunities to travel abroad in the future, this trip will always hold a special place in my heart. The memories I made and the incredible people I met along the way are irreplaceable. This experience goes beyond what I can express through this blog or words; it was incredibly precious to me. Every moment of this journey is etched in my memory, and I am grateful for every part of it.

8 August 2024

Reproducible Builds: Reproducible Builds in July 2024

Welcome to the July 2024 report from the Reproducible Builds project! In our reports, we outline what we've been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds Summit 2024
  2. Pulling Linux up by its bootstraps
  3. Towards Idempotent Rebuilds?
  4. AROMA: Automatic Reproduction of Maven Artifacts
  5. Community updates
  6. Android Reproducible Builds at IzzyOnDroid with rbtlog
  7. Extending the Scalability, Flexibility and Responsiveness of Secure Software Update Systems
  8. Development news
  9. Website updates
  10. Upstream patches
  11. Reproducibility testing framework


Reproducible Builds Summit 2024 Last month, we were very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 17th to 19th 2024 in Hamburg, Germany. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.

Pulling Linux up by its bootstraps (LWN) In a recent edition of Linux Weekly News, Daroc Alden has written an article on bootstrappable builds. Starting with a brief introduction that
a bootstrappable build is one that builds existing software from scratch - for example, building GCC without relying on an existing copy of GCC. In 2023, the Guix project announced that the project had reduced the size of the binary bootstrap seed needed to build its operating system to just 357 bytes - not counting the Linux kernel required to run the build process.
The article goes on to describe how the live-bootstrap project has now gone a step further and removed the need for an existing kernel at all, and concludes:
The real benefit of bootstrappable builds comes from a few things. Like reproducible builds, they can make users more confident that the binary packages downloaded from a package mirror really do correspond to the open-source project whose source code they can inspect. Bootstrappable builds have also had positive effects on the complexity of building a Linux distribution from scratch [ ]. But most of all, bootstrappable builds are a boon to the longevity of our software ecosystem. It's easy for old software to become unbuildable. By having a well-known, self-contained chain of software that can build itself from a small seed, in a variety of environments, bootstrappable builds can help ensure that today's software is not lost, no matter where the open-source community goes from here.

Towards Idempotent Rebuilds? Trisquel developer Simon Josefsson wrote an interesting blog post comparing the output of the .deb files from our tests.reproducible-builds.org testing framework and the ones in the official Debian archive. Following up from a previous post on the reproducibility of Trisquel, Simon notes that typically "[the] rebuilds do not match the official packages, even when they say the package is reproducible". Simon correctly identifies that the purpose of [these] rebuilds is not to say anything about the official binary build; instead the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match. However, Simon's post swiftly moves on to announce a new tool called debdistrebuild that performs rebuilds of the difference between two distributions in a GitLab pipeline and displays diffoscope output for further analysis.

AROMA: Automatic Reproduction of Maven Artifacts Mehdi Keshani, Tudor-Gabriel Velican, Gideon Bot and Sebastian Proksch of the Delft University of Technology, Netherlands, have published a new paper in the ACM Software Engineering on a new tool to automatically reproduce Apache Maven artifacts:
Reproducible Central is an initiative that curates a list of reproducible Maven libraries, but the list is limited and challenging to maintain due to manual efforts. [We] investigate the feasibility of automatically finding the source code of a library from its Maven release and recovering information about the original release environment. Our tool, AROMA, can obtain this critical information from the artifact and the source repository through several heuristics and we use the results for reproduction attempts of Maven packages. Overall, our approach achieves an accuracy of up to 99.5% when compared field-by-field to the existing manual approach [and] we reveal that automatic reproducibility is feasible for 23.4% of the Maven packages using AROMA, and 8% of these packages are fully reproducible.

Community updates On our mailing list this month:
  • Nichita Morcotilo reached out to the community, first to share their efforts to build reproducible packages cross-platform with a new build tool called rattler-build, noting that "as you can imagine, building packages reproducibly on Windows is the hardest challenge (so far!)". Nichita goes on to mention that the Apple ecosystem appears to be using ZERO_AR_DATE over SOURCE_DATE_EPOCH (a short SOURCE_DATE_EPOCH sketch follows this list). [ ]
  • Roland Clobus announced that the Debian bookworm 12.6 live images are nearly reproducible , with more detail in the post itself and input in the thread from other contributors.
  • As reported in last month's report, Pol Dellaiera completed his master's thesis on Reproducibility in Software Engineering at the University of Mons, Belgium. This month, Pol announced this on the list with more background info. Since the master's thesis sources became available, the thesis has received some feedback and contributions. As a result, an updated version of the thesis has been published containing those community fixes.
  • Daniel Gräber asked for help in getting the Yosys documentation to build reproducibly, citing issues in, inter alia, the PDF generation causing differing CreationDate metadata values.
  • James Addison continued his long journey towards getting the Sphinx documentation generator to build reproducible documentation. In this thread, James concerns himself with the problem that even when SOURCE_DATE_EPOCH is configured, Sphinx projects that have configured their copyright notices using dynamic elements can produce nonsensical output under some circumstances. James' query ended up generating a number of replies.
  • Allen "gunner" Gunn posted a brief update on the progress the core team is making towards introducing a Code of Conduct (CoC) "such that it is in place in time for the RB Summit in Hamburg in September". In particular, gunner asks "if you are interested in helping with CoC design and development in the weeks ahead, simply email rb-core@lists.reproducible-builds.org and let us know". [ ]
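As a reminder of how SOURCE_DATE_EPOCH is typically wired into a build, here is a minimal sketch following the specification on reproducible-builds.org (the git invocation assumes the build runs from a git checkout, and ./build.sh is only a placeholder for the real build command):
# pin all embedded timestamps to the date of the last commit
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
./build.sh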

Android Reproducible Builds at IzzyOnDroid with rbtlog On our mailing list, Fay Stegerman announced a new Reproducible Builds collaboration in the Android ecosystem:
We are pleased to announce "Reproducible Builds, special client support and more in our repo": a collaboration between various independent interoperable projects: the IzzyOnDroid team, 3rd-party clients Droid-ify & Neo Store, and rbtlog (part of my collection of tools for Android Reproducible Builds) to bring Reproducible Builds to IzzyOnDroid and the wider Android ecosystem.

Extending the Scalability, Flexibility and Responsiveness of Secure Software Update Systems Congratulations to Marina Moore of the New York Tandon School of Engineering who has submitted her PhD thesis on Extending the Scalability, Flexibility and Responsiveness of Secure Software Update Systems. The introduction outlines its contributions to the field:
[S]oftware repositories are a vital component of software development and release, with packages downloaded both for direct use and to use as dependencies for other software. Further, when software is updated due to patched vulnerabilities or new features, it is vital that users are able to see and install this patched version of the software. However, this process of updating software can also be the source of attack. To address these attacks, secure software update systems have been proposed. However, these secure software update systems have seen barriers to widespread adoption. The Update Framework (TUF) was introduced in 2010 to address several attacks on software update systems including repository compromise, rollback attacks, and arbitrary software installation. Despite this, compromises continue to occur, with millions of users impacted by such compromises. My work has addressed substantial challenges to adoption of secure software update systems grounded in an understanding of practical concerns. Work with industry and academic communities provided opportunities to discover challenges, expand adoption, and raise awareness about secure software updates. [ ]

Development news In Debian this month, 12 reviews of Debian packages were added, 13 were updated and 6 were removed, adding to our knowledge about identified issues. A new toolchain issue type was identified as well, specifically ordering_differences_in_pkg_info.
Colin Percival filed a bug against the LLVM compiler noting that building i386 binaries on the i386 architecture gives different results from building i386 binaries under amd64. The cause was narrowed down to x87 excess precision, which can result in slightly different register choices when the compiler is hosted on x86_64 or i386, and a fix was committed. [ ]
Fay Stegerman performed some in-depth research surrounding her apksigcopier tool, after some Android .apk files signed with the latest apksigner could no longer be verified as reproducible. Fay identified the issue as follows:
Since build-tools >= 35.0.0-rc1, backwards-incompatible changes to apksigner break apksigcopier as it now by default forcibly replaces existing alignment padding and changed the default page alignment from 4k to 16k (same as Android Gradle Plugin >= 8.3, so the latter is only an issue when using older AGP). [ ]
She documented multiple available workarounds and filed a bug in Google s issue tracker.
Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded version 272 and Mattia Rizzolo uploaded version 273 to Debian, and the following changes were made as well:
  • Chris Lamb:
    • Ensure that the convert utility is from ImageMagick version 6.x. The command-line interface has seemingly changed with the 7.x series of ImageMagick. [ ]
    • Factor out version detection in test_jpeg_image. [ ]
    • Correct the import of the identify_version method after a refactoring change in a previous commit. [ ]
    • Move away from using DSA OpenSSH keys in tests as support has been deprecated and removed in OpenSSH version 9.8p1. [ ]
    • Move to assert_diff in the test_openssh_pub_key package. [ ]
    • Update copyright years. [ ]
  • Mattia Rizzolo:
    • Add support for ffmpeg version 7.x which adds some extra context to the diff. [ ]
    • Rework the handling of OpenSSH testing of DSA keys if OpenSSH is strictly 9.7, and add an OpenSSH key test with an ed25519-format key. [ ][ ][ ]
    • Temporarily disable a few packages that are not available in Debian testing. [ ][ ]
    • Stop ignoring the results of Debian testing in the continuous integration system. [ ]
    • Adjust options in debian/source to make sure not to pack the Python sdist directory into the binary Debian package. [ ]
    • Adjust Lintian overrides. [ ]

Website updates There were a number of improvements made to our website this month, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In July, a number of changes were made by Holger Levsen, including:
  • Grant bremner access to the ionos7 node. [ ][ ]
  • Perform a dummy change to force update of all jobs. [ ][ ]
In addition, Vagrant Cascadian performed some necessary node maintenance of the underlying build hosts. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

4 August 2024

Matthias Klumpp: Freedesktop Specs Website Update

The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected on, to increase their app's target audience) a lot easier. Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can't be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later. So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was; I wanted to keep its scope somewhat narrow. DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website are also added to the index and rendered on the website now. Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but: If you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it! Thank you, and I hope you enjoy reading the specifications with better rendering and a more coherent look!
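For the curious, building a single DocBook document with DAPS looks roughly like this (a hedged sketch: DC-example stands for a DocBook "DC" configuration file and is not one of the actual spec sources):
# render a DocBook document to HTML using its DAPS "DC" file
daps -d DC-example html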

2 August 2024

Colin Watson: Free software activity in July 2024

My Debian contributions this month were all sponsored by Freexian. You can also support my work directly via Liberapay. OpenSSH At the start of the month, I uploaded a quick fix (via Salvatore Bonaccorso) for a regression from CVE-2006-5051, found by Qualys; this was because I expected it to take me a bit longer to merge OpenSSH 9.8, which had the full fix. This turned out to be a good guess: it took me until the last day of the month to get the merge done. OpenSSH 9.8 included some substantial changes to split the server into a listener binary and a per-session binary, which required some corresponding changes in the GSS-API key exchange patch. At this point I was very grateful for the GSS-API integration test contributed by Andreas Hasenack a little while ago, because otherwise I might very easily not have noticed my mistake: this patch adds some entries to the key exchange algorithm proposal, and on the server side I'd accidentally moved that to after the point where the proposal is sent to the client, which of course meant it didn't work at all. Even with a failing test, it took me quite a while to spot the problem, involving a lot of staring at strace output and comparing debug logs between versions. There are still some regressions to sort out, including a problem with socket activation, and problems in libssh2 and Twisted due to DSA now being disabled at compile-time. Speaking of DSA, I wrote a release note for this change, which is now merged. GCC 14 regressions I fixed a number of build failures with GCC 14, mostly in my older packages: grub (legacy), imaptool, kali, knews, and vigor. autopkgtest I contributed a change to allow maintaining Incus container and VM images in parallel. I use both of these regularly (containers are faster, but some tests need full machine isolation), and the build tools previously didn't handle that very well. I now have a script that just does this regularly to keep my images up to date (although for now I'm running this with PATH pointing to autopkgtest from git, since my change hasn't been released yet):
RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie
Python team I fixed dnsdiag's uninstallability in unstable, and contributed the fix upstream. I reverted python-tenacity to an earlier version due to regressions in a number of OpenStack packages, including octavia and ironic. (This seems to be due to #486 upstream.) I fixed a build failure in python3-simpletal due to Python 3.12 removing the old imp module. I added non-superficial autopkgtests to a number of packages, including httmock, py-macaroon-bakery, python-libnacl, six, and storm. I switched a number of packages to build using PEP 517 rather than calling setup.py directly, including alembic, constantly, hyperlink, isort, khard, python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working through the missing-prerequisite-for-pyproject-backend Lintian tag, but there's still lots to do.) I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup, python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and uncertainties to new upstream versions. In the process, I added myself to Uploaders for isort, since the previous primary uploader has retired. Other odds and ends I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and /etc/subgid in base-passwd, since the login package is no longer essential. I fixed a wireless-tools regression due to iproute2 dropping its (/usr)/sbin/ip compatibility symlink. I applied a suggestion by Petter Reinholdtsen to add AppStream metainfo to pcmciautils.
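Regarding the PEP 517 switch mentioned above: for a typical package this mostly amounts to build-depending on pybuild's pyproject plugin so that setup.py is no longer called directly. A hedged sketch of the relevant debian/control lines (the surrounding fields are omitted and the exact dependency list varies per package):
Build-Depends: debhelper-compat (= 13),
               dh-sequence-python3,
               pybuild-plugin-pyproject,
               python3-all,
               python3-setuptools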

28 July 2024

Vincent Bernat: Crafting endless AS paths in BGP

Combining BGP confederations and AS override can potentially create a BGP routing loop, resulting in an indefinitely expanding AS path. BGP confederation is a technique used to reduce the number of iBGP sessions and improve scalability in large autonomous systems (AS). It divides an AS into sub-ASes. Most eBGP rules apply between sub-ASes, except that next-hop, MED, and local preferences remain unchanged. The AS path length ignores contributions from confederation sub-ASes. BGP confederation is rarely used and BGP route reflection is typically preferred for scaling. AS override is a feature that allows a router to replace the ASN of a neighbor in the AS path of outgoing BGP routes with its own. It's useful when two distinct autonomous systems share the same ASN. However, it interferes with BGP's loop prevention mechanism and should be used cautiously. A safer alternative is the allowas-in directive.1 In the example below, we have four routers in a single confederation, each in its own sub-AS. R0 originates the 2001:db8::1/128 prefix. R1, R2, and R3 forward this prefix to the next router in the loop.
BGP routing loop involving 4 routers: R0 originates a prefix, R1, R2, R3 make it loop using next-hop-self and as-override
BGP routing loop using a confederation
The router configurations are available in a Git repository. They are running Cisco IOS XR. R2 uses the following configuration for BGP:
router bgp 64502
 bgp confederation peers
  64500
  64501
  64503
 !
 bgp confederation identifier 64496
 bgp router-id 1.0.0.2
 address-family ipv6 unicast
 !
 neighbor 2001:db8::2:0
  remote-as 64501
  description R1
  address-family ipv6 unicast
  !
 !
 neighbor 2001:db8::3:1
  remote-as 64503
  advertisement-interval 0
  description R3
  address-family ipv6 unicast
   next-hop-self
   as-override
  !
 !
!
The session with R3 uses both as-override and next-hop-self directives. The latter is only necessary to make the announced prefix valid, as there is no IGP in this example.2 Here's the sequence of events leading to an infinite AS path:
  1. R0 sends the prefix to R1 with AS path (64500).3
  2. R1 selects it as the best path, forwarding it to R2 with AS path (64501 64500).
  3. R2 selects it as the best path, forwarding it to R3 with AS path (64500 64501 64502).
  4. R3 selects it as the best path. It would forward it to R1 with AS path (64503 64502 64501 64500), but due to AS override, it substitutes R1's ASN with its own, forwarding it with AS path (64503 64502 64503 64500).
  5. R1 accepts the prefix, as its own ASN is not in the AS path. It compares this new prefix with the one from R0. Both (64500) and (64503 64502 64503 64500) have the same length because confederation sub-ASes don't contribute to AS path length. The first tie-breaker is the router ID. R0's router ID (1.0.0.4) is higher than R3's (1.0.0.3). The new prefix becomes the best path and is forwarded to R2 with AS path (64501 64503 64501 64503 64500).
  6. R2 receives the new prefix, replacing the old one. It selects it as the best path and forwards it to R3 with AS path (64502 64501 64502 64501 64502 64500).
  7. R3 receives the new prefix, replacing the old one. It selects it as the best path and forwards it to R0 with AS path (64503 64502 64503 64502 64503 64502 64500).
  8. R1 receives the new prefix, replacing the old one. Again, it competes with the prefix from R0, and again the new prefix wins due to the lower router ID. The prefix is forwarded to R2 with AS path (64501 64503 64501 64503 64501 64503 64501 64500).
A few iterations later, R1 views the looping prefix as follows:4
RP/0/RP0/CPU0:R1#show bgp ipv6 u 2001:db8::1/128 bestpath-compare
BGP routing table entry for 2001:db8::1/128
Last Modified: Jul 28 10:23:05.560 for 00:00:00
Paths: (2 available, best #2)
  Path #1: Received by speaker 0
  Not advertised to any peer
  (64500)
    2001:db8::1:0 from 2001:db8::1:0 (1.0.0.4), if-handle 0x00000000
      Origin IGP, metric 0, localpref 100, valid, confed-external
      Received Path ID 0, Local Path ID 0, version 0
      Higher router ID than best path (path #2)
  Path #2: Received by speaker 0
  Advertised IPv6 Unicast paths to peers (in unique update groups):
    2001:db8::2:1
  (64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64500)
    2001:db8::4:0 from 2001:db8::4:0 (1.0.0.3), if-handle 0x00000000
      Origin IGP, metric 0, localpref 100, valid, confed-external, best, group-best
      Received Path ID 0, Local Path ID 1, version 37
      best of AS 64503, Overall best
There's no upper bound for an AS path, but BGP messages have size limits (4096 bytes per RFC 4271 or 65535 bytes per RFC 8654). At some point, BGP updates can't be generated. On Cisco IOS XR, the BGP process crashes well before reaching this limit.5
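As a rough back-of-the-envelope figure, assuming 4-byte ASN encoding and generously setting aside some bytes for the fixed header, the other path attributes and the announced prefix, the 4096-byte limit caps the AS path at around a thousand ASNs:
# very rough upper bound on AS path length in a 4096-byte UPDATE:
# 4 bytes per ASN, ~100 bytes reserved for everything else
echo $(( (4096 - 100) / 4 ))    # => 999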
The main lessons from this tale are:

  1. When using BGP confederations with Cisco IOS XR, use allowconfedas-in instead; see the sketch after this list. It's available since IOS XR 7.11.
  2. Using BGP confederations is already inadvisable. If you don't use the same IGP for all sub-ASes, you're inviting trouble! However, the scenario described here is also possible with an IGP.
  3. When an AS path segment is composed of ASNs from a confederation, it is displayed between parentheses.
  4. By default, IOS XR paces eBGP updates. This is controlled by the advertisement-interval directive. Its default value is 30 seconds for eBGP peers (even in the same confederation). R1 and R2 set this value to 0, while R3 sets it to 2 seconds. This gives some time to watch the AS path grow.
  5. This is CSCwk15887. It only happens when using as-override on an AS path with an overly long AS_CONFED_SEQUENCE. This should be fixed around 24.3.1.
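As a hedged illustration of that safer alternative: instead of having the sending router rewrite the path with as-override, the receiving router explicitly accepts a bounded number of occurrences of its own (confederation) ASN, so the path can no longer grow without limit. A sketch only, on R1's session with R3 (addresses taken from the output above); whether allowconfedas-in accepts an occurrence count the way allowas-in does is an assumption, so check the IOS XR 7.11 documentation:
router bgp 64501
 neighbor 2001:db8::4:0
  remote-as 64503
  address-family ipv6 unicast
   allowconfedas-in 1
  !
 !
!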

22 July 2024

Martin-Éric Racine: dhcpcd replacing dhclient for Trixie... or perhaps networkd?

My work on overhauling dhcpcd as the prime replacement for ISC's discontinued DHCP client is done. The package has achieved stability, both upstream and at Debian. The only remaining points are bug #1038882 to swap the Priorities of isc-dhcp-client and dhcpcd-base in the repository's override, and swapping ifupdown's search order to put dhcpcd first. Meanwhile, ifupdown's de-facto maintainer prompted me to solicit opinions on which of the 4 ifupdown implementations should ship with a minimal installation for Trixie. This, in turn, re-opened the debate of what should be Debian's default network configuration framework (see the thread starting with this post). networkd Given how most Debian ports (except for Hurd) ship with systemd, which includes networkd as standard (albeit disabled), many people in the thread feel that this should become the default network configuration tool for minimal installations. As it happens, most of my hosts fit that scenario, so I figured that I would give another go at testing networkd on one host. I used the following minimalistic /etc/systemd/network/dhcp.network:
[Match]
Name=en* wl*
[Network]
DHCP=yes
This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update /etc/resolv.conf without installing resolvconf or systemd-resolved. However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):
  1. Temporary addresses are not enabled by default. Worse, the setting is ignored if it was enabled by sysctl during bootup. This is a major privacy issue. Adding IPv6PrivacyExtensions=yes to the above (see the sketch after this list) exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.
  2. Networkd uses EUI64 addresses by default. This is another major privacy issue, since EUI64 addresses are forensically traceable to the interface's MAC address. Worse, the setting is ignored if stable-privacy was enabled by sysctl during bootup. To top it all, networkd does stable-privacy using systemd's time-proven brain-dead approach of reinventing the wheel: instead of merely setting the kernel's address generation mode to 3 and letting it configure the secret address, it expects the secret address to be spelled out in the systemd unit.
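To make the first point concrete, here is a minimal sketch of the same dhcp.network with only the privacy-extensions setting added; it addresses temporary addresses, not the link-local or stable-privacy behaviour described above:
[Match]
Name=en* wl*
[Network]
DHCP=yes
# Enable RFC 4941 temporary (privacy) addresses for outgoing connections.
IPv6PrivacyExtensions=yes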
Conclusion: networkd works well enough for someone configuring an IPv4-only network from 20 years ago, but is utterly inadequate for IPv6 or dual-stack installations, doubly so on a software distribution that claims to care about privacy and network security.

21 July 2024

Russell Coker: SE Linux Policy for Dell Management

The recent issue of Windows security software killing computers has reminded me about the issue of management software for Dell systems. I wrote policy for the Dell management programs that extract information from iDRAC and store it in Linux. After the break I've pasted in the policy. It probably needs some changes for recent software; it was last tested on a PowerEdge T320 and prior to that was used on a PowerEdge R710, both of which are old hardware and use different management software to the recent hardware. One would hope that the recent software would be much better but usually such hope is in vain. I deliberately haven't submitted this for inclusion in the reference policy because it's for proprietary software and also it permits many operations that we would prefer not to permit. The policy is after the break because it's larger than you want on a Planet feed. But first I'll give a few selected lines that are bad in a noteworthy way:
  1. sys_admin means the ability to break everything
  2. dac_override means break Unix permissions
  3. mknod means a daemon creates devices due to a lack of udev configuration
  4. sys_rawio means someone didn't feel like writing a device driver; maintaining a device driver for DKMS is hard and getting a driver accepted upstream requires writing quality code, but in any case this is a bad sign.
  5. self:lockdown is being phased out, but used to mean bypassing some integrity protections, which would usually be related to sys_rawio or similar.
  6. dev_rx_raw_memory is bad: reading raw memory allows access to pretty much everything, and executing raw memory is something I can't imagine a good use for; the Reference Policy doesn't use this anywhere!
  7. dev_rw_generic_chr_files usually means a lack of udev configuration as udev should do that.
  8. storage_raw_write_fixed_disk shouldn t be needed for this sort of thing, it doesn t do anything that involves managing partitions.
Now, without network access or other obvious ways of remote control, this level of access, while excessive, isn't necessarily going to allow bad things to happen due to outside attack. But if there are bugs in the software there's nothing to stop it from giving the worst results.
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.
Here is dellsrvadmin.te:
policy_module(dellsrvadmin,1.0.0)
require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}
type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)
type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_exec_t)
type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)
domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)
allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };
allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;
modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)
allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;
kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)
# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)
corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)
logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };
allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability   sys_admin sys_rawio  ;
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;
kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)
corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)
files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)
logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)
Here is dellsrvadmin.fc:
/opt/dell/srvadmin/sbin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd        --        gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*        --        gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?                        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?        gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
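For anyone who wants to try this policy, the usual way to build and load such a module on a Debian system would be roughly the following, assuming the selinux-policy-dev tooling provides the development Makefile (paths and names may need adjusting):
# build dellsrvadmin.pp from dellsrvadmin.te and dellsrvadmin.fc
make -f /usr/share/selinux/devel/Makefile dellsrvadmin.pp
# install the module and relabel the files it covers
sudo semodule -i dellsrvadmin.pp
sudo restorecon -Rv /opt/dell/srvadmin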

18 July 2024

Enrico Zini: meson, includedir, and current directory

Suppose you have a meson project like this: meson.build:
project('example', 'cpp', version: '1.0', license : ' ', default_options: ['warning_level=everything', 'cpp_std=c++17'])
subdir('example')
example/meson.build:
test_example = executable('example-test', ['main.cc'])
example/string.h:
/* This file intentionally left empty */
example/main.cc:
#include <cstring>
int main(int argc,const char* argv[])
{
    std::string foo("foo");
    return 0;
}
This builds fine with autotools and cmake, but not meson:
$ meson setup builddir
The Meson build system
Version: 1.0.1
Source dir: /home/enrico/dev/deb/wobble-repr
Build dir: /home/enrico/dev/deb/wobble-repr/builddir
Build type: native build
Project name: example
Project version: 1.0
C++ compiler for the host machine: ccache c++ (gcc 12.2.0 "c++ (Debian 12.2.0-14) 12.2.0")
C++ linker for the host machine: c++ ld.bfd 2.40
Host machine cpu family: x86_64
Host machine cpu: x86_64
Build targets in project: 1
Found ninja-1.11.1 at /usr/bin/ninja
$ ninja -C builddir
ninja: Entering directory `builddir'
[1/2] Compiling C++ object example/example-test.p/main.cc.o
FAILED: example/example-test.p/main.cc.o
ccache c++ -Iexample/example-test.p -Iexample -I../example -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Wcast-qual -Wconversion -Wfloat-equal -Wformat=2 -Winline -Wmissing-declarations -Wredundant-decls -Wshadow -Wundef -Wuninitialized -Wwrite-strings -Wdisabled-optimization -Wpacked -Wpadded -Wmultichar -Wswitch-default -Wswitch-enum -Wunused-macros -Wmissing-include-dirs -Wunsafe-loop-optimizations -Wstack-protector -Wstrict-overflow=5 -Warray-bounds=2 -Wlogical-op -Wstrict-aliasing=3 -Wvla -Wdouble-promotion -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wsuggest-attribute=pure -Wtrampolines -Wvector-operation-performance -Wsuggest-attribute=format -Wdate-time -Wformat-signedness -Wnormalized=nfc -Wduplicated-cond -Wnull-dereference -Wshift-negative-value -Wshift-overflow=2 -Wunused-const-variable=2 -Walloca -Walloc-zero -Wformat-overflow=2 -Wformat-truncation=2 -Wstringop-overflow=3 -Wduplicated-branches -Wattribute-alias=2 -Wcast-align=strict -Wsuggest-attribute=cold -Wsuggest-attribute=malloc -Wanalyzer-too-complex -Warith-conversion -Wbidi-chars=ucn -Wopenacc-parallelism -Wtrivial-auto-var-init -Wctor-dtor-privacy -Weffc++ -Wnon-virtual-dtor -Wold-style-cast -Woverloaded-virtual -Wsign-promo -Wstrict-null-sentinel -Wnoexcept -Wzero-as-null-pointer-constant -Wabi-tag -Wuseless-cast -Wconditionally-supported -Wsuggest-final-methods -Wsuggest-final-types -Wsuggest-override -Wmultiple-inheritance -Wplacement-new=2 -Wvirtual-inheritance -Waligned-new=all -Wnoexcept-type -Wregister -Wcatch-value=3 -Wextra-semi -Wdeprecated-copy-dtor -Wredundant-move -Wcomma-subscript -Wmismatched-tags -Wredundant-tags -Wvolatile -Wdeprecated-enum-enum-conversion -Wdeprecated-enum-float-conversion -Winvalid-imported-macros -std=c++17 -O0 -g -MD -MQ example/example-test.p/main.cc.o -MF example/example-test.p/main.cc.o.d -o example/example-test.p/main.cc.o -c ../example/main.cc
In file included from ../example/main.cc:1:
/usr/include/c++/12/cstring:77:11: error: 'memchr' has not been declared in '::'
   77 |   using ::memchr;
      |           ^~~~~~
/usr/include/c++/12/cstring:78:11: error: 'memcmp' has not been declared in '::'
   78 |   using ::memcmp;
      |           ^~~~~~
/usr/include/c++/12/cstring:79:11: error: 'memcpy' has not been declared in '::'
   79 |   using ::memcpy;
      |           ^~~~~~
/usr/include/c++/12/cstring:80:11: error: 'memmove' has not been declared in '::'
   80 |   using ::memmove;
      |           ^~~~~~~
 
It turns out that meson adds the current directory to the include path by default:
Another thing to note is that include_directories adds both the source directory and corresponding build directory to include path, so you don't have to care.
It seems that I have to care after all. Thankfully there is an implicit_include_directories setting that can turn this off if needed. Its documentation is not as easy to find as I'd like (kudos to Kangie on IRC), and hopefully this blog post will make it easier for me to find it in the future.
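For reference, a minimal sketch of what that would look like in the example above; only the implicit_include_directories keyword argument is new:
example/meson.build:
test_example = executable('example-test', ['main.cc'], implicit_include_directories: false)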

17 July 2024

Dirk Eddelbuettel: Rcpp 1.0.13 on CRAN: Some Updates

The Rcpp Core Team is once again pleased to announce a new release (now at 1.0.13) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was uploaded last week, but not only does Rcpp always get flagged because of the grandfathered .Call(symbol) but CRAN also found two packages regressing, which then required them to take five days to get back to us. One issue was known; another did not reproduce under our tests against over 2800 reverse dependencies, leading to the eventual release today. Yay. Checks are good and appreciated, and it does take time by humans to review them. This release continues with the six-month January-July cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo as well as the r-universe page and repo and strongly encourage their use and testing; I run my systems with these versions which tend to work just as well, and are also fully tested against all reverse-dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2867 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 256 in BioConductor. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 59.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 86.3 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1848 (JSS, 2011) and 324 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 641. This release is incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R leads to a few changes; Kevin did most of the PRs for this. Andrew Johnson also provided a very nice PR to update internals taking advantage of variadic templates. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!
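For readers who have not used Rcpp before, here is a tiny self-contained sketch (not taken from this release; the function name is just an example) of the kind of inline C++ it enables:

#include <Rcpp.h>

// [[Rcpp::export]]
double sumVec(Rcpp::NumericVector x) {
    // Sum the elements of an R numeric vector in C++.
    double total = 0.0;
    for (double v : x) total += v;
    return total;
}

From R, this can be compiled and called with Rcpp::sourceCpp("sum.cpp") followed by sumVec(c(1, 2, 3)), which returns 6.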

Changes in Rcpp release version 1.0.13 (2024-07-11)
  • Changes in Rcpp API:
    • Set R_NO_REMAP if not already defined (Dirk in #1296)
    • Add variadic templates to be used instead of generated code (Andrew Johnson in #1303)
    • Count variables were switched to size_t to avoid warnings about conversion-narrowing (Dirk in #1307)
    • Rcpp now avoids the usage of the (non-API) DATAPTR function when accessing the contents of Rcpp Vector objects where possible. (Kevin in #1310)
    • Rcpp now emits an R warning on out-of-bounds Vector accesses. This may become an error in a future Rcpp release. (Kevin in #1310)
    • Switch VECTOR_PTR and STRING_PTR to new API-compliant RO variants (Kevin in #1317 fixing #1316)
  • Changes in Rcpp Deployment:
    • Small updates to the CI test containers have been made (#1304)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues). If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kalyani Kenekar: Securing Your Website: Installing and Configuring Nginx with SSL


The Initial Encounter: I recently started to work with Nginx to explore how to configure a so-called server block. It's quite different from Apache. But there are tons of good websites out there which explain the different steps and options quite well. I also realized quickly that I need to be able to configure my Nginx setups so the content is delivered through https, with automatic redirection from http URLs.
  • Let's install Nginx

Installing Nginx
$ sudo apt update
$ sudo apt install nginx

Checking your Web Server
  • We can now check whether the nginx service is active or inactive:
$ sudo systemctl status nginx
Output
  nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-02-12 09:59:20 UTC; 3h ago
       Docs: man:nginx(8)
   Main PID: 2887 (nginx)
      Tasks: 2 (limit: 1132)
     Memory: 4.2M
        CPU: 81ms
     CGroup: /system.slice/nginx.service
              2887 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
              2890 nginx: worker process
  • Now we have successfully installed nginx and it is in a running state.

How To Secure Nginx with Let's Encrypt on Debian 12
  • In this documentation, you will use Certbot to obtain a free SSL certificate for Nginx on Debian 12 and set up your certificate.

Step 1 Installing Certbot
$ sudo apt install certbot python3-certbot-nginx
  • Certbot is now ready to use, but in order for it to automatically configure SSL for Nginx, we need to verify some of Nginx's configuration.

Step 2 Confirming Nginx's Configuration
  • Certbot needs to be able to find the correct server block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches the domain you request a certificate for. To check, open the configuration file for your domain using nano or your favorite text editor.
$ sudo vi /etc/nginx/sites-available/example.com
server {
    listen 80;
    root /var/www/html/;
    index index.html;
    server_name example.com;
    location / {
        try_files $uri $uri/ =404;
    }
    location /test.html {
        try_files $uri $uri/ =404;
        auth_basic "admin area";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
  • Fill in the above data to match your project, then save the file, quit your editor, and verify the syntax of your configuration edits.
$ sudo nginx -t

Step 3 Obtaining an SSL Certificate
  • Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following command line.
$ sudo certbot --nginx -d example.com
  • The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored.
Output
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/example.com/privkey.pem
   Your cert will expire on 2024-05-12. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
  • Your certificates are downloaded, installed, and loaded. Next, check the syntax of your configuration again.
$ sudo nginx -t
  • If you get an error, reopen the server block file and check for any typos or missing characters. Once your configuration file's syntax is correct, reload Nginx to load the new configuration.
$ sudo systemctl reload nginx
  • Try reloading your website using https:// and notice your browser's security indicator. It should indicate that the site is properly secured, usually with a lock icon.
Now your website is secured by SSL usage.
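As an optional extra check that is not part of the steps above, Certbot's renewal setup can be exercised in advance with a dry run:
$ sudo certbot renew --dry-run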

12 July 2024

Reproducible Builds: Reproducible Builds in June 2024

Welcome to the June 2024 report from the Reproducible Builds project! In our reports, we outline what we've been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Next Reproducible Builds Summit dates announced
  2. GNU Guix patch review session for reproducibility
  3. New reproducibility-related academic papers
  4. Misc development news
  5. Website updates
  6. Reproducibility testing framework


Next Reproducible Builds Summit dates announced We are very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 17th to 19th 2024 in Hamburg, Germany. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. If you're interested in joining us this year, please make sure to read the event page which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.

GNU Guix patch review session for reproducibility Vagrant Cascadian will be holding a Reproducible Builds session as part of the monthly Guix patch review series on July 11th at 17:00 UTC. These online events are intended to encourage everyone to become a patch reviewer, and the goal of reviewing patches is to help the Guix project accept contributions while maintaining our quality standards and learning how to do patch reviews together in a friendly hacking session.

Development news In Debian this month, 4 reviews of Debian packages were added, 11 were updated and 14 were removed, adding to our knowledge about identified issues. Only one issue type was updated, though, explaining that we don't vary the build path anymore.
On our mailing list this month, Bernhard M. Wiedemann wrote that whilst he had previously collected issues that introduce non-determinism, he has now moved on to discussing mitigations, in the sense of "how can we avoid whole categories of problem without patching an infinite number of individual packages". In addition, Janneke Nieuwenhuizen announced the release of two versions of GNU Mes.
In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.
In NixOS, with the 24.05 release out, it was again validated that our minimal ISO is reproducible by building it on a virtual machine with no access to the binary cache.
What's more, we continued to write patches in order to fix specific reproducibility issues, including Bernhard M. Wiedemann writing three patches (for qutebrowser, samba and systemd), Chris Lamb filing Debian bug #1074214 against the fastfetch package and Arnout Engelen proposing fixes to refind and for the Scala compiler.
Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded two versions (270 and 271) to Debian, and made the following changes as well:
  • Drop Build-Depends on liblz4-tool in order to fix Debian bug #1072575.
  • Update tests to support zipdetails version 4.004 that is shipped with Perl 5.40.

Website updates There were a number of improvements made to our website this month, including Akihiro Suda very helpfully making the <h4> elements more distinguishable from the <h3> level as well as adding a guide for Dockerfile reproducibility. In addition, Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:
  • Marking the virt(32|64)c-armhf nodes as down.
  • Granting a developer access to the osuosl4 node in order to debug a regression on the ppc64el architecture.
  • Granting a developer access to the osuosl4 node.
In addition, Mattia Rizzolo re-aligned the /etc/default/jenkins file with changes performed upstream and changed how configuration files are handled on the rb-mail1 host, whilst Vagrant Cascadian documented the failure of the virt32c and virt64c nodes after initial investigation.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Freexian Collaborators: Monthly report about Debian Long Term Support, June 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In June, 18 contributors have been paid to work on Debian LTS, their reports are available:
  • Adrian Bunk did 47.0h (out of 74.25h assigned and 11.75h from previous period), thus carrying over 39.0h to the next month.
  • Arturo Borrero Gonzalez did 6.0h (out of 6.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 15.5h (out of 16.0h assigned and 8.0h from previous period), thus carrying over 8.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 4.0h (out of 8.0h assigned and 2.0h from previous period), thus carrying over 6.0h to the next month.
  • Emilio Pozuelo Monfort did 23.25h (out of 49.5h assigned and 10.5h from previous period), thus carrying over 36.75h to the next month.
  • Guilhem Moulin did 4.5h (out of 13.0h assigned and 7.0h from previous period), thus carrying over 15.5h to the next month.
  • Lee Garrett did 17.0h (out of 25.0h assigned and 35.0h from previous period), thus carrying over 43.0h to the next month.
  • Lucas Kanashiro did 5.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 15.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 10.0h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 14.0h to the next month.
  • Roberto C. Sánchez did 5.25h (out of 7.75h assigned and 4.25h from previous period), thus carrying over 6.75h to the next month.
  • Santiago Ruano Rincón did 22.5h (out of 14.5h assigned and 8.0h from previous period).
  • Sean Whitton did 6.5h (out of 6.0h assigned and 0.5h from previous period).
  • Stefano Rivera did 0.5h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.5h to the next month.
  • Sylvain Beucler did 9.0h (out of 24.5h assigned and 35.5h from previous period), thus carrying over 51.0h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).

Evolution of the situation In June, we have released 31 DLAs. Notable security updates in June included:
  • git: multiple vulnerabilities, which may result in privilege escalation, denial of service, and arbitrary code execution
  • sendmail: SMTP smuggling allowed remote attackers to bypass SPF protection checks
  • cups: arbitrary remote code execution
Looking further afield to the broader Debian ecosystem, LTS contributor Bastien Roucariès also patched sendmail in Debian 12 (bookworm) and 11 (bullseye) in order to fix the previously mentioned SMTP smuggling vulnerability. Furthermore, LTS contributor Thorsten Alteholz provided fixes for the cups packages in Debian 12 (bookworm) and 11 (bullseye) in order to fix the aforementioned arbitrary remote code execution vulnerability. Additionally, LTS contributor Ben Hutchings has commenced work on an updated backport of Linux kernel 6.1 to Debian 11 (bullseye), in preparation for bullseye transitioning to the responsibility of the LTS team (and the associated closure of the bullseye-backports repository). LTS contributor Lucas Kanashiro also began the preparatory work of backporting parts of the rust/cargo toolchain to Debian 11 (bullseye) in order to make future updates of the clamav virus scanner possible. June was the final month of LTS for Debian 10 (as announced on the debian-lts-announce mailing list). No additional Debian 10 security updates will be made available on security.debian.org. However, Freexian and its team of paid Debian contributors will continue to maintain Debian 10 going forward for the customers of the Extended LTS offer. Subscribe right away if you still have Debian 10 which must be kept secure (and which cannot yet be upgraded).

Thanks to our sponsors Sponsors that joined recently are in bold.

10 July 2024

Russell Coker: Computer Advances in the Last Decade

I wrote a comment on a social media post where someone claimed that there s no computer advances in the last 12 years which got long so it s worth a blog post. In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging so all recent laptops support USB-C docks and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common and higher than 4K is cheap for some use cases such as ultra wide. 4K TVs are cheap and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning media hard drives are obsolete, SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap. 12 years ago smart phones were very limited and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. 5yo Android phones can run the latest apps and take high quality pics. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased their minimum security support for their phones and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3 released in 2012 which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than the S3). The Samsung Galaxy S4 released in 2013 was one of the first phones to have FullHD resolution which is high enough that most people can t easily recognise the benefits of higher resolution. It wasn t until 2015 that phones with 4G of RAM became common which is enough that for most phone use it s adequate today. Now that 16G of RAM is affordable in laptops running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more but we still aren t getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer. In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production, I was using it in production then and while I didn t lose any data from bugs I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use. DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4 which wasn t particularly expensive while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available to again increase performance and size while also making DDR4 cheap on the second hand market. This isn t a comprehensive list of all advances in the computer industry over the last 12 years or so, it s just some things that seem particularly noteworthy to me. 
Please comment about what you think are the most noteworthy advances I didn't mention.

9 July 2024

Simon Josefsson: Towards Idempotent Rebuilds?

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel comes from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-build.org s effort to track reproducibility bugs in Debian (and other systems) is amazing. However as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-build.org s build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell of the situation, the purpose of their rebuilds are not to say anything about the official binary build, instead the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match. I have felt that something is lacking, and months have passed and I haven t found any project that address the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and display diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born and today I m happy to bless it as version 1.0 and to announces the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64, see the summary status page for links. This is intended as a proof of concept, to allow people experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command. Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages are fairly low, so I didn t want to waste resources building the rest of the archive until I understand if the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%. So what causes my rebuilds to be different from the official rebuilds? Some are trivial like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl s headers file. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs doesn t make sense, likely due to bugs in my build scripts, especially for Ubuntu which appears to strip translations and do other build variations that I don t do. In general, the classes of reproducibility problems are the expected. 
Some are assembler differences for GnuPG's gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same version of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences. The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an "idempotent rebuild". This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions. Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same version of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of the build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it begs the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually. While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or the source code is no longer publicly distributable. I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status. One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from a reproducible build (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.
Our notion of rebuildability appears thus to be complementary to reproducible-builds.org's definition and bootstrappable.org's definition. Each to their own devices, and Happy Hacking! Addendum about terminology: With "idempotent rebuild" I am talking about a rebuild of the entire operating system, applied to itself. Compare how you build the latest version of the GNU C Compiler: it first builds itself using whatever system compiler is available (often an earlier version of gcc), which we call step 1. Then step 2 is to build a copy of itself using the compiler built in step 1. The final step 3 is to build another copy of itself using the compiler from step 2. Debian, Ubuntu etc are at step 1 in this process right now. The output of step 2 and step 3 ought to be bit-by-bit identical, or something is wrong. The comparison between step 2 and 3 is what I refer to with an idempotent rebuild. Of course, most packages aren't a compiler that can compile itself. However, entire operating systems such as Trisquel, PureOS, Ubuntu or Debian are (hopefully) a self-contained system that ought to be able to rebuild itself to an identical copy. Or something is amiss. The reproducible build and bootstrappable build projects are about improving the quality of step 1. The property I am interested in is the identical rebuild and comparison in steps 2 and 3. I feel the word idempotent describes the property I'm interested in well, but I realize there may be better ways to describe this. Ideas welcome!
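As a small postscript, the build environment mentioned earlier (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container) boils down to something like the following sketch; hello is only an example package and deb-src entries are assumed to be enabled:
$ apt-get source hello
$ apt-get build-dep -y hello
$ cd hello-*/
$ dpkg-buildpackage -us -uc -b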

Russ Allbery: Review: Raising Steam

Review: Raising Steam, by Terry Pratchett
Series: Discworld #40
Publisher: Anchor Books
Copyright: 2013
Printing: October 2014
ISBN: 0-8041-6920-9
Format: Trade paperback
Pages: 365
Raising Steam is the 40th Discworld novel and the third Moist von Lipwig novel, following Making Money. This is not a good place to start reading the series. Dick Simnel is a tinkerer from a line of tinkerers. He has been obsessed with mastering the power of steam since the age of ten, when his father died in a steam accident. That pursuit took him deeper into mathematics and precision, calculations and experiments, until he built Iron Girder: Discworld's first steam-powered locomotive. His early funding came from some convenient family pirate treasure, but turning his prototype into something more will require significantly more resources. That is how he ends up in the office of Harry King, Ankh-Morpork's sanitation magnate. Simnel's steam locomotive has the potential to solve some obvious logistical problems, such as getting fish from the docks of Quirm to the streets of Ankh-Morpork before it stops being vaguely edible. That's not what makes railways catch fire, however. As soon as Iron Girder is huffing and puffing its way around King's compound, it becomes the most popular attraction in the city. People stand in line for hours to ride it over and over again for reasons that they cannot entirely explain. There is something wild and uncontrollable going on. Vetinari is not sure he likes wild and uncontrollable, but he knows the lap into which such problems can be dumped: Moist von Lipwig, who is already getting bored with being a figurehead for the city's banking system. The setup for Raising Steam reminds me more of Moving Pictures than the other Moist von Lipwig novels. Simnel himself is a relentlessly practical engineer, but the trains themselves have tapped some sort of primal magic. Unlike Moving Pictures, Pratchett doesn't provide an explicit fantasy explanation involving intruding powers from another world. It might have been a more interesting book if he had. Instead, this book expects the reader to believe there is something inherently appealing and fascinating about trains, without providing much logic or underlying justification. I think some readers will be willing to go along with this, and others (myself included) will be left wishing the story had more world-building and fewer exclamation points. That's not the real problem with this book, though. Sadly, its true downfall is that Pratchett's writing ability had almost completely collapsed by the time he wrote it. As mentioned in my review of Snuff, we're now well into the period where Pratchett was suffering the effects of early-onset Alzheimer's. In that book, his health issues mostly affected the dialogue near the end of the novel. In this book, published two years later, it's pervasive and much worse. Here's a typical passage from early in the book:
It is said that a soft answer turneth away wrath, but this assertion has a lot to do with hope and was now turning out to be patently inaccurate, since even a well-spoken and thoughtful soft answer could actually drive the wrong kind of person into a state of fury if wrath was what they had in mind, and that was the state the elderly dwarf was now enjoying.
One of the best things about Discworld is Pratchett's ability to drop unexpected bits of wisdom in a sentence or two, or twist a verbal knife in an unexpected and surprising direction. Raising Steam still shows flashes of that ability, but it's buried in run-on sentences, drowned in cliches and repetition, and often left behind as the containing sentence meanders off into the weeds and sputters to a confused halt. The idea is still there; the delivery, sadly, is not. This is the first Discworld novel that I found mentally taxing to read. Sentences are often so overpacked that they require real effort to untangle, and the untangled meaning rarely feels worth the effort. The individual voice of the characters is almost gone. Vetinari's monologues, rather than being a rare event with dangerous layers, are frequent, rambling, and indecisive, often sounding like an entirely different character than the Vetinari we know. The constant repetition of the name any given character is speaking to was impossible for me to ignore. And the momentum of the story feels wrong; rather than constructing the events of the story in a way that sweeps the reader along, it felt like Pratchett was constantly pushing, trying to convince the reader that trains were the most exciting thing to ever happen to Discworld. The bones of a good story are here, including further development of dwarf politics from The Fifth Elephant and Thud! and the further fallout of the events of Snuff. There are also glimmers of Pratchett's typically sharp observations and turns of phrase that could have been unearthed and polished. But at the very least this book needed way more editing and a lot of rewriting. I suspect it could have dropped thirty pages just by tightening the dialogue and removing some of the repetition. I'm afraid I did not enjoy this. I am a bit of a hard sell for the magic fascination of trains I love trains, but my model railroad days are behind me and I'm now more interested in them as part of urban transportation policy. Previous Discworld books on technology and social systems did more of the work of drawing the reader in, providing character hooks and additional complexity, and building a firmer foundation than "trains are awesome." The main problem, though, was the quality of the writing, particularly when compared to the previous novels with the same characters. I dragged myself through this book out of a sense of completionism and obligation, and was relieved when I finished it. This is the first Discworld novel that I don't recommend. I think the only reason to read it is if you want to have read all of Discworld. Otherwise, consider stopping with Snuff and letting it be the send-off for the Ankh-Morpork characters. Followed by The Shepherd's Crown, a Tiffany Aching story and the last Discworld novel. Rating: 3 out of 10

8 July 2024

Thorsten Alteholz: My Debian Activities in June 2024

FTP master This month I accepted 270 and rejected 23 packages. The overall number of packages that got accepted was 279.

Debian LTS This was the hundred-twentieth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: This month handling of the CVE of cups was a bit messy. After lifting the embargo of the CVE, a published patch did not work with all possible combinations of the configuration. In other words, in cases of having only one local domain socket configured, the cupsd did not start and failed with a strange error. Anyway, upstream published a new set of patches, which made cups work again. Unfortunately this happened just before the latest point release for Bullseye and Bookworm, so that the new packages did not make it into the release, but stopped in the corresponding p-u-queues: stable-p-u and old-p-u. I also continued to work on tiff and last but not least did a week of FD and attended the monthly LTS/ELTS meeting. Debian ELTS This month was the seventy-first ELTS month. During my allocated time I tried to upload a new version of cups for Jessie and Stretch. Unfortunately this was stopped due to an autopkgtest error, which I have not been able to reproduce yet. I also wanted to finally upload a fixed version of exim4. Unfortunately this was stopped due to lots of CI jobs for Buster. Updates for Buster are now also available from ELTS, so some stuff had to be prepared before the actual switch at the end of June. Additionally, everything was delayed due to a crash of the CI worker. All in all this month was rather ill-fated. At least the exim4 upload will happen/already happened in July. I also continued to work on an update for libvirt, did a week of FD and attended the LTS/ELTS meeting. Debian Printing This month I uploaded new upstream or bugfix versions of: This work is generously funded by Freexian! Debian Astro This month I uploaded a new upstream or bugfix version of: All of those uploads are somehow related to /usr-move. Debian IoT This month I uploaded new upstream or bugfix versions of: Debian Mobcom The following packages have been prepared by the GSoC student Nathan: misc This month I uploaded new upstream or bugfix versions of: Here as well all uploads are somehow related to /usr-move

7 July 2024

Niels Thykier: Improving packaging file detection in Debian

Debian packaging consists of a directory (debian/) containing a number of "hard-coded" filenames such as debian/control, debian/changelog, debian/copyright. In addition to these, many packages will also use a number of optional files that are named via a pattern such as debian/PACKAGE.install. At a high level the patterns look deceptively simple. However, if you start working on trying to automatically classify files in debian/ (which could be helpful to tell the user they have a typo in the filename), you will quickly realize these patterns are not machine friendly at all for this purpose.
The patterns deconstructed To appreciate the problem fully, here is a primer on the pattern and all its issues. If you are already well-versed in these, you might want to skip the section. The most common patterns are debian/package.stem or debian/stem and usually the go-to example is the install stem (a concrete example being debian/debhelper.install). However, the full pattern consists of 4 parts where 3 of them are optional.
  • The package name followed by a period. Optional, but must be the first if present.
  • The name segment followed by a period. Optional, but must appear between the package name (if present) and the stem. If the package name is not present, then the name segment must be first.
  • The stem. Mandatory.
  • An architecture restriction prefixed by a period. Optional, must appear after the stem if present.
To visualize it with [foo] to mark optional parts, it looks like debian/[PACKAGE.][NAME.]STEM[.ARCH]. Detecting whether a given file is in fact a packaging file now boils down to reverse engineering its name against this pattern. Again, so far, it might still look manageable. One major complication is that every part (except ARCH) can contain periods. So a trivial "split by period" is not going to cut it. As an example:
debian/g++-3.0.user.service
This example is deliberately crafted to be ambiguous and show this problem in its full glory. This file name can be parsed in multiple ways:
  • Is the stem service or user.service? (both are known stems from dh_installsystemd and dh_installsystemduser respectively). In fact, it can be both at the same time with "clever" usage of --name=user passed to dh_installsystemd.
  • The g++-3.0 can be a package prefix or part of the name segment. Even if there is a g++-3.0 package in debian/control, then debhelper (until compat 15) will still happily match this file for the main package if you pass --name=g++-3.0 to the helper. Side bar: Woe is you if there is a g++-3 and a g++-3.0 package in debian/control, then we have multiple options for the package prefix! Though, I do not think that happens in practice.
Therefore, there are a lot of possible ways to split this filename that all matches the pattern but with vastly different meaning and consequences.
Making detection practical To make this detection practical, let's look at the first problems that we need to solve.
  1. We need the possible stems up front to have a chance at all. When multiple stems are an option, go for the longest match (that is, the one with most periods) since --name is rare and "code golfing" is even rarer.
  2. We can make the package prefix mandatory for files with the name segment. This way, the moment there is something before the stem, we know the package prefix will be part of it and can cut it. It does not solve the ambiguity if one package name is a prefix of another package name (from the same source), but it is still a lot better. This made its way into debhelper compat 15 and now it is "just" a slow long way to a better future.
A simple solution to the first problem could be to have a static list of known stems. That will get you started but the debhelper eco-system thrives on decentralization, so this feels like a mismatch. There is also a second problem with the static list. Namely, a given stem is only "valid" if the command in question is actually in use. Which means you now need to dumpster dive into the mess that is the Turing-complete debhelper configuration file known as debian/rules to fully solve that. Thanks to the Turing-completeness, we will never get a perfect solution for a static analysis. Instead, it is time to back out and apply some simplifications. Here is a sample flow:
  1. Check whether the dh sequencer is used. If so, use some heuristics to figure out which addons are used.
  2. Delegate to dh_assistant to figure out which commands will be used and which debhelper config file stems it knows about. Here we need to know which sequences are in use from step one (if relevant). Combine this with any other sources for stems you have.
  3. Deconstruct all files in debian/ against the stems and known package names from debian/control. In theory, dumpster diving after --name options would be helpful here, but personally I skipped that part as I want to keep my debian/rules parsing to an absolute minimum.
With this logic, you can now:
  • Provide typo detection of the stem (debian/foo.intsall -> debian/foo.install), provided you have adequate handling of the corner cases (such as debian/*.conf not needing correction into debian/*.config)
  • Detect possible invalid package prefix (debian/foo.install without foo being a package). Note this has to be a weak warning unless the package is using debhelper compat 15 or you dumpster dived to validate that dh_install was not passed --name foo. Agreed, no one should do that, but they can and false positives are the worst kind of positives for a linting tool.
  • With some limitations, detect files used without the relevant command being active. As an example, some integration modes of debputy remove dh_install, so a debian/foo.install would not be used.
  • Associate a given file with a given command to assist users with documentation lookup. As an example, debian/foo.user.service is related to dh_installsystemduser, so man dh_installsystemduser is a natural starting point for its documentation.
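For the stem typo detection specifically, a minimal sketch could use edit distance via the python3-levenshtein module (which the demo below also installs). Corner cases such as debian/*.conf vs. debian/*.config would need extra guards that are omitted here:

#!/usr/bin/python3
# Illustrative sketch only: flag files in debian/ that look like typos of known
# config file names. Real code needs extra guards for corner cases such as
# debian/*.conf, which must not be "corrected" into debian/*.config.
import os
import Levenshtein  # from the python3-levenshtein package

def expected_names(packages, stems):
    names = set(stems)  # bare stems are valid for the main package
    names.update(f"{pkg}.{stem}" for pkg in packages for stem in stems)
    return names

def likely_typos(packages, stems, max_distance=2):
    expected = expected_names(packages, stems)
    for entry in sorted(os.listdir("debian")):
        if entry in expected or os.path.isdir(os.path.join("debian", entry)):
            continue
        close = [c for c in expected if 0 < Levenshtein.distance(entry, c) <= max_distance]
        if len(close) == 1:  # skip ambiguous matches (cf. debian/docs vs. debian/dirs)
            yield entry, close[0]

for typo, suggestion in likely_typos({"debhelper"}, {"install", "docs", "dirs"}):
    print(f'The file "debian/{typo}" is likely a typo of "debian/{suggestion}"')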
I have added the logic for all these features in debputy, though the documentation association is currently not exposed in a user-facing command. All the others are now diagnostics emitted by debputy in its editor support mode (debputy lsp server) or via debputy lint. In the editor mode, the diagnostics are currently associated with the package name in debian/control due to technical limitations of how the editor integration works. Some of these features will require the latest version of debhelper (a moving target at times). Check with debputy lsp features for the Extra dh support feature, which will be enabled if you have all you need. Note: The detection currently (mostly) ignores files with architecture restrictions. That might be lifted in the future. However, architecture-restricted config files tend to be rare, so they were not a priority at this point. Additionally, debputy for technical reasons ignores stem typos with multiple matches. That sadly means that typos of debian/docs will often go unreported due to its proximity to debian/dirs and vice versa.
Diving a bit deeper on getting the stems To get the stems, debputy has 3 primary sources:
  1. Its own plugins can provide packager-provided files. These are only relevant if the package is using debputy.
  2. It is also possible to provide a debputy plugin that identifies packaging files (either static or named ones). Though in practice, we probably do not want people to roll their own debputy plugin for this purpose, since the detection only works if the plugin is installed. I have used this mechanism to have debhelper provide a debhelper-documentation plugin to enrich the auto-detected data, and we can assume most people interested in this feature would have debhelper installed.
  3. It asks dh_assistant list-guessed-dh-config-files for config files, which is covered below.
The dh_assistant command uses the same logic as dh to identify the active add-ons and loads them. From there, it scans all commands mentioned in the sequence for the PROMISE: DH NOOP WITHOUT ...-hint and a new INTROSPECTABLE: CONFIG-FILES ...-hint. When these hints reference a packaging file (as an example, via pkgfile(foo)), dh_assistant records it as a known packaging file for that helper. Additionally, debhelper now also tracks commands that were removed from the sequence. Several of the dh_assistant subcommands now use this to enrich their (JSON) output with notes about these commands being known but not active.
The end result With all of this work, you now get:
$ apt satisfy 'dh-debputy (>= 0.1.43~), debhelper (>= 13.16~), python3-lsprotocol, python3-levenshtein'
# For demo purposes, pull two known repos (feel free to use your own packages here)
$ git clone https://salsa.debian.org/debian/debhelper.git -b debian/13.16
$ git clone https://salsa.debian.org/debian/debputy.git -b debian/0.1.43
$ cd debhelper
$ mv debian/debhelper.install debian/debhelper.intsall
$ debputy lint
warning: File: debian/debhelper.intsall:1:0:1:0: The file "debian/debhelper.intsall" is likely a typo of "debian/debhelper.install"
    File-level diagnostic
$ mv debian/debhelper.intsall debian/debhleper.install
$ debputy lint
warning: File: debian/debhleper.install:1:0:1:0: Possible typo in "debian/debhleper.install". Consider renaming the file to "debian/debhelper.debhleper.install" or "debian/debhelper.install" if it is intended for debhelper
    File-level diagnostic
$ cd ../debputy
$ touch debian/install
$ debputy lint --no-warn-about-check-manifest
warning: File: debian/install:1:0:1:0: The file debian/install is related to a command that is not active in the dh sequence with the current addons
    File-level diagnostic
As mentioned, you also get these diagnostics in the editor via the debputy lsp server feature. Here the diagnostics appear in debian/control over the package name for technical reasons. The editor side still needs a bit more work. Notably, a change to a filename is not picked up automatically and will first be caught on the next change to debian/control. Likewise, changes to debian/rules to add --with to dh might also have some limitations depending on the editor. Saving both files and then triggering an edit of debian/control seems to work reliably, but ideally it should not be that involved. The debhelper side could also do with some work to remove the unnecessary support for the name segment for the many file stems that do not need it, and to announce that to debputy. Anyhow, it is still a vast improvement over the status quo that was "Why is my file silently ignored!?".
