Search Results: "paul"

2 January 2025

Paul Wise: FLOSS Activities December 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors The SWH work was sponsored. All other work was done on a volunteer basis.

9 December 2024

Paul Wise: FLOSS Activities November 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Communication
  • Respond to queries from Debian users and contributors on IRC

Sponsors The SWH work was sponsored. All other work was done on a volunteer basis.

8 December 2024

Paulo Henrique de Lima Santana: Bits from MiniDebConf Toulouse 2024

Intro I always find it amazing the opportunities I have thanks to my contributions to the Debian Project. I am happy to receive this recognition through the help I receive with travel to attend events in other countries. This year, two MiniDebConfs were scheduled for the second half of the year in Europe: the traditional edition in Cambridge, UK, and a new edition in Toulouse, France. After weighing the difficulties and advantages of attending each of them, I decided to choose Toulouse, mainly because it was cheaper and because it was in November, giving me more time to plan the trip. I contacted the current DPL, Andreas Tille, explaining my desire to attend the event, and he kindly approved my request for Debian to pay for the tickets. Thanks again to Andreas! MiniDebConf Toulouse 2024 was held on November 16th and 17th (Saturday and Sunday) and took place in one of the rooms of a traditional Free Software event in the city named Capitole du Libre. Before MiniDebConf, the team organized a MiniDebCamp on November 14th and 15th at a coworking space. The whole experience promised to be incredible, and it was! From visiting a city in France for the first time, to attending a local Free Software event, to sharing four days with people from the Debian community from various countries.

Travel and the city My plan was to leave Belo Horizonte on Monday, pass through São Paulo, and arrive in Toulouse on Tuesday night. I was going to spend the whole of Wednesday walking around the city and then participate in the MiniDebCamp on Thursday. But the flight that was supposed to leave São Paulo in the early hours of Monday to Tuesday was cancelled due to a problem with the airplane, and I spent all of Tuesday waiting. I was rebooked on another flight that left in the evening and arrived in Toulouse on Wednesday afternoon. Even though I was very tired from the trip, I still took advantage of the end of the day to walk around the city. But it was a shame to have lost an entire day of sightseeing. On Thursday I left early in the morning to walk around a little more before going to the MiniDebCamp venue. I walked a lot and saw several tourist attractions. The city is really very beautiful, as they say, especially the houses and buildings made of pink bricks. I was impressed by the narrow and winding streets; at one point it seemed like I was walking through a maze. I would arrive at a corner and find five streets crossing in different directions. The bank of the river that runs through the city is very beautiful, and people spend their evenings there just hanging out. There is a lot of history around there. I stayed in an Airbnb a 25-minute walk from the coworking space and only 10 minutes from the event venue. It was a very spacious apartment that was much cheaper than a hotel.

MiniDebCamp I arrived at the coworking space where the MiniDebCamp was being held and met up with several friends. I also met some new people and talked about the translation work we do in Brazil, among other topics. We already knew that the organization would pay for lunch for everyone during the two days of MiniDebCamp, and at a certain point they told us that we could go to the room (which was downstairs from the coworking space) to have lunch. They set up a table with quiches, breads, charcuterie and LOTS of cheese :-) There were several types of cheese and they were all very good. I just found it a little strange because I'm not used to having cheese for lunch, but the experience was amazing anyway :-)

In the evening, we went as a group to dinner at a restaurant in front of the Capitolium, the city's main tourist attraction. On the second day, in the morning, I walked around the city a bit more, then went to the coworking space and had another incredible cheese table for lunch.

Video Team One of my goals in going to Toulouse was to help the video team set up the equipment for broadcasting and recording the talks. I wanted to follow this work from the beginning and learn some details, something I can't do before the DebConfs because I always arrive after people have already set up the infrastructure, and later reproduce this work at the MiniDebConfs in Brazil, such as the one in Maceió that is already scheduled for May 1-4, 2025. As I had agreed with the people from the video team that I would help set up the equipment, on Friday night we went to the university and stayed in the room working. I asked several questions about what they were doing and about the equipment, and I was able to clear up several doubts. Over the next two days I operated one of the cameras during the talks. And on Sunday night I helped put everything away. Thanks to olasd, tumbleweed and ivodd for their guidance and patience.

The event in general There was also a meeting with some members of the publicity team who were there with the DPL. We went to a cafeteria and talked mainly about areas that could be improved in the team. The talks at MiniDebConf were very good and the recordings are also available here. I ended up not watching any of the talks from the general schedule at Capitole du Libre because they were in French. It's always great to see Free Software events abroad, to learn how they are done there and to bring some of those experiences to our events in Brazil. I hope that MiniDebConf in Toulouse will continue to take place every year, or that the French community will hold the next edition in another city so I can join again :-) If everything goes well, in July next year I will return to France for DebConf25 in Brest. More photos

30 November 2024

Russell Coker: Links November 2024

Interesting news about NVidia using RISC-V CPUs in all their GPUs [1]. Hopefully they will develop some fast RISC-V cores. Interesting blog post about using an 8K TV as a monitor; I'm very tempted to do this [2]. Interesting post about how Windows kernel development can't compete with Linux kernel development [3]. Paul T wrote an insightful article about the ideal of reducing complexity of computer systems and the question of from whose perspective complexity will be reduced [4]. Interesting lecture at the seL4 symposium about the PANCAKE language for verified systems programming [5]. The idea that if you are verifying your code, types don't help much is interesting. Interesting lecture from the seL4 summit about real world security, which starts with the big picture and ends with seL4 specifics [6]. Interesting lecture from the seL4 summit about Cog's work building a commercial virtualised phone [7]. He talks about not building "a brick of a smartphone that's obsolete 6 months after release"; is he referring to the Librem5? Informative document about how Qualcomm prevents OSs from accessing EL2 on Snapdragon devices, with a link to a work-around for devices shipped with Windows (not Android); this means that only Windows can use the hypervisor features of those CPUs [8]. Linus Tech Tips did a walk-through of an Intel fab; I learned a few things about CPU manufacture [9]. Interesting information on the amount of engineering that can go into a single component. There are lots of parts that are grossly overpriced (Dell and HP have plenty of examples in their catalogues) but generally aerospace doesn't have much overpricing [10]. Interesting lecture about TEE on RISC-V with the seL4 kernel [11]. Ian Jackson wrote an informative blog post about the repeating issue of software licenses that aren't free enough, with Rust being the current iteration of this issue [12]. The quackery of Master Bates to allegedly remove the need for glasses is still going around [13].

12 November 2024

Paul Tagliamonte: Complex for Whom?

In basically every engineering organization I've ever regarded as particularly high functioning, I've sat through one specific recurring conversation which is not a conversation about "complexity". Things are good or bad because they are or aren't complex, architectures need to be redone because they're too complex, some refactor of whatever it is won't work because it's too complex. You may have even been a part of some of these conversations, or even been the one advocating for simple, light-weight solutions. I've done it. Many times. Rarely, if ever, do we talk about complexity within its rightful context: complexity for whom. Is a solution complex because it's complex for the end user? Is it complex if it's complex for an API consumer? Is it complex if it's complex for the person maintaining the API service? Is it complex if it's complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I've come to believe, is fairly zero-sum: there's a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve on their own. That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely, that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I was informed that Fred Brooks coined a term for what I call lower bound complexity: Essential Complexity, in the paper "No Silver Bullet: Essence and Accident in Software Engineering", which is a better term and can be used interchangeably.

Complexity Culture In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization's problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn't usually as bad as it could be. When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it that someone needs a credible promotion packet, a new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I'd like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can't you make it less complex? Right around here is when I start to try to contextualize the conversation happening around me: understand what complexity it is that's being discussed, and understand who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call's optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you? Frequently it's right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved, or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it's something downstream consumers are likely to hit, it's best solved internal to the system, since the only things that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service Popoffs about how complex something is are, to a first approximation, best understood as meaning "complicated for the person making comments". A lot of the #thoughtleadership believe that an AWS-hosted EKS k8s cluster running images built by CI talking to an AWS-hosted PostgreSQL RDS is not complex. They're right. Mostly right. This is less complex: less complex for them. It's not, however, without complexity and its own tradeoffs; it's just complexity that they do not have to deal with. Now they don't have to maintain machines that have pesky operating systems or hard drive failures. They don't have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster. On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing-complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal-only endpoint; just port forward it), not to mention all sorts of rules to route packets to their project (a single repo's binary being run in 3 containers on a single vm host). Beyond that, there's the invisible complexity: complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request is processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don't even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had. What's more complex? An app running in an in-house 4u server racked in the office's telco closet in the back, running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of complexity can you manage effectively? Which threatens the system? Which threatens your users?

COMPLEXIVIBES This extends beyond engineering. Decisions regarding what tools we are able to use (be they existing contracts with cloud providers, CIO-mandated SaaS products, or a list of the only permissible open source projects) will incur costs in terms of expressed "complexity". Pinning open source projects to a fixed set makes SBOM production "less complex". Using only one SaaS provider's product suite (even if it's terrible, because it has all the types of tools you need) makes accreditation "less complex". If all you have is a contract with Pauly T's lowest price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is "less complex" for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won't, and you'll make it everyone else's problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor. Suddenly, the decision to reduce complexity because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With a large enough organization (specifically, in this case, I'm talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, less complex to the organization. It's particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this. I can't shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there's a decisionmaker optimizing for what they believe to be the least amount of complexity (least hassle, fewest unique cases, most consistency) that they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN'T REVIEW) We wish to rid ourselves of systemic complexity; after all, complexity is bad, simplicity is good. Removing upper-bound, own-goal complexity ("accidental complexity" in Brooks's terms) is important, but once you hit the lower-bound complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else (maybe outside your organization, or in a non-engineering function) it must grow back. Sometimes, the opposite is the case, such as when a previously manual business process is automated. Maybe that's a good idea. Maybe it's not. All I know is that what doesn't help the situation is conflating complexity with everything we don't like: legacy code, maintenance burden or toil, cost, delivery velocity.
  • Complexity is not the same as proclivity to failure. The most reliable systems I've interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs, which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself, choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.
Next time you're sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking: what does "complex" mean in this context? Is it lower-bound complexity? Is this complexity desirable? Does what they're saying mean something along the lines of "I don't understand the problems being solved", or does it mean something along the lines of "this problem should be solved elsewhere"? Do they believe this will result in more work for them in a way that you don't see? Should this not be solved at all, by changing the bounds of what we should accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who's taking this complexity on, or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing? What can change? What should change?

1 November 2024

Paul Wise: FLOSS Activities October 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors All work was done on a volunteer basis.

7 September 2024

Paul Wise: FLOSS Activities August 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues
  • Regressions in adequate (1 2 3 4)

Review
  • Debian publicity: reviewed Debian birthday post

Administration
  • Debian servers: contributed to a review of current Debian Partners

Communication
  • Respond to queries from Debian users and contributors on IRC

Sponsors All work was done on a volunteer basis.

30 August 2024

Dirk Eddelbuettel: pkgKitten 0.2.4 on CRAN: Updates

A shiny new release 0.2.4 of pkgKitten arrived on CRAN earlier, and has also been uploaded to Debian. pkgKitten makes it simple to create new R packages via a simple function invocation. A wrapper kitten.r exists in the littler package to make it even easier. This release contains several improvements to the (optional) setup of the (wonderful) tinytest package, now supports the (now mandatory) Authors@R field, and polishes a few aspects around the package repository and continuous integration. The set of changes follows.

Changes in version 0.2.4 (2024-08-30)
  • The .Rbuildignore stanza now includes .github
  • The support of and usage illustrations of tinytest are much enhanced (Paul Hudor in #18 addressing #19 and #20)
  • The .gitignore file now includes C++ related files
  • Improvements and polish to badges and continuous integration
  • The DESCRIPTION file now contains an Authors@R entry

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo. Courtesy of CRANberries, there is also a diffstat report for the most recent release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 August 2024

Debian Brasil: Debian Day 2024 in Belém and Poços de Caldas - Brazil

by Paulo Henrique de Lima Santana (phls) Below are the links to the reports and news from Debian Day 2024 held in Belém and Poços de Caldas:

20 August 2024

Debian Brasil: Debian Day 2024 in Santa Maria - Brazil

by Andrew Gonçalves Debian Day in Santa Maria - RS 2024 was held after a 5-year hiatus. It took place on the morning of August 16, in the Blue Hall of the Franciscan University (UFN), with support from the Debian community and the Computing Practices Laboratory of UFN. The event was attended by students from all semesters of the Computer Science, Digital Games and Information Systems courses, and we had a coffee break where we had the opportunity to talk to the participants. Around 60 students attended a lecture introducing them to Free and Open Source Software and Linux, and were introduced to the Debian project, both the philosophy of the project and how it happens in practice, as well as the opportunities that have opened up for participants by being part of Debian. After the talk, a packaging demonstration was given by local DD Francisco Vilmar, who showed in practice how software packaging works in Debian. I would like to thank all the people who helped us, and thanks to all the participants who attended this event, asking intriguing questions and taking an interest in the world of Free Software. Photos: DD em Santa Maria 1, 2, 3, 4.

18 August 2024

Debian Brasil: Debian Day 2024 in Pouso Alegre - Brazil

by Thiago Pezzo and Giovani Ferreira Local celebrations of Debian Day 2024 also happened in Pouso Alegre, MG, Brazil (https://www.openstreetmap.org/relation/315431). This year we managed to organize two days of lectures! On August 14th, 2024, Wednesday morning, we were at the Pouso Alegre campus of the Federal Institute of Education, Science and Technology of the South of Minas Gerais (IFSULDEMINAS, https://portal.ifsuldeminas.edu.br/index.php). We gave an introductory presentation on the Debian Project, the operating system and the community, for all three years of the Technical Course in Informatics (professional high school). The event was closed to IFSULDEMINAS students, and around 60 people attended. On August 17th, 2024, a Saturday morning, we held an event open to the community at the University of the Sapucaí Valley (Univás), with institutional support from the Information Systems Course. We spoke about the Debian Project with Giovani Ferreira (Debian Developer); about the Debian pt_BR translation team with Thiago Pezzo; about everyday experiences using free software with Virgínia Cardoso; and about how to set up a development environment ready for production using Debian and Docker with Marcos Antônio dos Santos. After the lectures, snacks, coffee and cake were served, while the participants talked, asked questions and shared experiences. We would like to thank all the people who have helped us. Some pictures from Pouso Alegre: Presentation at IFSULDEMINAS Pouso Alegre campus 1, 2; Presentation at UNIVÁS Fátima campus 1, 2, 3, 4.

3 August 2024

Paul Wise: FLOSS Activities July 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Issues
  • Unnecessary deps in zlint

Administration
  • Debian IRC: add an entry message redirecting folks from #debian-ci to #debci

Communication
  • Respond to queries from folks on Debian and other IRC channels.

Sponsors All work was done on a volunteer basis.

1 July 2024

Paul Wise: FLOSS Activities June 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Communication
  • Respond to queries from Debian users and contributors on IRC

Sponsors All work was done on a volunteer basis.

14 June 2024

Paul Tagliamonte: Reverse Engineering a Restaurant Pager system

It's been a while since I played with something new; I've been stuck in a bit of a rut with radios recently, working on refining and debugging stuff I mostly understand for the time being. The other day, I was out getting some food and I idly wondered how the restaurant pager system worked. Idle curiosity gave way to the realization that I, in fact, likely had the means and ability to answer this question, so I bought the first set of the most popular-looking restaurant pagers I could find on eBay, figuring it'd be a fun multi-week adventure.

Order up! I wound up buying a Retekess brand TD-158 Restaurant Pager System (they looked like ones I'd seen before and seemed to be low-cost and popular), and quickly after, had a pack of 10 pagers and a base station in hand. The manual stated that the radios operated at 433 MHz (cool! can do! Love a good ISM band device), and after taking an initial read through the manual for tips on the PHY, I picked out a few interesting things. First is that the base station ID was limited to 0-999, which is weird because it means the limiting factor is likely the base-10 display on the base station, not the protocol: we need enough bits to store 999, so at least 10 bits. Nothing else seemed to catch my eye, so I figured I may as well jump right to it. Not being the type to mess with success, I did exactly the same thing as I did in my Christmas tree post, and took a capture at 433.92 MHz since it was in the middle of the band, and immediately got déjà vu. Not only was the signal at 433.92 MHz, but throwing the packet into inspectrum gave me the identical plot of the OOK encoding scheme. Not just similar: identical. The only major difference was the baud rate and bit structure of the packets, and the only minor difference was the existence of what I think is a wakeup preamble packet (of all zeros), rather than a preamble symbol that lasted longer than the usual PHY symbol (which makes this pager system a bit easier to work with than my tree, IMHO). Getting down to work, I took some measurements of the symbol duration over the course of a few packets, and was able to determine the symbol rate was somewhere around 858 microseconds (0.000858 seconds per symbol), which is a weird number, but maybe I'm slightly off or there's some larger math I'm missing that makes this number satisfyingly round (internal low-cost crystal clock or something? I assume this is some hardware constraint with the pager?). Anyway, good enough. Moving along, let's try our hand at a demod: let's just assume it's all the same as the Christmas tree post and demod ones and zeros the same way here. That gives us 26 bits:
00001101110000001010001000
Now, I know we need at least 10 bits for the base station ID, some number of bits for the pager ID, and some bits for the command. This was a capture of me hitting "call" from a base station ID of 55 to a pager with the ID of 10, so let's blindly look for 10-bit chunks with the numbers we're looking for:
0000110111 0000001010 001000
Jeez. First try. 10 bits for the base station ID (55 in binary is 0000110111), 10 bits for the pager ID (10 in binary is 0000001010), which leaves us with 6 bits for a command (and maybe something else too?), which is 8 here. Great, cool, let's work off that being the case and revisit it if we hit bugs. Besides our data packet, there's also a preamble packet that I'll add in, in case it's used for signal detection or wakeup or something, which is fairly easy to do since it's the same packet structure as the above, just all zeros. Very kind of them to leave it with the same number of bits and encoding scheme; it's nice that it can live outside the PHY. Once I got here, I wrote a quick and dirty modulator, and was able to ring up pagers! Unmitigated success and good news; the only downside was that it took me a single night, and not the multi-week adventure I was looking for. Well, let's finish the job and document what we've found for the sake of completeness.
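The post doesn't include the "quick and dirty modulator" itself, so here is a minimal hypothetical sketch of the packet encoding as described above: 10 bits of base station ID, 10 bits of argument, 6 bits of command, most significant bit first. The function names and the naive one-amplitude-per-symbol OOK rendering are my own illustration (the real symbol encoding is the scheme borrowed from the Christmas tree post), not the author's code.

def encode_packet(base_id, argument, command):
    # Pack the three fields into the 26-bit string seen in the capture:
    # 10-bit base station ID, 10-bit argument, 6-bit command (MSB first).
    assert 0 <= base_id < 1024 and 0 <= argument < 1024 and 0 <= command < 64
    return f"{base_id:010b}{argument:010b}{command:06b}"

def ook_envelope(bits, sample_rate=250_000, symbol_seconds=0.000858):
    # Render each bit as carrier-on (1.0) or carrier-off (0.0) for one
    # ~858 microsecond symbol period; a real transmitter would mix this
    # envelope up to 433.92 MHz.
    per_symbol = int(sample_rate * symbol_seconds)
    samples = []
    for bit in bits:
        samples.extend([1.0 if bit == "1" else 0.0] * per_symbol)
    return samples

# "call" from base station 55 to pager 10 (command 8), matching the capture:
assert encode_packet(55, 10, 8) == "00001101110000001010001000"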

Boxing everything up My best guess on the packet structure is as follows:
  • base id (10 bits)
  • argument (10 bits)
  • command (6 bits)
For a call or F2 operation, the argument is the Pager's ID code, but for other commands it's a value or an enum, depending. Here's a table of my by-hand demodulation of all the packet types the base station produces:
Type | Cmd Id | Description
Call | 8      | Call the pager identified by the ID in argument
Off  | 60     | Request any pagers on the charger power off when power is removed; argument is all zero
F2   | 40     | Program a pager to the specified Pager ID (in argument) and base station
F3   | 44     | Set the reminder duration in seconds specified in argument
F4   | 48     | Set the pager's beep mode to the one in argument (0 is disabled, 1 is slow, 2 is medium, 3 is fast)
F5   | 52     | Set the pager's vibration mode to the one in argument (0 is disabled, 1 is enabled)
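Reading the table back the other way, here is a tiny hypothetical decoder under the same field-layout assumptions (the command values come from the table above; the names are mine, not part of the protocol):

COMMANDS = {8: "Call", 60: "Off", 40: "F2", 44: "F3", 48: "F4", 52: "F5"}

def decode_packet(bits):
    # Split a demodulated 26-bit packet into its three fields.
    assert len(bits) == 26
    base_id = int(bits[0:10], 2)
    argument = int(bits[10:20], 2)
    command = int(bits[20:26], 2)
    return base_id, argument, COMMANDS.get(command, "unknown")

print(decode_packet("00001101110000001010001000"))  # (55, 10, 'Call')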

Kitchen's closed for the night I'm not going to be publishing this code, since I can't think of a good use anyone would have for it besides folks using a low-cost SDR to annoy local restaurants; but there's enough here for folks who find this interesting to try modulating this protocol on their own hardware if they want to buy their own pack of pagers and give it a shot, which I do encourage! It's fun! Radios are great, and this is a good protocol to hack with; it's really nice. All in all, while this wasn't the multi-week adventure I was looking for, it was still a great exercise and a fun reminder that I've come a long way from when I started. It felt a lot like cheating since I was able to infer a lot about the PHY because I'd seen it before, but it was still a great time. I may grab a few more restaurant pagers and see if I can find one with a more exotic PHY to emulate next. I mean, why not, I've already got the thermal printer libraries working.

6 June 2024

Paul Wise: FLOSS Activities May 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Debian BTS usertags: changes for the month

Administration
  • Debian wiki: approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors All work was done on a volunteer basis.

12 May 2024

Daniel Lange: htop and PCP have a new home at Hack Club

After the unfortunate and somewhat surprising shutdown of the Open Collective Foundation (OCF), htop and Performance Co-Pilot (PCP) have migrated to Hack Club. Initially founded to improve STEM education and support high school computer science clubs, and firmly rooted in hacker culture, Hack Club has created a US IRS-approved 501(c)(3) charity that provides what Open Collective did/does¹ and more, at a flat 7% fee of the project income. Nathan Scott organized these moves with Paul Spitler. Many thanks! We considered other options for the projects, e.g. Gentoo has moved to Software in the Public Interest (SPI), and I know SPI quite well as it was created initially to host Debian. But PCP moved from SPI to OCF in 2021. Open Collective has a European branch that seems independent of the dissolved US foundation. But all-in-all Hack Club seemed the best fit. You can find the new fiscal sponsorship and donation landing pages at:
htop: https://hcb.hackclub.com/htop/ (donate: https://hcb.hackclub.com/donations/start/htop)
PCP: https://hcb.hackclub.com/pcp/ (donate: https://hcb.hackclub.com/donations/start/pcp)

  1. Open Collective as in the fancy "manage your project donations and reimbursements" website still continues to run but the foundation of the same name that provided the actual fiscal sponsorship (i.e. managing the funds) got dissolved. It's ... complicated.

2 May 2024

Paul Wise: FLOSS Activities April 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Administration
  • Debian IRC: updated #debian-dpl access list for new DPL
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors The SWH work was sponsored. All other work was done on a volunteer basis.

13 April 2024

Simon Josefsson: Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8 the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn't enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are reproducible on several architectures. What does that even mean? Why should you care? How can you do the same for your project? What are the open issues? Read on, dear reader. This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work. Let's look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz
The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened. The XZUtils incident illustrates that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not re-built through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs, although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more than you would like. Publishing minimal source-only tarballs enables easier auditing of a project's code, avoiding the need to read through all generated files looking for malicious content.

I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc. offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to similar backdoor scenarios as we had for XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since this attack can be mounted for some selected IP addresses that you want to target and not on everyone, thereby making it harder to discover. With all that discussion and rationale out of the way, let's return to the release process. I have added another step here:
make srcdist
gpg -b libntlm-1.8-src.tar.gz
Now the release is ready. I publish these four files in Libntlm's Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:
91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz
So how can you reproduce my artifacts? Here is how to reproduce them in a Ubuntu 22.04 container:
podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz
You should see the exact same SHA256 checksum values. Hooray! This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content for all versions, similar to how GNU GCC does not generate the same binary output for all versions. So there is still some delicate version pairing needed. Ideally, the artifacts should be possible to reproduce from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container (replace almalinux:8 with rockylinux:8 if you prefer RockyLinux):
podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz
The source-only minimal tarball can be regenerated on Debian 11:
podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 
As the magnum opus or chef-d'œuvre, let's recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 (replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer):
podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz
Yay! You should now have great confidence that the release artifacts correspond to what's in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts. I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I've adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab's exports, and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the v in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive name differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah's CGIT interface uses archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah's GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I've chosen to be compatible with GitLab regarding the content of tarballs but not on archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The version of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behaves the same. The version of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and the upcoming Ubuntu 24.04 behaves in another way. Hopefully this will not change that often, but this would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them would not match my tarballs, even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with version history for a package. In recent years, several projects moved to dynamically generating them from git history (using tools like git2cl or gitlog-to-changelog). This has consequences for reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm's ChangeLog file died on the surgery table here.

So how would a distribution build these minimal source-only tarballs? I happen to help on the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depends on Debian's gnulib package. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm's ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get git bundles to be reproducible but I never got it to work; see my notes in gnulib's debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don't see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We've done it for a long time through vendored code in non-minimal tarballs. Libntlm isn't the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern was used for other packages that use gnulib, such as coreutils, gzip, tar, bison etc. (all are using gnulib): then they would all Build-Depends on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quick. The dependency on gnulib is real, so I don't see that going away, and gnulib is an Architecture:all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don't require the git tool to extract the necessary files, but none that I found practical; ideas welcome!

Finally some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib's ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason why I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with the GNULIB_URL towards a git bundle and the GNULIB_REVISION in bootstrap.conf. The srcdist make rule is simple:
git archive --prefix=libntlm-v1.8/ -o libntlm-1.8-src.tar.gz HEAD
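The next paragraph explains the other half: making make dist reproducible by pinning every file's modification time to the last commit. As a purely illustrative sketch of that idea (Python standing in for whatever Libntlm's actual build rules do; the function name is mine):

import os
import subprocess

def normalize_mtimes(tree):
    # Committer date of the most recent commit, as a Unix timestamp.
    ts = int(subprocess.check_output(
        ["git", "log", "-1", "--format=%ct"], text=True).strip())
    # Stamp every file and directory in the dist tree with that timestamp,
    # so the resulting tarball does not depend on when the build ran.
    for dirpath, dirnames, filenames in os.walk(tree):
        for name in dirnames + filenames:
            os.utime(os.path.join(dirpath, name), (ts, ts))
    os.utime(tree, (ts, ts))

normalize_mtimes("libntlm-1.8")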
Making the make dist generated tarball reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this. Guix doesn't support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained what TZDB is using some time ago. The approach I'm using now is fairly similar to the one I suggested over a year ago. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented.

Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories use SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better, but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub do not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well; e.g., tar.gz can be a deterministically reproducible, persistent, stable offline storage format, but .git sub-directory trees or git bundles do not offer this property.

Doing continuous testing of all this is critical to make sure things don't regress. Libntlm's pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.

If this way of working plays out well, I hope to implement it in other projects too. What do you think? Happy Hacking!
