Search Results: "antonio"

28 December 2023

Antonio Terceiro: Debian CI: 10 years later

It was 2013, and I was on a break from work between Christmas and New Year. I had been working at Linaro for well over a year, on the LAVA project. I was living and breathing automated testing infrastructure, mostly for testing low-level components such as kernels and bootloaders, on real hardware. At that point I had also been a Debian contributor for quite a few years, and had become an official project member two years prior. Most of my involvement was in the Ruby team, where we were already consistently running upstream test suites during package builds. During that break, I put these two contexts together, and came to the conclusion that Debian needed a dedicated service that would test the contents of the Debian archive. I was aware of the existence of autopkgtest, and started working on a very simple service that would later become Debian CI. In January 2014, debci was initially announced on that month's Misc Developer News, and later uploaded to Debian. It has been continuously developed for the last 10 years: it evolved from a single shell script running tests in a loop into a distributed system with 47 geographically-distributed machines as of this writing, became part of the official Debian release process gating migrations to testing, had 5 Summer of Code and Outreachy interns working on it, and processed more than 40 million test runs. In these years, Debian CI has received contributions from a lot of people, but I would like to give special credit to the following:

25 August 2023

Debian Brasil: Debian Day 30 years online in Brazil

In 2023 the traditional Debian Day is being celebrated in a special way; after all, on August 16th Debian turned 30 years old! To celebrate this special milestone in Debian's life, the Debian Brasil community organized a week of online talks from August 14th to 18th. The event was named Debian 30 years. Two talks were held per night, from 7:00 pm to 10:00 pm, streamed on the Debian Brasil channel on YouTube, totaling 10 talks. The recordings are also available on the Debian Brasil channel on Peertube. We had the participation of 9 DDs, 1 DM and 3 contributors across the 10 activities. The live audience varied a lot, and the peak was at the preseed talk with Eriberto Mota, when we had 47 people watching. Thank you to all participants for the contribution you made to the success of our event. See below the photos from each activity:
  • Nova geração: uma entrevista com iniciantes no projeto Debian
  • Instalação personalizada e automatizada do Debian com preseed
  • Manipulando patches com git-buildpackage
  • debian.social: Socializando Debian do jeito Debian
  • Proxy reverso com WireGuard
  • Celebração dos 30 anos do Debian!
  • Instalando o Debian em disco criptografado com LUKS
  • O que a equipe de localização já conquistou nesses 30 anos
  • Debian - Projeto e Comunidade!
  • Design Gráfico e Software livre, o que fazer e por onde começar

12 July 2023

Freexian Collaborators: Debian Contributions: /usr-merge updates, DebConf Bursary prep, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-merge, by Helmut Grohne, et al.
The work on /usr-merge continues from May. The lengthy discussion was condensed into a still lengthy rewrite of DEP17 listing all known problems and proposed mitigations. An initial consensus call did not resolve all questions, but we now have rough consensus on finalizing the transition without relying on major changes to dpkg. Other questions still have diverging opinions, and some matters, such as how to not break backports, still lack satisfying answers.

DebConf Bursary prep, by Utkarsh Gupta
DebCamp and DebConf are happening from September 3rd to September 17th in Kochi, India, and the DebConf Bursary team is gearing up for that. After extending the bursary deadline (catering to the requests coming in from various people), we've finally managed to clock over 260 bursary requests. The team is set up and we're starting to review the applications. The team intends to roll out the results as soon as possible.

debci, by Helmut Grohne
As Freexian is working on deploying autopkgtests for the LTS and ELTS services, debci and autopkgtests were improved in Debian to better deal with derivatives (e.g. by better supporting external package signing keyrings). Other aspects that are not deployed on ci.debian.net, such as the qemu backend, were also improved. We express thanks to the relevant maintainers Antonio Terceiro, Paul Gevers and Simon McVittie for their timely reviews and merges of our changes.

Miscellaneous contributions
  • Following the release of Debian 12, Raphaël Hertzog updated tracker.debian.org to be aware of trixie. He also pushed some fixes to distro-tracker (the software powering tracker.debian.org) and released version 1.2.0 (since the former release was lacking fixes to run on Debian 12 bookworm).
  • Following the release of Debian 12, Helmut Grohne updated crossqa.debian.net systems. He also sent 7 patches for cross build failures and continued adapting rebootstrap to changes in unstable.
  • Santiago Ruano Rincón started to work on how to improve the robustness of Salsa CI's pipeline for some jobs that fail frequently.
  • Thorsten Alteholz did security updates of cpdb-libs in Unstable and Bookworm.
  • Stefano Rivera upgraded pixelfed.debian.social to bookworm.
  • Stefano started a re2 library transition, and began preparing for the next transition.
  • Helmut Grohne updated debvm in unstable releasing changes that accumulated during the freeze.
  • Stefano did some work on the website and infrastructure for DebConf 23.
  • Utkarsh Gupta helped review open redmine bugs and fixed them all in unstable.

27 April 2023

Arturo Borrero González: Kubecon and CloudNativeCon 2023 Europe summary

This post serves as a report from my attendance at Kubecon and CloudNativeCon 2023 Europe, which took place in Amsterdam in April 2023. It was my second time physically attending this conference; the first one was in Austin, Texas (USA) in 2017. I also attended once in a virtual fashion. The content here is mostly generated for the sake of my own recollection and learnings, and is written from the notes I took during the event.

The very first session was the opening keynote, which reunited the whole crowd to bootstrap the event and share the excitement about the days ahead. Some astonishing numbers were announced: there were more than 10,000 people attending, and apparently it could confidently be said that it was the largest open source technology conference taking place in Europe in recent times. It was also communicated that the next couple of iterations of the event will be held in China in September 2023 and Paris in March 2024. More numbers: the CNCF was hosting about 159 projects, involving 1,300 maintainers and about 200,000 contributors. The cloud-native community is ever-increasing, and there seems to be a strong trend in the industry toward cloud-native technology adoption and all things related to PaaS and IaaS.

The event program had different tracks, and in each one there was an interesting mix of low-level and higher-level talks for a variety of audiences. On many occasions I found that reading the talk title alone was not enough to know in advance if a talk was a 101 kind of thing or for experienced engineers. But unlike in previous editions, I didn't have the feeling that the purpose of the conference was to try selling me anything. Obviously, speakers would make sure to mention, or highlight in a subtle way, the involvement of a given company in a given solution or piece of the ecosystem. But it was non-invasive and fair enough for me.

On a different note, I found the breakout rooms to be often small. I think there were only a couple of rooms that could accommodate more than 500 people, which is a fairly small allowance for 10k attendees. I realized with frustration that the most interesting talks were immediately fully booked, with people waiting in line some 45 minutes before the session time. Because of this, I missed a few important sessions that I'll hopefully watch online later.

Finally, on a more technical side, I learned many things, which instead of grouping by session I'll group by topic, given how some subjects were mentioned in several talks.

On gitops and CI/CD pipelines
Most of the mentions went to FluxCD and ArgoCD. At that point there were no doubts that gitops was a mature approach and both flux and argoCD could do an excellent job. ArgoCD seemed a bit more over-engineered, aiming to be a more general-purpose CD pipeline, and flux felt a bit more tailored for simpler gitops setups. I discovered that both have nice web user interfaces that I wasn't previously familiar with. However, in two different talks I got the impression that the initial setup of them was simple, but migrating your current workflow to gitops could result in a bumpy ride. That is, the challenge is not deploying flux/argo itself, but moving everything into a state that both humans and flux/argo can understand. I also saw some curious mentions of the config drift that can happen in some cases, even though the goal of gitops is precisely for that to never happen. Such mentions were usually accompanied by some hints on how to handle the situation by hand.

Worth mentioning: I found no practical information about one of the key pieces of this whole gitops story: building container images. Most of the showcased scenarios were using pre-built container images, so in that sense they were simple. Building and pushing to an image registry is one of the two key points we would need to solve in Toolforge Kubernetes if adopting gitops. In general, even if gitops was already on our radar for Toolforge Kubernetes, I think it climbed a few steps in my priority list after the conference. Another learning was this site: https://opengitops.dev/.

On etcd, performance and resource management
I attended a talk focused on etcd performance tuning that was very encouraging. They were basically talking about the exact same problems we have had in Toolforge Kubernetes, like api-server and etcd failure modes, and how sensitive etcd is to disk latency, IO pressure and network throughput. Even though the Toolforge Kubernetes scale is small compared to other Kubernetes deployments out there, I found it very interesting to see others' approaches to the same set of challenges. I learned how most Kubernetes components and apps can overload the api-server, because even the api-server talks to itself. Simple things like kubectl may have a completely different impact on the API depending on usage, for example when listing the whole set of objects (very expensive) vs a single object. The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (full dumps being, by the way, the default when using bare kubectl get calls). I already knew some of this, and for example the jobs-framework-emailer was already making use of this ResourceVersion functionality. There have been a lot of improvements on the performance side of Kubernetes in recent times, or more specifically, in how resources are managed and used by the system. I saw a review of resource management from the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware scheduling decisions and dynamic resource claims (changing the pod resource claims without re-defining/re-starting the pods).

On cluster management, bootstrapping and multi-tenancy
I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers themselves. This was of interest to me because as of today we use it for Toolforge. They shared all the latest developments and improvements, and the plans and roadmap for the future, with a special mention of something they called the 'kubeadm operator', apparently capable of auto-upgrading the cluster, auto-renewing certificates and such. I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was the best, from the point of view of being a well-established and well-known workflow, plus having a very active contributor base. The kubeadm developers invited the audience to submit feature requests, so I did. The different talks confirmed that the basic unit for multi-tenancy in Kubernetes is the namespace; any serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression that multiclusters / multicloud are regarded as hard topics in the general community. I definitely would like to play with it sometime down the road.

On networking
I attended a couple of basic sessions that served really well to understand how Kubernetes instruments the network to achieve its goal. The conference program had sessions covering topics ranging from network debugging recommendations and CNI implementations to IPv6 support. Also, one of the keynote sessions had a reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was that Calico has massive community adoption (in Netfilter mode), which is reassuring, especially considering it is the one we use for Toolforge Kubernetes.

On jobs
I attended a couple of talks related to HPC/grid-like usages of Kubernetes. I was truly impressed by some folks out there who were using Kubernetes Jobs on massive scales, such as to train machine learning models and other fancy AI projects. It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some limitations that are now gone, or at least greatly improved. Some new functionalities have been added as well. Indexed Jobs, for example, enable each Job to have a number (index) and process a chunk of a larger batch of data based on that index. This would allow for full grid-like features like sequential (or again, indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: is that something we would like to enable in the Toolforge Jobs Framework?

On policy and security
A surprisingly good amount of sessions covered interesting topics related to policy and security. It was nice to learn two realities:
  1. Kubernetes is capable of doing pretty much anything security-wise and of creating greatly secured environments.
  2. It does not do so by default; the defaults are not security-strict on purpose.
It kind of made sense to me: Kubernetes is used for a wide range of use cases, and developers didn't know beforehand which particular setups the default security levels should accommodate. One session in particular covered the most basic security features that should be enabled for any Kubernetes system that would get exposed to random end users. In my opinion, the Toolforge Kubernetes setup was already doing a good job in that regard. To my joy, some sessions referred to the Pod Security Admission mechanism, which is one of the key security features we're about to adopt (when migrating away from Pod Security Policy). I also learned a bit more about Secret resources, their current implementation, and how to leverage a combo of CSI and RBAC for more secure usage of external secrets. Finally, one of the major takeaways from the conference was learning about kyverno and kubeaudit. I was previously aware of OPA Gatekeeper. From the several demos I saw, it was clear to me that kyverno should help us make Toolforge Kubernetes more sustainable by replacing all of our custom admission controllers with it. I already opened a ticket to track this idea, which I'll be proposing to my team soon.

Final notes
In general, I believe I learned many things, and perhaps even more importantly, I re-learned some stuff I had forgotten because of lack of daily exposure. I'm really happy that the cloud-native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. The videos have been published on YouTube.

16 January 2023

Gunnar Wolf: Back to Understanding Computers and Cognition

As many of you know, I work at UNAM, Mexico's largest university. My work is split into two parts: my full-time job is to be the systems and network administrator at the Economics Research Institute, and I do some hours of teaching at the Engineering Faculty. At the Institute, my role is academic, but although I have tried to frame my work in a way amenable to analysis grounded in the Social Sciences (Construcción Colaborativa del Conocimiento, Hecho con Creative Commons, Mecanismos de privacidad y anonimato), so far I have not taken part in academic collaboration with my coworkers; Economics is a field very far from my interests, to somehow illustrate it. I was very happy when I was invited to be a part of a Seminar on "The Digital Economy in the Age of Artificial Intelligence". I talked with the coordinator, and we agreed that we have many Economic Science experts, but understanding what Artificial Intelligence means eludes them, so I will be writing one of the introductory chapters to this analysis. But hey, I'm no expert in Artificial Intelligence. If anything, I could be categorized as an AI skeptic! Well, at least I might be the closest thing at hand in the Institute. So I have been thinking about what I will be writing, and finding and reading material to substantiate it. One of the readings I determined early on I would be going back to is Terry Winograd and Fernando Flores' 1986 book, Understanding Computers and Cognition: A New Foundation for Design (oh, were you expecting a link to buy it instead of reading it online?). I first came across this book by mere chance. Back on the last day of the year 2000, my friend Ariel invited me and my then-girlfriend to tag along and travel by land to the USA to catch some just-after-Christmas deals in San Antonio. I was not into shopping, but have always enjoyed road trips, so we went together. We went, yes, to the never-ending clothing shops, but we also went to some big bookstores. And the money I didn't spend at other shops, I spent there. And then some more. There was a little, somewhat oldish book that caught my eye. And I'll be honest: I looked at this book only because it was obviously produced by the LaTeX typesetting system (the basics of which I learnt in 1983, and with which I have written, well, basically everything substantial I've ever done). I remember I read this book with great interest back in that year, and finished it with a "Wow, that was a strange trip!" And although I have never done much that could be considered AI-related, this has always been my main reference. But not for explaining what a perceptron is, how an expert system is to ponder the weight of given information, whether a neural network is convolutional or recurrent, or how to turn a network trained to recognize feature x into a generative network. No, our book is not technical. Well... not in that sense. This book tackles cognition. But in order to discuss cognition, it must first come to a proper definition of it. And to do so, it has to base itself on philosophy, starting by noting the authors' disagreement with what they term the rationalistic tradition: what we have come to call valid reasoning in Western countries. Their main claim is that the rationalistic tradition cannot properly explain a process as complex as cognition (how much bolder can you be than proposing something like this?). So, this book presents many constructs of Heideggerian origin, aiming to explain what understanding and being are. In doing so, it follows Humberto Maturana's work. Maturana is also a philosopher, but comes from a background in biology; he published works on animal neurophysiology that are also presented here. Writing this, I must assure you I am not a philosopher, and lack the field-specific knowledge to know whether this book is truly unique. I know from the outset it does not directly help me write the chapter I will be writing (but it will surely help me write some important caveats that will make the chapter much more interesting and different from what anybody with a Web browser could write about artificial intelligence). One last note: although very well written, and notable for bringing hard-to-grasp concepts to mere technical staff such as myself, this is not light, easy reading. I started re-reading this book a couple of weeks ago, and have just finished chapter 5 (page 69). As some reviewers state, this is one of those books where you have to go back a paragraph or two over and over. But it is a most enjoyable and interesting read.

31 December 2022

Chris Lamb: Favourite films of 2022

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' fiction I enjoyed reading in 2022. But in this, the very last of my roundup posts, and in relatively less detail, I'll quickly sketch out the favourite movies that were new to me in 2022:

La Ronde (1950) An all-knowing narrator (Adolf Wohlbrück) guides us through a series of vignettes in 1900s Vienna: a soldier meets an eager young lady of the evening, and later he has an affair with a young lady who becomes a maid and who then does similarly with the young man of the house. On and on it goes, spinning on the carousel of life. A wonderfully beguiling movie, and a reminder that 'innocent' and 'charming' don't always imply 'childish'.

A Man Escaped (1956) A Man Escaped is the story of a WW2 resistance fighter who wishes to escape from a prison. Filmed in Robert Bresson's quasi-minimalistic style (and touching on his usual themes of Christian redemption), this deceptively simple-looking film rewards deeper analysis.

The Music Room (1958) In The Music Room, Satyajit Ray (best known for his Apu Trilogy) tells the story of a wealthy landlord who lives a decadent life with his wife and son. His passion is for music, however, and he spends a significant portion of his fortune on concerts held for the locals in his magnificent music room (or 'jalsaghar'). However, his obsession with music (and its attendant quest for respect from his peers) is his eventual undoing, as he sacrifices both his family and wealth whilst trying to retain it. Indeed, The Music Room is less an examination of a single rich man (or of the hypnotic sitar music by Vilayat Khan) than of the dying Bengali landowning class, which I read as symbolic of what was about to vanish from Indian culture in the post-Independence drive toward modernity. Still, this is a multi-layered and textured film, as it is also an intensely moving portrait of a highly detached man.

The Hustler (1961) 'Fast Eddie' Felson is a small-time pool hustler with a lot of talent but a self-destructive attitude. His bravado causes him to challenge the legendary Minnesota Fats to a high-stakes match. The stakes are always much bigger than just money though, as Fast Eddie's ego is on the line. Despite not caring so much about the game, I was completely gripped by Paul Newman's superb performance. Almost certainly the best film about pool ever made, and probably the best sports film as well.

Picnic at Hanging Rock (1975) One Valentine's Day in early-1900s Australia, an upper-class school takes its girls on a field trip to a scenic volcanic formation in the middle of the brush. Despite all the rules against it, however, several of the girls venture off onto the rock, and it's not until the end of the day that the group realises some of the girls and one of the teachers have disappeared without a trace. A mesmerising film about colonialism, sexual repression and Antonioni-esque alienation, Picnic at Hanging Rock's dreamlike aesthetic is haunting.

Jeanne Dielman, 23, quai du Commerce, 1080 Bruxelles (1976) Jeanne Dielman is a three-and-a-half-hour film that shows a widowed housewife doing her daily chores and taking care of her apartment. Yet in its uncompromising nature, it raises the tedium of her life to the level of profundity. I watched this radically political film a few months before it was voted the 'greatest of all time', but despite its relentlessly rigorous nature (and long running time), the conclusion, when it comes, is shattering.

Raise the Red Lantern (1991) Set in China during the 1920s, Raise the Red Lantern tells the story of a young woman who has become the new concubine of a wealthy man. However, as he has three wives already (each living in a separate house within his castle), she becomes engaged in an extremely complicated competition for attention and the privileges that come with it. Richly colourful, both in its visuals and symbolism, this is a film both specifically about China at the time and about human universals.

Moonlight (2016) A heartbreaking story of a young man's struggle to find himself, told across three defining chapters in his life as he experiences the pain and beauty of falling in love, whilst grappling with his own sexuality within the ambient weather of race and class in the United States. Absolutely spellbinding and visually stunning.

Drive My Car (2021) Yusuke Kafuku is a stage director who is unable to cope with the loss of his wife, and has therefore accepted an invitation to direct Anton Chekhov's [Uncle Vanya](https://en.wikipedia.org/wiki/Uncle_Vanya) at a festival in Hiroshima, Japan. Yet whilst many reviewers have focused on the compelling relationship between Yusuke and Misaki (the introverted young woman who has been appointed to drive his car), what I found devastating was the embedded meditation on language, communication, understanding and empathy, especially concerning what happens when these break down or are no longer possible. Drive My Car touches on the limits of each of these phenomena connecting to the next: being able to speak the same language as another doesn't mean you can communicate with them; and being able to communicate doesn't mean you can understand someone, let alone empathise with them. All of this through the near-ungraspable emotion of grief as well, and the portion of this film where the actress was silently employing Korean Sign Language was the most moving thing I saw this year.

8 December 2022

Louis-Philippe Véronneau: Debian Python Team 2022 Sprint Report

This is the report for the Debian Python Team remote sprint that took place on December 2-3-4 2022. Many thanks to those who participated, namely: Here is a list of issues we worked on:

pybuild autodep8 feature
About a year ago, Antonio Terceiro contributed code to pybuild to make it possible to automatically run the upstream test suite as autopkgtests. This feature has now been merged and uploaded to unstable. Although you can find out more about it in the pybuild-autopkgtest manpage, an email providing more details should be sent to the debian-python mailing list relatively soon.

Fixing packages that run tests via python3 setup.py test
Last August, Stefano Rivera poked the team about the deprecation of the python3 setup.py test command to run tests in pybuild. Although this feature has been deprecated upstream for 6 years now, many packages in the archive still use it to run the upstream test suite during build. Around 29 of the 67 packages that are team-maintained by the Debian Python Team were fixed during the sprint. Ideally, all of them would be fixed before the feature is removed from pybuild. If a package you maintain still runs this command, please consider fixing it!

Fixing packages that use nose
nose, provided by the python3-nose package, is an obsolete testing framework for Python and has been unmaintained since 2015. During the sprint, people worked on fixing some of the many bugs filed against packages still running tests via nose, but there are still around 240 packages affected by this issue in the archive. Again, if a package you maintain still runs tests this way, please consider fixing it!

Removal of the remaining Python2 packages
With the upload of dh-python 5.20221202, Stefano Rivera officially removed support for dh_python2 and dh_pypy, thus closing the "Python2 removal in sid/bullseye" bug. It seems some work still needs to be done for complete Python2 removal from Sid, but I expect this will be done in time for the Bookworm release.

Working on Lintian tags for the Team
During the sprint, I managed to work on some Lintian issues that we had targeted, namely: I also worked on a few other Lintian tags, but they were unrelated to the Debian Python Team itself. I'm also happy to report that many of the tags I wrote for the team in the past few months were merged by the awesome Russ Allbery and should land in unstable as soon as a new release is made. I'm particularly looking forward to the new "uses-python-distutils" tag that should help us flag packages that still use the deprecated distutils library.

Patching distro-tracker (tracker.debian.org) to show pending team MRs
It's often hard to have a good overview of pending merge requests when working with team-maintained packages, as by default, Salsa doesn't notify anyone when an MR is opened. Although our workflow typically does not involve creating merge requests, some people still do, and they end up sitting there, unnoticed. During the sprint, Kurt Kremitzki worked on solving this issue by having distro-tracker show the pending MRs on our team's tracker page. Sadly, it seems little progress was made, as the removal of python3-django-jsonfield from the archive and breaking changes in python3-selenium have broken the test suite.

Migrate packages building with the flit plugin to the generic pyproject one
pybuild has been supporting building with PEP-517 style pyproject.toml files via a generic plugin (pybuild-plugin-pyproject) for a while now. As this plugin supersedes the old flit plugin, we've been thinking of deprecating it in time for the Bookworm release. To make this possible, most of the packages in the archive that still used this plugin were migrated to the generic one, and I opened bugs on the last handful of packages that were not team-maintained.

Other work
Many other things were done during the sprint, such as:

Thanks
Thanks again to everyone who joined the sprint, and three big cheers for all the folks who donate to Debian and made it possible for us to have a food budget for the event.
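
As a quick reference for the pybuild autodep8 feature mentioned at the top: as far as I can tell, it is enabled with a Testsuite field in debian/control, and autodep8 then generates the actual test metadata. A minimal sketch, with a hypothetical source package name:

Source: python-example
Testsuite: autopkgtest-pkg-pybuild

With that single field in place, no hand-written debian/tests/control should be needed for the basic case.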

4 March 2022

Abiola Ajadi: Outreachy - And it's a wrap!

Outreachy Wrap-up
Project: Improve Debian Continuous Integration UX
Project Link: https://www.outreachy.org/outreachy-december-2021-internship-round/communities/debian/#improve-debian-continuous-integration-ux
Code Repository: https://salsa.debian.org/ci-team/debci
Mentors: Antonio Terceiro, Paul Gevers and Pavit Kaur

About the project Debci exists to make sure packages work correctly after an update. It does this by testing all of the packages that have tests written for them, to make sure nothing is broken. This project entails making improvements to the platform to make it easier to use and maintain.

Deliverables of the project:
  • Package landing page displaying pending jobs
  • web frontend: centralize job listings in a single template
  • self-service: request test form forgets values when validation fails
  • Improvement to status

Work done

Package landing page displaying pending jobs Previously, pending jobs were not displayed on the package page. Working on this, I added a feature to display pending jobs on the package landing page. This task also revealed that the same block of code was repeated in different files, which led to the next task. Merge request
web frontend: centralize job listings in a single template Jobs are listed on various pages such as status packages, status alerts, status failing, history, and so on. The same code was repeated on these pages to list the jobs, so I refactored it and created a single template for job listings that can be used anywhere it's needed. I also wrote a test for the feature I added.
Merge request
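The idea, in rough Sinatra/ERB terms, looks like the sketch below; the file and variable names here are hypothetical, not debci's actual code:

<%# views/_job_list.erb: a single shared partial for listing jobs %>
<table class="job-list">
  <% jobs.each do |job| %>
    <tr>
      <td><%= job.package %></td>
      <td><%= job.suite %></td>
      <td><%= job.arch %></td>
      <td><%= job.status %></td>
    </tr>
  <% end %>
</table>

Each page then renders the partial with its own collection, e.g. erb :'_job_list', locals: { jobs: failing_jobs }.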
self-service: request test form forgets values when validation fails When one tries to request a test and it fails with an error, the form originally did not remember the values that were typed into the package name, suite and other fields. This fix ensures the form remembers the values entered even when it throws an error. N.B.: The form checks all architectures on page load.
merge request
Improvement to status Originally the status pages were rendered as static HTML pages, but I converted these pages to be generated dynamically and wrote endpoints for each page. Since most of the status pages contain a list of jobs, I modified them to use the template I created for job listings. Previously, the status pages had a filtering mechanism (All, Latest 50, etc.) which wasn't paginated. I removed this mechanism, added filters by architecture and suite to these pages, and also added pagination. Last but not least, I wrote tests for the implementations carried out on the status pages. Merge requests:
first task
second task

Major take-aways I learnt a lot during my internship, but most importantly I learnt how to:
  • write tests in Ruby, and why writing tests is an important aspect of software development
  • maintain good coding practices; paying attention to commit messages, indentation, etc. are areas I developed in writing code
  • make contributions in the Ruby programming language.

Acknowledgement I cannot end this without saying thank you to my mentors Antonio Terceiro, Paul Gevers, and Pavit Kaur for their constant support and guidance throughout the entire duration of this internship. It has been a pleasure interacting with and learning from everyone.

Review Outreachy has helped me feel more confident about open source, especially during the application phase. I had to reach out to the community I was interested in and ask questions on how to get started. The informal chats week was awesome; I was able to build my network and have interesting conversations with amazing individuals in open source. To round up: always ask questions and do not be afraid of making a mistake. As one of the Outreachy blog post topics says, "Everyone struggles!", but never give up!

29 November 2021

Russ Allbery: Fall haul

It's been a while since I've posted one of these, and I also may have had a few moments of deciding to support authors by buying their books even if I'm not going to get a chance to read them soon. There's also a bit of work reading in here.

Ryka Aoki – Light from Uncommon Stars (sff)
Frederick R. Chromey – To Measure the Sky (non-fiction)
Neil Gaiman, et al. – Sandman: Overture (graphic novel)
Alix E. Harrow – A Spindle Splintered (sff)
Jordan Ifueko – Raybearer (sff)
Jordan Ifueko – Redemptor (sff)
T. Kingfisher – Paladin's Hope (sff)
TJ Klune – Under the Whispering Door (sff)
Kiese Laymon – How to Slowly Kill Yourself and Others in America (non-fiction)
Yuna Lee – Fox You (romance)
Tim Mak – Misfire (non-fiction)
Naomi Novik – The Last Graduate (sff)
Shelley Parker-Chan – She Who Became the Sun (sff)
Gareth L. Powell – Embers of War (sff)
Justin Richer & Antonio Sanso – OAuth 2 in Action (non-fiction)
Dean Spade – Mutual Aid (non-fiction)
Lana Swartz – New Money (non-fiction)
Adam Tooze – Shutdown (non-fiction)
Bill Watterson – The Essential Calvin and Hobbes (strip collection)
Bill Willingham, et al. – Fables: Storybook Love (graphic novel)
David Wong – Real-World Cryptography (non-fiction)
Neon Yang – The Black Tides of Heaven (sff)
Neon Yang – The Red Threads of Fortune (sff)
Neon Yang – The Descent of Monsters (sff)
Neon Yang – The Ascent to Godhood (sff)
Xiran Jay Zhao – Iron Widow (sff)

14 November 2021

Ruby Team: Ruby transition and packaging hints #2 - Gemfile.lock created by bundler/setup with Ruby 2.7 preventing successful test with Ruby 3.0

We currently face an issue in all packages requiring bundler/setup and trying to run the tests for both Ruby 2.7 and 3.0. The problem is that the first test run will create Gemfile.lock (or gemfile/gemfile-*.lock) using Ruby 2.7, and the next run, for Ruby 3.0, will report e.g.:
Failure/Error: require 'bundler/setup' # Set up gems listed in the Gemfile.
Bundler::GemNotFound:
  Could not find racc-1.4.16 in any of the sources
or
/usr/share/rubygems-integration/all/gems/bundler-2.2.27/lib/bundler/definition.rb:496:in `materialize':
  Could not find rexml-3.2.3.1 in any of the sources (Bundler::GemNotFound)
Both bugs #996207 and #996302 are incarnations of this issue. The fix is as easy as making sure that the .lock files are removed before each run. This can be done e.g. in debian/ruby-tests.rake as the very first task:
File.delete("Gemfile.lock") if File.exist?("Gemfile.lock")
In another case the .lock file is created by the tests in gemfiles/. While the first examples could actually be solved by gem2deb removing Gemfile.lock on its own, I'm not quite sure how to handle the last case using packaging tools. The interesting part is that it is unlikely we will be confronted with this issue again anytime soon. It seems very specific to the Ruby 3.0 transition.
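For the gemfiles/ case, the same cleanup idea should work from debian/ruby-tests.rake, assuming the lock files follow the gemfile/gemfile-*.lock pattern mentioned above (a sketch, not a tested recipe):

# Delete lock files left over from a previous run: the top-level
# Gemfile.lock plus any per-gemfile locks under gemfiles/
Dir.glob(["Gemfile.lock", "gemfiles/*.lock"]).each do |lock|
  File.delete(lock)
end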

Update After talking to Antonio, he added some code to gem2deb-test-runner to move Gemfile.lock files out of the way. The tool already did this in an autopkgtest environment. In the upcoming 1.7 release it will do it in general, and this will fix some more FTBFSes, e.g. #998497 and #996141 - originally reported against ruby-voight-kampff and ruby-bootsnap.

12 October 2021

Antonio Terceiro: Triaging Debian build failure logs with collab-qa-tools

The Ruby team is working now on transitioning to ruby 3.0. Even though most packages will work just fine, there is a substantial amount of packages that require some work to adapt. We have been doing test rebuilds for a while during transitions, but usually triaged the problems manually. This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools. I have made some improvements to the code. Given that my last merge request is very new and was not merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch. collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools.

Installing collab-qa-tools
The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries). One of the patches I sent, and was already accepted, is the ability to run it without the need to install:
source /path/to/collab-qa-tools/activate.sh
This will add the tools to your $PATH.

Preparation
The first thing you need to do is get all your build logs in a directory. The tools assume a .log file extension, and the logs can be named ${PACKAGE}_*.log or just ${PACKAGE}.log.

Creating a TODO file
cqa-scanlogs | grep -v OK > todo
todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in the build logs. It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions, and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, ..., each containing 30 lines:
split --lines=30 --numeric-suffixes todo todo
Triaging
You can now do the triaging. Let's say we split the TODO files, and will start with todo01. The first step is calling cqa-fetchbugs (it does what it says on the tin):
cqa-fetchbugs --TODO=todo01
Then, cqa-annotate will guide you through the logs and allow you to report bugs:
cqa-annotate --TODO=todo01
I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:
#!/bin/sh
set -eu
for todo in "$@"; do
  # force downloading bugs
  awk '{ print(".bugs." $1) }' "${todo}" | xargs rm -f
  cqa-fetchbugs --TODO="${todo}"
  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="${todo}"
done
The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which are very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:
From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}

Source: {{ package }}
Version: {{ version | split: '+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0

Hi,

We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.

To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.

Relevant part (hopefully):

{% for line in extract %}> {{ line }}
{% endfor %}

The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace: ".log", ".build.txt" }}
The cqa-annotate loop
cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:
######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus
     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'
Deprecation Warnings:
Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.
If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.
1 deprecation warning total
Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure
Failed examples:
rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution
/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i r f]:
You can then choose one of the options. When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number, and new runs of cqa-annotate will not ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with a ctrl-c, and when I ran my process.sh script again I got this prompt:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus  
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i 1 r f]:
Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred more to go.

23 August 2021

Pavit Kaur: GSoC 2021: Final Evaluation

Project: Incremental Improvements to Debian's CI platform
Project Link: https://summerofcode.withgoogle.com/projects/#6433686825730048
Code Repository: https://salsa.debian.org/ci-team/debci
Mentors: Antonio Terceiro and Paul Gevers

About the Project Debian Continuous Integration is Debian's CI platform. It runs tests on the packages published in the Debian archive, and today it is used to control migration of packages from unstable, Debian's development area, to testing, the area of the archive where the next Debian release is being prepared. This makes it a crucial part of Debian's infrastructure. The web platform shows the results of all the tests executed. Debian CI provides developers both an API and a GUI Self-Service section to request tests for packages and get information on test history. This project involves implementing incremental improvements to the platform, making it easier to use and maintain.

Deliverables of the project:
  • Migrating Logins to Salsa
  • Adding support for testing security uploads and Debian LTS

Work done

Migrating Logins to Salsa
  • Originally, Debian CI used Debian SSO client certificates for logging in, but since those were deprecated, logins were migrated to Salsa, Debian's GitLab instance. This was implemented with the help of OmniAuth, the Ruby authentication framework.
  • Another thing fixed as part of this task was a limitation in the existing database structure, where usernames were stored directly as the test requestor field; that relationship was normalized with a proper foreign key to the users table.
Merge Requests:
Adding support for testing security uploads and Debian LTS In this task, work was done to enable private tests in Debian CI, adding support for testing security uploads and Debian LTS. It includes everything from adding the private field to the API and the Self-Service test request section, to adding an option to publish the tests when ready through both interfaces. Merge Requests:
Others I worked on some issues to add usability improvements for the web interface:

Learning Takeaways I learned a lot throughout the entire journey. Some of the things to mention are:
  • Writing tests: I learned about using RSpec for writing tests in Ruby, and understood how writing tests is an integral part of coding any project.
  • Using good coding practices: through the reviews of my merge requests done by my mentor Antonio, I came to know good coding practices which help in keeping the code both clean and concise.

Acknowledgement I owe huge thanks to my mentors Antonio Terceiro and Paul Gevers for their constant support and guidance throughout the entire duration. It is thanks to them that I was able to get started with the code-base of the project in the first place. And while working on the project, they were extremely responsive to all my queries. My merge requests were all thoroughly reviewed, which enabled me to learn more and work more efficiently. It was a complete pleasure interacting with them on weekly meet calls. To sum up, my GSoC journey was awesome, and I had a fun and productive summer with Debian.

What's next? As part of a continuation of my project task Adding support for testing security uploads and Debian LTS, I will be working on Debian CI to facilitate the process of connecting the embargoed archive. It's the end of my GSoC journey, but certainly not the end of my journey with Debian or open source. I plan to stay an active contributor to Debian and to get involved with more open source projects as well.

Extras The link to my Salsa Profile for all my work activities can be found here.
To know more about my journey, my other GSoC blog posts can be found below:

21 August 2021

Pavit Kaur: GSoC: Second Phase of Coding Period

Hello there. So here we are near the end of GSoC 2021, and with that, I am sharing details of the work I completed in the second phase of the coding period. Continuing with details of the task I left off on:

Task: Adding support for testing security uploads and Debian LTS
  • The extra-apt-sources parameter was added to both the API and the Self-Service section for the test request object. The part which took some time was deciding on its validation: if you ever deal with apt sources, you know that there exist a lot of combinations of a valid apt source, and in the end the decision was taken to just add minimal checks (a rough sketch of such a check appears after this list).
  • Since we now have enough setup to request private tests, it made sense to have a way of looking them up in the Self-Service section. So a new column was added to the test records in Job-History marking the visibility of a test (private or public), along with a new filter to filter records based on visibility.
  • The option to retry tests was also added to the Job-History section of Self-Service so that private tests can also be retried. Earlier, this option was only available on Package History pages, but those pages do not show private tests.
  • The next thing on the list was to publish the results of private tests at some point. For the API section, a new endpoint was added that accepts a string of run_ids ready to be published, and for the Self-Service section, a Publish button was added to the Job-History page, to be clicked after checking the boxes for the required tests.
This ends the list of my planned work for the project.
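For illustration, the minimal validation mentioned in the first item could look roughly like this hypothetical helper; anything stricter risks rejecting valid apt sources:

# Accept "deb" or "deb-src", optional [options], then a URI and a suite.
# Deliberately loose: valid apt source lines come in many combinations.
def valid_apt_source?(line)
  line.match?(/\A\s*deb(-src)?\s+(\[[^\]]*\]\s+)?\S+\s+\S+/)
end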

Some more interesting stuff As we got some more time before the end of the coding period, my mentor Antonio Terceiro suggested that I could pick any issue from the open issues list to work on, so I picked the one to add a user menu to the Self-Service section and completed it. Antonio also gave me insights into Debian packaging and even let me try packaging a newer upstream version of the package itamae, which is one of the packages maintained by him. I also worked on creating a video about my GSoC project for DebConf21, and this was truly the hardest thing to do. From setting up everything to final editing, it took me about 2 days to create something sane to submit. Now, I am excited about DebConf21! That concludes my work in the second phase of the coding period. Next, I will be sharing my Final Evaluation submission real soon. Stay tuned!

19 July 2021

Antonio Terceiro: Getting help with autopkgtest for your package

If you have been involved in Debian packaging at all in the last few years, you are probably aware that autopkgtest is now an important piece of the Debian release process. Back in 2018, the automated testing migration process started considering autopkgtest test results as part of its decision making. Since then, this process has received several improvements. For example, during the bullseye freeze, non-key packages with a non-trivial autopkgtest test suite could migrate automatically to testing without their maintainers needing to open unblock requests, provided there was no regression in their autopkgtest (or in those of their reverse dependencies). Since 2014, when ci.debian.net was first introduced, we have seen an amazing increase in the number of packages in Debian that can be automatically tested. We went from around 100 to 15,000 today. This means not only happier maintainers because their packages get to testing faster, but also improved quality assurance for Debian as a whole. [Chart showing the number of packages tested by ci.debian.net: from close to 0 in 2014 up to 15,000 in 2021; the growth tendency seems to slow down in the last year.] However, the growth rate seems to be decreasing. Maybe the low-hanging fruit have all been picked, or maybe we just need to help more people jump on the automated testing bandwagon. With that said, we would like to encourage and help more maintainers add autopkgtest to their packages. To that effect, I just created the autopkgtest-help repository on salsa, where we will take help requests from maintainers working on autopkgtest for their packages. If you want help, please go ahead and create an issue in there. To quote the repository README:
Valid requests:
  • "I want to add autopkgtest to package X. X is a tool that [...] and it works by [...]. How should I approach testing it?" It's OK if you have no idea where to start. But at least try to describe your package, what it does and how it works so we can try to help you.
  • "I started writing autopkgtest for X, here is my current work in progress [link]. But I encountered problem Y. How to I move forward?" If you already have an autopkgtest but is having trouble making it work as you think it should, you can also ask here.
Invalid requests:
  • "Please write autopkgtest for my package X for me". As with anything else in free software, please show appreciation for other people's time, and do your own research first. If you pose your question with enough details (see above) and make it interesting, it may be that whoever answers will write at least a basic structure for you, but as the maintainer you are still the expert in the package and what tests are relevant.
If you ask your question soon, you might get your answer recorded in video: Paul Gevers (elbrus) and I are going to have a DebConf21 talk next month, where we will answer a few autopkgtest questions on video for posterity. Now, if you have experience enabling autopkgtest for your own packages, please consider watching that repository to help us help our fellow maintainers.
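For anyone wondering how small a starting point can be: in many cases a single-stanza debian/tests/control file is enough to get a package tested. Here is a minimal sketch for a hypothetical command-line tool foo (the test command is illustrative; pick something that actually exercises the installed package):

# debian/tests/control (sketch; "Depends: @" means all binary
# packages built from this source get installed for the test)
Test-Command: foo --version
Depends: @

A trivial smoke test like this should be marked with Restrictions: superficial in a real package, but it is often the easiest first step before writing something more thorough.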

14 July 2021

Pavit Kaur: GSoC: First Phase of Coding Period

Hello there. I still can't believe that the first half of the GSoC period is almost over. It's been about 5 weeks working on the project, and that means I have a lot to share about it. So without further ado, let's get started. I will be listing the work done on the respective tasks.

Task: Migrating Logins to Salsa The objective of this task was to let users log in to their debci account using their Debian Salsa account (the collaborative development server for Debian, based on the GitLab software). This is implemented with the help of OmniAuth, the Ruby authentication framework. At the beginning of this, I had to discuss with my mentors quite a few issues that I was bumping into, and by the end, after multiple revisions and discussions, the following was implemented:
  • The previous users' table schema of debci had a username field, which contained mostly the emails of the users, with some exceptions. To accommodate the Salsa logins, a new uid field was added to the table to store the Salsa uid of the logged-in user, with the username field now storing Salsa usernames. Since Salsa users have the liberty to change their usernames, updating the username in the debci database is also taken care of.
  • For Salsa login, the ruby-omniauth-gitlab strategy is used, and for login in development mode, the developer strategy that comes with ruby-omniauth has been set up (see the wiring sketch after this list).
  • Added a login page giving the option to log in using Salsa, plus an additional option to log in in Developer Mode, which is accessible only in the development setup, so that other contributors don't have to set up dummy Salsa applications to work on debci.
  • Added specs for the new login process. This was an interesting part, as I got the chance to understand RSpec and the facilities provided by OmniAuth to mock the authentication for integration testing.
  • One blocker I dealt with was that the Debian release that debci's packages are pulled from has OmniAuth version 1.8, which was not working well with the developer strategy implementation for the application. To resolve that, I made a minor change to the callback API for the developer strategy, until that release gets a newer version of OmniAuth.
  • Another thing that came up in one of the meetings was that, in the existing database structure, tests did not have a real reference to the users' table: the username was stored directly as a string in the requestor field. This was fixed as part of this task.
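To give an idea of the wiring mentioned above, here is a minimal sketch of a Sinatra application using OmniAuth with the GitLab strategy plus the developer fallback. The environment variable names and the callback logic are illustrative assumptions, not debci's actual code:

require 'sinatra'
require 'omniauth'
require 'omniauth-gitlab'

use Rack::Session::Cookie, secret: ENV['SESSION_SECRET']

use OmniAuth::Builder do
  # Salsa is a GitLab instance, so the gitlab strategy just needs
  # to be pointed at its API endpoint
  provider :gitlab, ENV['SALSA_APP_ID'], ENV['SALSA_APP_SECRET'],
           client_options: { site: 'https://salsa.debian.org/api/v4' }
  # form-based fake login, for local development only
  provider :developer unless ENV['RACK_ENV'] == 'production'
end

get '/auth/:provider/callback' do
  auth = request.env['omniauth.auth']  # normalized auth hash from OmniAuth
  session[:user_uid] = auth.uid
  redirect '/'
end

On the testing side, OmniAuth itself provides the mocking facilities: setting OmniAuth.config.test_mode = true and populating OmniAuth.config.mock_auth lets integration specs bypass the external provider entirely.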
The migration of the existing users' data to the new logins was handled by my mentor Antonio Terceiro, and with this, our first task was concluded. All these changes are now part of the Debian Continuous Integration platform, and you can find Antonio's blog post about it here. This task also allowed me to write my first ever tutorial, Tutorial: Integrating OmniAuth with Sinatra Application, to help people looking to integrate their Ruby application with OmniAuth. Moving on to the next task in progress.

Task: Adding support for testing security uploads and Debian LTS The next task I am working on is enabling private tests in debci, in order to support testing security uploads and Debian LTS. Since it's a bigger task, it is broken down into about 6-7 steps, and so far the following has been done:
  • The schema of the jobs' (tests) table was updated with a boolean field storing whether the job is private or not (a migration sketch of this change appears after this list).
  • The is_private parameter was added to both the API and the Self-Service section, so that private tests can be submitted through the API as well as through the GUI form on the web platform.
  • Another thing that came up through discussion in meetings is that a parameter is required to add extra-apt-sources, for getting packages from the security repository; this is the part currently in progress.
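For illustration, the schema change in the first item could look something like the following. This is a hedged sketch assuming an ActiveRecord-style migration; debci's actual migration tooling, class name and version number may differ:

# Sketch only: add a flag so that private jobs can be filtered out
# of public listings. Defaulting to false keeps existing rows public.
class AddIsPrivateToJobs < ActiveRecord::Migration[6.0]
  def change
    add_column :jobs, :is_private, :boolean, default: false, null: false
  end
end

Defaulting the column to false means all pre-existing test runs stay public, and only requests that explicitly pass is_private get the new behaviour.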
So that concludes my work so far. It has been an amazing journey, with lots of learning and guidance from the wonderful mentors of my project, and I am looking forward to more exciting parts ahead. That's all for now. See you next time!

27 June 2021

Antonio Terceiro: Debian Continuous Integration now using Salsa logins

I have just updated the Debian Continuous Integration platform with debci 3.1. This update brings a few database performance improvements, courtesy of adding indexes to very important columns that were missing them. And boy, querying a table with 13 million rows without the proper indexes is bad! :-) Now, the most user-visible change in this update is the switch from Debian SSO to Salsa logins, which is part of Pavit Kaur's GSoC work. She has been working with me and Paul Gevers for a few weeks, and this was the first official task in the internship. For users, this means that you can now only log in via Salsa. If you have an existing session where you logged in with an SSO certificate, it will still be valid. When you log in with Salsa, your username will be changed to match the one in Salsa. This means that if your account on salsa gets renamed, it will automatically be renamed on Debian CI when you log in the next time. Unfortunately we don't have a logout feature yet, but in the meantime you can use the developer toolbar to delete any existing cookies you might have for ci.debian.net. Migrating to Salsa logins had been on my TODO list for a while. I had the impression that it would be pretty quick to do, by using pre-existing libraries that provide GitLab authentication integration for Rack (Ruby's de facto standard web application interface, like WSGI for Python). But in reality, the devil was in the details, and we went through several rounds of reviews to get it right. During the entire process, Pavit demonstrated an excellent capacity for responding to feedback, and overall I'm very happy with her performance in the internship so far. While we were discussing the Salsa logins, we noted a limitation in the existing database structure, where we stored usernames directly as the test requestor field, and decided it was better to normalize that relationship with a proper foreign key to the users table, which she also worked on. This update also includes the very first (and non-user-visible) step of her next task: adding support for private tests. Those will be useful for implementing testing for embargoed security updates, and other use cases. This was broken up into 7 or 8 separate steps, so there is still some work to do there. I'm looking forward to the continuation of this work.
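The rename-on-login behaviour described above boils down to keying users on the stable Salsa uid rather than on the username. A rough sketch of the idea, with illustrative names rather than debci's actual code:

# In the OmniAuth callback: look the user up by Salsa uid, and let
# the username simply follow whatever Salsa currently reports.
auth = request.env['omniauth.auth']
user = User.find_or_initialize_by(uid: auth.uid)
user.username = auth.info.nickname
user.save!

Because the uid never changes, a rename on Salsa only affects the username column, which gets refreshed on the next login.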

8 June 2021

Pavit Kaur: GSoC: About my Project and Community Bonding Period

To start writing about updates regarding my GSoC project, the first obvious thing I need to do is to explain what my project really is. So let's get started.

About my project

What is debci? Quoting directly from the official docs:
The Debian continuous integration (debci) is an automated system that coordinates the execution of automated tests against packages in the Debian system.

Let's try decoding it: Debian is a huge system with thousands of packages, and within these packages exist inter-package dependencies. So if any package is updated, it is important to test that this package works correctly, but it is equally important to test that all the packages depending on it keep working correctly too. debci is a platform serving this purpose of automated testing for the entire Debian archive, whenever a new version of a package, or of any package in its dependency chain, becomes available. It comes with a UI that lets developers easily run tests and see whether they pass or not. For my GSoC project, I am working on implementing some incremental improvements to debci, making it easier to use and maintain.

Community Bonding Period

The debci community Everyone I have come across so far in the community is very nice. The debci community in itself is small but active, and it really feels good to be a part of the conversations here.

Weekly call set up I have two mentors for this project, Antonio Terceiro and Paul Gevers, and they have set up a weekly sync call with me, in which I share my updates regarding the work done in the past week and any issues I am facing, and we discuss the work for the next week. In addition to this, I can always contact them on IRC for any issue I am stuck on.

Work till now The first thing I did in the community bonding period was setting up this blog. I had wanted one for a long time, and this seemed like a really nice opportunity to start. The fact that it has been added to Planet Debian too makes me happier to write. I am still trying to get the hang of it, and definitely need to learn how to spend less time writing. I also worked on my already-opened merge requests during this period and got them merged. Since I was already familiar with the codebase, I started on my first deliverable a bit before the official coding period began: migrating the logins to Salsa, Debian's GitLab instance. Currently, debci uses Debian SSO client certificates for logging in, but that is deprecated, so it needs to be migrated to use Salsa as an identity provider. The OmniAuth library is being used to implement this, with the help of the ruby-omniauth-gitlab strategy. I explored a great deal about integrating OmniAuth with our application, and bumped into quite a few issues when I began implementing it. Once I am done integrating the Salsa authentication with debci, I am planning to write a separate tutorial on that, which could be helpful to other people using OmniAuth with their applications. With that, the community bonding period ended on 7th June, and the coding period officially begins; for now, I will continue working on my first deliverable. That's all for now. See you next time!

25 May 2021

Pavit Kaur: Journey to GSoC

I am really excited that my Google Summer of Code proposal with Debian for the project Debian Continuous Integration improvements has been accepted. Through this blog, I am here to share my pre-GSoC journey. I had known about GSoC since my first year of college, but had this misconception that only great coders get selected, which kept me from applying to the program until my 3rd year of engineering. I applied this year not because I thought I had turned into one, but because I actually wanted to give it a fair try before becoming ineligible to participate.

Finding a Project Scrolling through the list of GSoC 2021 organizations, I was checking out projects of organizations I am familiar with. Debian is one of the huge open source communities that has always inspired developers around the world to contribute to open source. As I checked through the Debian projects, I was excited to find the Debian Continuous Integration improvements project (referred to as debci in this post), related to web development and more concerned with backend work, which is something I am very much interested in. I joined the community, and as directed by the application tasks of the project, I set up debci on my machine and started with an issue labeled as newcomer. Soon I was able to submit my first merge request, and it was reviewed by Antonio Terceiro, the mentor of my GSoC project. With his further guidance, I was able to turn the MR into an acceptable patch, and voila, it got merged! That really boosted my morale to contribute further to the project.

Student Application Period At the suggestion of my mentor, during the student application period I worked on more open issues, which helped me understand the codebase better, and in turn the deliverables of the project for my proposal. For every doubt, I first tried to tackle it myself, and if I still could not find a solution, I turned to my mentors, who did their best to answer my queries. This is how I completed my proposal and got it reviewed by Antonio before finally submitting it on 13th April. I still cannot express the feeling of satisfaction I had on submitting the proposal: I had finally, successfully applied for GSoC.

After Proposal Submission I did not stop contributing after the student application period ended, and kept working on more issues, partly to stay in touch with the project, and partly because I was enjoying it a lot. I had already made up my mind to contribute more to open source, as I learned and enjoyed plenty during this process.

Day of Results On 17th May, I got my acceptance mail at around 11:30 pm, and I remember screaming and waking everybody up at home to announce the news. It was truly the happiest moment for me.

Moment of Truth I would admit that I got involved with GSoC because of the reputation associated with it, but the things I learned during this pre-GSoC period have made me realize the fun and learning opportunities associated with working on open source, and I am here to stay for sure. I plan to write more blog posts regarding my project and keep you all updated about my work. Stay tuned!

27 March 2021

Antonio Terceiro: Migrating from Chef to itamae

The Debian CI platform is comprised of 30+ (virtual) machines. Maintaining this many machines, and being able to add new ones with some degree of reliability, requires some sort of configuration management. Until about a week ago, we were using Chef for our configuration management. I was, for several years, the main maintainer of Chef in Debian, so using it was natural to me, as I had used it before for personal and work projects. But last year I decided to request the removal of Chef from Debian, so that it won't be shipped with Debian 11 (bullseye). After evaluating a few options, I believed that the path of least resistance was to migrate to itamae. itamae was inspired by Chef, and uses a DSL that is very similar to the Chef one. Even though the itamae team claim it's not compatible with Chef, the changes that I needed to make were relatively limited. The necessary code changes might look like a lot, but a large part of them could be automated or done in bulk, like doing simple search and replace operations, and moving entire directories around. In the rest of this post, I will describe the migration process, starting with the infrastructure changes, then the types of changes I needed to make to the configuration management code, and finally my conclusions about the process.

Infrastructure changes

The first step was to add support for itamae to chake, a configuration management wrapper tool that I wrote. chake was originally written as a serverless remote executor for Chef, so this involved a bit of a redesign. I thought it was worth doing, because at that point chake had gained several interesting management features that were not directly tied to Chef. This work was done slowly over the course of several months, starting almost exactly one year ago, and was completed 3 months ago. I wasn't in a hurry, and knew I had time before Debian 11 was released and I had to upgrade the platform. After this was done, I started the work of migrating the existing Chef cookbooks to itamae, and the next sections present the main types of changes that were necessary. During the entire process, I also sent a few patches out.

Code changes

These are the main types of changes that were necessary in the configuration code to accomplish the migration to itamae.

Replace cookbook_file with remote_file

The resource known as cookbook_file in Chef is called remote_file in itamae. Fixing this is just a matter of search and replace, e.g.:
-cookbook_file '/etc/apt/apt.conf.d/00updates' do
+remote_file '/etc/apt/apt.conf.d/00updates' do
Changed file locations

The file structure assumed by itamae is a lot simpler than the one in Chef, so a number of files and directories had to be moved to new locations.

Explicit file ownership and mode

Chef is usually designed to run as root on the nodes, so files it creates are owned by root and have mode 0644 by default. With itamae, files are by default owned by the user that was used to SSH into the machine. Because of this, I had to review all file creation resources and add owner, group and mode explicitly:
-cookbook_file '/etc/apt/apt.conf.d/00updates' do
-  source 'apt.conf'
+remote_file '/etc/apt/apt.conf.d/00updates' do
+  source 'files/apt.conf'
+  owner   'root'
+  group   'root'
+  mode    "0644"
 end
In the end, I guess being explicit makes the configuration code more understandable, so I take that as a win.

Different execution context

One of the major differences between Chef and itamae comes down to the execution context of the recipes. In both Chef and itamae, the configuration is written in a DSL embedded in Ruby. This means that the recipes are just Ruby code, and the difference here has to do with where that code is executed. With Chef, the recipes are always executed on the machine you are configuring, while with itamae the recipe is executed on the workstation where you run itamae, and gets translated into commands that need to be executed on the machine being configured. For example, if you need to configure a service based on how much RAM the machine has, with Chef you could do something like this:
total_ram = File.readlines("/proc/meminfo").find do |l|
  l.split.first == "MemTotal:"
end.split[1]

file "/etc/service.conf" do
  # use 20% of the total RAM (MemTotal is reported in kB)
  content "cache_size = #{total_ram.to_i / 5} KB"
end
With itamae, all that Ruby code runs on the workstation where you invoke itamae, so total_ram would contain the total RAM of the wrong machine. In the Debian CI case, I worked around that by explicitly declaring the amount of RAM in the static host configuration, and the above construct ended up as something like this:
file "/etc/service.conf" do
  # use 20% of the total RAM
  content "cache_size = # node['total_ram'] / 5 KB"
end
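For completeness, here is roughly how such static host data can be fed to itamae: the itamae CLI accepts a node attributes file in JSON (or YAML) form. The host name and value below are illustrative, and the Debian CI setup drives this through chake rather than invoking itamae by hand:

$ cat node.json
{ "total_ram": 16384 }
$ itamae ssh --node-json node.json --host ci-worker.example.org recipes/default.rb

Inside the recipe, those attributes are then available as node['total_ram'], as in the snippet above.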
Lessons learned

This migration is now complete, and there are a few points that I take away from it. All in all, the system is working just fine, and I consider this to have been a successful migration. I'm happy it worked out.
