Search Results: "antonio"

29 November 2021

Russ Allbery: Fall haul

It's been a while since I've posted one of these, and I also may have had a few moments of deciding to support authors by buying their books even if I'm not going to get a chance to read them soon. There's also a bit of work reading in here.
Ryka Aoki Light from Uncommon Stars (sff)
Frederick R. Chromey To Measure the Sky (non-fiction)
Neil Gaiman, et al. Sandman: Overture (graphic novel)
Alix E. Harrow A Spindle Splintered (sff)
Jordan Ifueko Raybearer (sff)
Jordan Ifueko Redemptor (sff)
T. Kingfisher Paladin's Hope (sff)
TJ Klune Under the Whispering Door (sff)
Kiese Laymon How to Slowly Kill Yourself and Others in America (non-fiction)
Yuna Lee Fox You (romance)
Tim Mak Misfire (non-fiction)
Naomi Novik The Last Graduate (sff)
Shelley Parker-Chan She Who Became the Sun (sff)
Gareth L. Powell Embers of War (sff)
Justin Richer & Antonio Sanso OAuth 2 in Action (non-fiction)
Dean Spade Mutual Aid (non-fiction)
Lana Swartz New Money (non-fiction)
Adam Tooze Shutdown (non-fiction)
Bill Watterson The Essential Calvin and Hobbes (strip collection)
Bill Willingham, et al. Fables: Storybook Love (graphic novel)
David Wong Real-World Cryptography (non-fiction)
Neon Yang The Black Tides of Heaven (sff)
Neon Yang The Red Threads of Fortune (sff)
Neon Yang The Descent of Monsters (sff)
Neon Yang The Ascent to Godhood (sff)
Xiran Jay Zhao Iron Widow (sff)

14 November 2021

Ruby Team: Ruby transition and packaging hints #2 - Gemfile.lock created by bundler/setup with Ruby 2.7 preventing successful test with Ruby 3.0

We currently face an issue in all packages requiring bundler/setup and trying to run the tests for both Ruby 2.7 and 3.0. The problem is that the first test run will create Gemfile.lock (or gemfiles/gemfile-*.lock) using Ruby 2.7, and the next run, for Ruby 3.0, will report e.g.:
Failure/Error: require 'bundler/setup' # Set up gems listed in the Gemfile.
Bundler::GemNotFound:
  Could not find racc-1.4.16 in any of the sources
or
/usr/share/rubygems-integration/all/gems/bundler-2.2.27/lib/bundler/definition.rb:496:in `materialize':
  Could not find rexml-3.2.3.1 in any of the sources (Bundler::GemNotFound)
Both bugs #996207 and #996302 are incarnations of this issue. The fix is as easy as making sure that the .lock files are removed before each run. This can be done, e.g., in debian/ruby-tests.rake as the very first task:
File.delete("Gemfile.lock") if File.exist?("Gemfile.lock")
In another case the .lock file is created by the tests in gemfiles/. While the first example could actually be solved by gem2deb removing Gemfile.lock on its own, I'm not quite sure how to handle the latter case using packaging tools; a manual cleanup could look like the sketch below. The interesting part is that we will likely not be confronted with this issue again anytime soon. It seems very specific to the Ruby 3.0 transition.
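A minimal sketch of such a cleanup at the top of debian/ruby-tests.rake, assuming the tests write their lock files under gemfiles/ (the glob pattern is an assumption):
["Gemfile.lock", *Dir.glob("gemfiles/*.lock")].each do |lock|
  # remove any lock file left behind by a previous run with another Ruby
  File.delete(lock) if File.exist?(lock)
end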

Update: After talking to Antonio, he added some code to gem2deb-test-runner to move Gemfile.lock files out of the way. The tool already did this in an autopkgtest environment. In the upcoming 1.7 release it will do it in general, and this will fix some more FTBFSes, e.g. #998497 and #996141, originally reported against ruby-voight-kampff and ruby-bootsnap.

12 October 2021

Antonio Terceiro: Triaging Debian build failure logs with collab-qa-tools

The Ruby team is now working on transitioning to Ruby 3.0. Even though most packages will work just fine, a substantial amount of packages require some work to adapt. We have been doing test rebuilds for a while during transitions, but usually triaged the problems manually. This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools. I have made some improvements to the code; given that my last merge request is very new and has not been merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch. collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools. Installing collab-qa-tools: The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries). One of the patches I sent, and which was already accepted, is the ability to run it without the need to install:
source /path/to/collab-qa-tools/activate.sh
This will add the tools to your $PATH. Preparation: The first thing you need to do is get all your build logs into a directory. The tools assume the .log file extension, and the files can be named ${PACKAGE}_*.log or just ${PACKAGE}.log. Creating a TODO file:
cqa-scanlogs | grep -v OK > todo
todo will contain one line for each log, with a summary of the failure if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in the build logs. It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions, and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, ..., each containing 30 lines:
split --lines=30 --numeric-suffixes todo todo
Triaging: You can now do the triaging. Let's say we split the TODO files, and we will start with todo01. The first step is calling cqa-fetchbugs (it does what it says on the tin):
cqa-fetchbugs --TODO=todo01
Then, cqa-annotate will guide you through the logs and allow you to report bugs:
cqa-annotate --TODO=todo01
I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:
#!/bin/sh
set -eu
for todo in "$@"; do
  # force downloading bugs
  awk '{ print(".bugs." $1) }' "${todo}" | xargs rm -f
  cqa-fetchbugs --TODO="${todo}"
  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="${todo}"
done
The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which are very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:
From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}
Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0
Hi,
We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.
To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.
Relevant part (hopefully):
{% for line in extract %}> {{ line }}
{% endfor %}
The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}
The cqa-annotate loop: cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:
######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus
     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'
Deprecation Warnings:
Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.
If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.
1 deprecation warning total
Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure
Failed examples:
rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution
/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i r f]:
You can then choose one of the options. When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number and new runs of cqa-annotate will not ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with a ctrl-c, and when I ran my process.sh script again I got this prompt:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus  
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s i 1 r f]:
Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred more to go.

23 August 2021

Pavit Kaur: GSoC 2021: Final Evaluation

Project: Incremental Improvements to Debian's CI platform
Project Link: https://summerofcode.withgoogle.com/projects/#6433686825730048
Code Repository: https://salsa.debian.org/ci-team/debci
Mentors: Antonio Terceiro and Paul Gevers

About the Project: Debian Continuous Integration is Debian's CI platform. It runs tests on the packages published in the Debian archive, and today it is used to control migration of packages from unstable, Debian's development area, to testing, the area of the archive where the next Debian release is being prepared. This makes it a crucial part of Debian's infrastructure. The web platform shows the results of all the tests executed. Debian CI provides developers both an API and a GUI Self-Service section to request tests for their packages and get information on test history. This project involves implementing incremental improvements to the platform, making it easier to use and maintain.

Deliverables of the project:
  • Migrating Logins to Salsa
  • Adding support for testing security uploads and Debian LTS

Work done

Migrating Logins to Salsa
  • Originally, Debian CI used Debian SSO client certificates for logging in, but since that was deprecated, logins were migrated to Salsa, Debian's GitLab instance. This is implemented with the help of OmniAuth, the Ruby authentication framework.
  • Another thing fixed as part of this task was a limitation in the existing database structure: usernames were stored directly as the test requestor field, so that relationship was normalized with a proper foreign key to the users table (see the sketch below).
Merge Requests:
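A minimal sketch of what such a normalization could look like, assuming an ActiveRecord-style migration (the table and column names are taken from the descriptions in this post; the actual debci migration may differ):
class NormalizeTestRequestor < ActiveRecord::Migration[6.0]
  def up
    # add a real foreign key to users, then backfill it from the old string column
    add_reference :jobs, :requestor, foreign_key: { to_table: :users }
    execute <<~SQL
      UPDATE jobs
      SET requestor_id = (SELECT id FROM users WHERE users.username = jobs.requestor)
    SQL
    remove_column :jobs, :requestor
  end
end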
Adding support for testing security uploads and Debian LTS: In this task, work was done to enable private tests in Debian CI, to add support for testing security uploads and Debian LTS. It spans everything from adding the private field in the API and the Self-Service section for requesting tests, to adding an option to publish them when ready through both interfaces. Merge Requests:
Others: I worked on some issues to add usability improvements for the web interface:

Learning Takeaways: I learned a lot throughout the entire journey. Some of the things to mention are:
  • Writing tests: I learned about using RSpec for writing tests in Ruby and understood how writing tests is an integral part of coding any project.
  • Using good coding practices: Through the reviews of my merge requests done by my mentor Antonio, I came to know about good coding practices which help in keeping the code both clean and concise.

Acknowledgement: I owe huge thanks to my mentors Antonio Terceiro and Paul Gevers for their constant support and guidance throughout the entire duration. It is thanks to them that I was able to get started with the code-base of the project in the first place. And while working on the project, they were extremely responsive to all my queries. My merge requests were all thoroughly reviewed, which enabled me to learn more and work more efficiently. It was a complete pleasure interacting with them on weekly meet calls. To sum up, my GSoC journey was awesome and I had a fun and productive summer with Debian.

What's next? As part of a continuation of my project task Adding support for testing security uploads and Debian LTS, I will be working on Debian CI to facilitate the process of connecting the embargoed archive. It's the end of my GSoC journey, but certainly not the end of my journey with Debian or Open Source. I plan to stay an active contributor to Debian and to get involved with more Open Source projects as well.

Extras: The link to my Salsa Profile for all my work activities can be found here.
To know more about my journey, my other GSoC blog posts can be found below:

21 August 2021

Pavit Kaur: GSoC: Second Phase of Coding Period

Hello there. So here we are, near the end of GSoC 2021, and with that, I am sharing details of the work I completed in the second phase of the coding period. Continuing with the details of the task I left off on:

Task: Adding support for testing security uploads and Debian LTS
  • The extra-apt-sources parameter was added to both the API and the Self-Service section for the test request object. The part which took some time was deciding on its validation: if you have ever dealt with apt sources, you know that there are a lot of combinations of a valid apt source, and in the end the decision was taken to just add minimal checks (see the sketch after this list).
  • Since we now have enough setup to request private tests, it made sense to have a way of looking them up in the Self-Service section. So a new column was added to the test records in Job-History, marking the visibility of a test (private or public), along with a new filter to filter records based on visibility.
  • The option to retry tests was also added to the Job-History section of Self-Service, so that private tests can be retried as well. Earlier, this option was only available on Package History pages, but those pages do not show private tests.
  • The next thing on the list was to publish these private test results at some point. For the API, a new endpoint was added that accepts a string of run_ids ready to be published; for Self-Service, a Publish button was added to the Job-History page, to be clicked after checking the boxes of the required tests.
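To make "minimal checks" concrete, a validation along these lines would accept one-line apt source entries. This is an illustrative sketch under my own assumptions, not the exact check that was merged:
# accepts "deb [options] uri suite [component...]" one-line apt sources
APT_SOURCE_LINE = /\A(deb|deb-src)\s+(\[[^\]]*\]\s+)?\S+\s+\S+(\s+\S+)*\z/
def valid_extra_apt_source?(line)
  !!(line.strip =~ APT_SOURCE_LINE)
end
valid_extra_apt_source?("deb http://security.debian.org/debian-security bullseye-security main") # => true
valid_extra_apt_source?("rm -rf /") # => false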
This ends the list of my planned work for the project.

Some more interesting stuff: As we still had some time before the end of the coding period, my mentor Antonio Terceiro suggested that I could pick any issue from the open issues list to work on, so I picked the one to add a user menu for the Self-Service section and completed it. Antonio also gave me insights into Debian packaging and even let me try packaging a newer upstream version of the package itamae, which is one of the packages maintained by him. I also worked on creating a video about my GSoC project for DebConf21, and this was truly the hardest thing to do. From setting up everything to final editing, it took me about 2 days to create something sane to submit. Now, I am excited about DebConf21! That concludes my work in the second phase of the coding period; next, I will be sharing my Final Evaluation submission real soon. Stay tuned!

19 July 2021

Antonio Terceiro: Getting help with autopkgtest for your package

If you have been involved in Debian packaging at all in the last few years, you are probably aware that autopkgtest is now an important piece of the Debian release process. Back in 2018, the automated testing migration process started considering autopkgtest test results as part of its decision making. Since then, this process has received several improvements. For example, during the bullseye freeze, non-key packages with a non-trivial autopkgtest test suite could migrate automatically to testing without their maintainers needing to open unblock requests, provided there was no regression in their autopkgtest (or in those from their reverse dependencies). Since 2014, when ci.debian.net was first introduced, we have seen an amazing increase in the number of packages in Debian that can be automatically tested: we went from around 100 to 15,000 today. This means not only happier maintainers, because their packages get to testing faster, but also improved quality assurance for Debian as a whole. [Chart showing the number of packages tested by ci.debian.net: from close to 0 in 2014 up to 15,000 in 2021; the growth tendency seems to slow down in the last year.] However, the growth rate seems to be decreasing. Maybe the low-hanging fruit have all been picked, or maybe we just need to help more people jump on the automated testing bandwagon. With that said, we would like to encourage and help more maintainers add autopkgtest to their packages. To that effect, I just created the autopkgtest-help repository on salsa, where we will take help requests from maintainers working on autopkgtest for their packages. If you want help, please go ahead and create an issue there. To quote the repository README:
Valid requests:
  • "I want to add autopkgtest to package X. X is a tool that [...] and it works by [...]. How should I approach testing it?" It's OK if you have no idea where to start. But at least try to describe your package, what it does and how it works so we can try to help you.
  • "I started writing autopkgtest for X, here is my current work in progress [link]. But I encountered problem Y. How to I move forward?" If you already have an autopkgtest but is having trouble making it work as you think it should, you can also ask here.
Invalid requests:
  • "Please write autopkgtest for my package X for me". As with anything else in free software, please show appreciation for other people's time, and do your own research first. If you pose your question with enough details (see above) and make it interesting, it may be that whoever answers will write at least a basic structure for you, but as the maintainer you are still the expert in the package and what tests are relevant.
If you ask your question soon, you might get your answer recorded in video: we are going to have a DebConf21 talk next month, where I and Paul Gevers (elbrus) will answer a few autopkgtest questions on video for posterity. Now, if you have experience enabling autopkgtest for your own packages, please consider watching that repository to help us help our fellow maintainers.

14 July 2021

Pavit Kaur: GSoC: First Phase of Coding Period

Hello there. I still can't believe that the first half of the GSoC period is almost over. It's been about 5 weeks working on the project, and that means I have a lot to share about it. So without further ado, let's get started. I will be listing the work done in the respective tasks.

Task: Migrating Logins to Salsa: The objective of this task was to let users log in to their account on debci using their Debian Salsa account (the collaborative development server for Debian, based on the GitLab software); this is implemented with the help of OmniAuth, the Ruby authentication framework. At the beginning of this, I had to discuss quite a few issues with my mentors that I was bumping into, and by the end of it, with multiple revisions and discussions, the following was implemented:
  • The previous users' table schema of debci comprised the username field, which contained mostly the emails of the users, with some exceptions. To accommodate the Salsa logins, a new uid field was added to the table to store the Salsa uid of the logged-in user, with the username field now storing Salsa usernames. As Salsa users have the liberty to change their usernames, updating the username in the debci database is taken care of as well.
  • For Salsa login, the ruby-omniauth-gitlab strategy has been used, and for login in development mode, the developer strategy which comes with ruby-omniauth has been set up (see the sketch after this list).
  • Added a Login Page giving the option to log in using Salsa, with an additional option to log in in Developer Mode, which is accessible only in a development setup, so that other contributors don't have to set up dummy Salsa applications to work on debci.
  • Added specs for the new login process. This was an interesting part, as I got the chance to understand RSpec and the facilities provided by OmniAuth to mock authentication for integration testing.
  • One blocker I dealt with was that the Debian release from which packages are pulled for debci has OmniAuth version 1.8, which was not working well with the developer strategy implementation for the application. To resolve that, I made a minor change to the callback API for the developer strategy, until that release gets a newer version of OmniAuth.
  • Another thing we discussed in one of the meetings was that, in the existing database structure, the tests do not have a real reference to the users' table; rather, the username is stored directly as a string in the requestor field. This was fixed as part of this task.
The migration of the existing users' data for the new logins was handled by my mentor Antonio Terceiro, and with this, our first task is concluded. All these changes are now part of the Debian Continuous Integration platform, and you can find Antonio's blog post about it here. This task also allowed me to write my first ever tutorial, Tutorial: Integrating OmniAuth with Sinatra Application, to help people looking to integrate their Ruby application with OmniAuth. Moving on to the next task in progress.
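For a flavor of what the two strategies look like together, here is a minimal Sinatra sketch; the environment variable names and the callback logic are assumptions for illustration, not debci's actual code:
require 'sinatra'
require 'omniauth'
require 'omniauth-gitlab'

use Rack::Session::Cookie, secret: ENV.fetch('SESSION_SECRET')
use OmniAuth::Builder do
  # Salsa is a GitLab instance, so the gitlab strategy only needs its API endpoint
  provider :gitlab, ENV['SALSA_APP_ID'], ENV['SALSA_APP_SECRET'],
           client_options: { site: 'https://salsa.debian.org/api/v4' }
  # fake login form for local development, no Salsa application required
  provider :developer if ENV['RACK_ENV'] == 'development'
end

get '/auth/:provider/callback' do
  auth = request.env['omniauth.auth']
  # the uid is stable; the username can be renamed on Salsa, so keep it in sync
  session[:uid] = auth['uid']
  session[:username] = auth['info']['nickname']
  redirect '/'
end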

Task: Adding support for testing security uploads and Debian LTS: This is the next task I am working on: enabling private tests in debci, to add support for testing security uploads and Debian LTS. Since it's a bigger task, it is broken down into about 6-7 steps, and so far the following has been done:
  • The schema of the jobs (tests) table was updated with a boolean field storing whether the job is private or not.
  • The is_private parameter was added to both the API and the Self-Service section, so a private test can be submitted through the API as well as through the GUI form on the web platform.
  • Another thing which came up through discussion in meetings was that a parameter is required to add extra apt sources for getting packages from the security repository; this is the part currently in progress.
So that concludes my work till now. It has been an amazing journey with lots of learning, and also guidance from the wonderful mentors of my project, and I am looking forward to more exciting parts ahead. That's all for now. See you next time!

27 June 2021

Antonio Terceiro: Debian Continuous Integration now using Salsa logins

I have just updated the Debian Continuous Integration platform with debci 3.1. This update brings a few database performance improvements, courtesy of adding indexes to very important columns that were missing them. And boy, querying a table with 13 million rows without the proper indexes is bad! :-) Now, the most user-visible change in this update is the switch from Debian SSO to Salsa logins, which is part of Pavit Kaur's GSoC work. She has been working with me and Paul Gevers for a few weeks, and this was the first official task in the internship. For users, this means that you now can only log in via Salsa. If you have an existing session where you logged in with an SSO certificate, it will still be valid. When you log in with Salsa, your username will be changed to match the one in Salsa. This means that if your account on Salsa gets renamed, it will automatically be renamed on Debian CI when you log in the next time. Unfortunately we don't have a logout feature yet, but in the meantime you can use the developer toolbar to delete any existing cookies you might have for ci.debian.net. Migrating to Salsa logins had been on my TODO list for a while. I had the impression that it would be pretty quick to do by using pre-existing libraries that provide GitLab authentication integration for Rack (Ruby's de facto standard web application interface, like uwsgi for Python). But in reality, the devil was in the details. We went through several rounds of reviews to get it right. During the entire process, Pavit demonstrated an excellent capacity for responding to feedback, and overall I'm very happy with her performance in the internship so far. While we were discussing the Salsa logins, we noted a limitation in the existing database structure, where we stored usernames directly as the test requestor field, and decided it was better to normalize that relationship with a proper foreign key to the users table, which she also worked on. This update also includes the very first (and non-user-visible) step of her next task, which is adding support for having private tests. Those will be useful for implementing testing for embargoed security updates, and other use cases. This was broken up into 7 or 8 separate steps, so there is still some work to do there. I'm looking forward to the continuation of this work.
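As an aside, the Rack interface mentioned above is tiny: an application is any object whose call method takes the request environment and returns a status, headers, and body triple. A minimal sketch:
# config.ru -- run with: rackup
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ["Hello from Rack\n"]]
end
run app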

8 June 2021

Pavit Kaur: GSoC: About my Project and Community Bonding Period

To start writing about updates regarding my GSoC project, the first obvious thing I need to do is to explain what my project really is. So let's get started.

About my project

What is debci? Quoting directly from the official docs:
The Debian continuous integration (debci) is an automated system that coordinates the execution of automated tests against packages in the Debian system.

Let's try decoding it: Debian is a huge system with thousands of packages, and within these packages exist inter-package dependencies. So if any package is updated, it is important to test if that package is working correctly, but it is equally important to test that all the packages which depend on this updated package are working correctly too. Debci is a platform serving this purpose of automated testing for the entire Debian archive whenever a new version of a package, or of any package in its dependency chain, is available. It comes with a UI that lets developers easily run tests and see whether they pass or not. For my GSoC project, I am working on implementing some incremental improvements to debci, making it easier to use and maintain.

Community Bonding Period

The debci community: Everyone I have come across till now in the community is very nice. The debci community in itself is a small but active community. It really feels good to be a part of conversations here.

Weekly call set up: I have two mentors for this project, Antonio Terceiro and Paul Gevers, and they have set up a weekly sync call with me, in which I share my updates regarding the work done in the past week and any issues I am facing, and we discuss the work for the next week. In addition to this, I can always contact them on IRC for any issue I am stuck on.

Work till now: The first thing I did in the community bonding period was setting up this blog. I had wanted to have one for a long time, and this seemed like a really nice opportunity to start. And the fact that this has been added to Planet Debian too makes me happier to write. I am still trying to get the hang of it, and definitely need to learn how to spend less time writing. I also worked on my already-opened merge requests during this period and got them merged. Since I am already familiar with the codebase, I started with my first deliverable a bit earlier, before the official coding period began, which is migrating the logins to Salsa, Debian's GitLab instance. Currently, debci uses Debian SSO client certificates for logging in, but that is deprecated, so it needs to be migrated to use Salsa as an identity provider. The OmniAuth library is being used to implement this, with the help of the ruby-omniauth-gitlab strategy. I explored a great deal about integrating OmniAuth with our application, and bumped into many issues too when I began implementing it. Once I am done integrating the Salsa authentication with debci, I am planning to write a separate tutorial on that, which could be helpful to other people using OmniAuth with their applications. With that, the community bonding period ended on 7th June and the coding period officially began; for now, I will continue working on my first deliverable. That's all for now. See you next time!

25 May 2021

Pavit Kaur: Journey to GSoC

I am really excited that my Google Summer of Code proposal with Debian for the project Debian Continuous Integration improvements has been accepted. Through this blog, I am here to share my pre-GSoC journey. I had known about GSoC since my first year of college, but had the misconception that only great coders get selected for GSoC, which kept me from applying to the program until my 3rd year of engineering. I applied this year not because I thought I had turned into one, but because I actually wanted to give this a fair try before I become ineligible to participate.

Finding a Project: Scrolling through the list of GSoC 2021 organizations, I was checking out projects of organizations I am familiar with. Debian is one of the huge Open Source communities that has always inspired developers around the world to contribute to Open Source. So as I checked through the Debian projects, I was excited to find the Debian Continuous Integration improvements project (referred to as debci in this post), related to web development and more concerned with backend work, which is something I am very much interested in. I joined the community, and as directed by the application tasks of the project, I set up debci on my machine and started with an issue labeled as newcomer. Soon I was able to submit my first merge request, and it was reviewed by Antonio Terceiro, the mentor of my GSoC project. With his further guidance, I was able to turn the MR into an acceptable patch, and voila, it got merged! That really did boost my morale to contribute further to the project.

Student Application Period: At the suggestion of my mentor, during the Student Application Period I worked on more open issues, which helped me understand the codebase better, and in turn the deliverables of the project for my proposal. For every doubt, I first tried to tackle it myself, and if I still could not find a solution, I turned to my mentors, who did their best to answer my queries. This is how I completed my proposal and got it reviewed by Antonio before finally submitting it on 13th April. I still cannot express the feeling of satisfaction I achieved on submitting the proposal. I had finally, successfully applied for GSoC.

After Proposal Submission: I did not stop contributing after the Student Application Period ended and kept working on more issues, which helped me stay in touch with the project, and also because I was enjoying it a lot. I had already made up my mind to contribute more to Open Source, as I learned and enjoyed plenty during this process.

Day of Results: On 17th May, I got my acceptance mail at around 11:30 pm, and I remember screaming and waking everybody up in my home to announce the news to them. It was truly the happiest moment for me.

Moment of Truth: I would admit that I got involved with GSoC because of the reputation associated with it, but the things I learned during this pre-GSoC period have made me realize the fun and learning opportunities associated with working in open source, and I am here to stay for sure. I plan to write more blogs regarding my project and keep you guys updated about my work. Stay tuned!

27 March 2021

Antonio Terceiro: Migrating from Chef to itamae

The Debian CI platform is comprised of 30+ (virtual) machines. Maintaining this many machines, and being able to add new ones with some degree of reliability, requires one to use some sort of configuration management.

Until about a week ago, we were using Chef for our configuration management. I was, for several years, the main maintainer of Chef in Debian, so using it was natural to me, as I had used it before for personal and work projects. But last year I decided to request the removal of Chef from Debian, so that it won't be shipped with Debian 11 (bullseye).

After evaluating a few options, I believed that the path of least resistance was to migrate to itamae. itamae was inspired by Chef, and uses a DSL that is very similar to the Chef one. Even though the itamae team claim it's not compatible with Chef, the changes that I needed to make were relatively limited. The necessary code changes might look like a lot, but a large part of them could be automated or done in bulk, like doing simple search and replace operations, and moving entire directories around.

In the rest of this post, I will describe the migration process, starting with the infrastructure changes, then the types of changes I needed to make to the configuration management code, and my conclusions about the process.

Infrastructure changes: The first step was to add support for itamae to chake, a configuration management wrapper tool that I wrote. chake was originally written as a serverless remote executor for Chef, so this involved a bit of a redesign. I thought it was worth doing, because at that point chake had gained several interesting management features that were not directly tied to Chef. This work was done a bit slowly over the course of several months, starting almost exactly one year ago, and was completed 3 months ago. I wasn't in a hurry, and knew I had time before Debian 11 was released and I had to upgrade the platform.

After this was done, I started the work of migrating the Chef cookbooks to itamae, and the next sections present the main types of changes that were necessary. During the entire process, I sent a few patches out.

Code changes: These are the main types of changes that were necessary in the configuration code to accomplish the migration to itamae.

Replace cookbook_file with remote_file: the resource known as cookbook_file in Chef is called remote_file in itamae. Fixing this is just a matter of search and replace, e.g.:
-cookbook_file '/etc/apt/apt.conf.d/00updates' do
+remote_file '/etc/apt/apt.conf.d/00updates' do
Changed file locations: The file structure assumed by itamae is a lot simpler than the one in Chef, so several files needed to be moved around. Explicit file ownership and mode: Chef is usually designed to run as root on the nodes, and files created are owned by root and have mode 0644 by default. With itamae, files are by default owned by the user that was used to SSH into the machine. Because of this, I had to review all file creation resources and add owner, group and mode explicitly:
-cookbook_file '/etc/apt/apt.conf.d/00updates' do
-  source 'apt.conf'
+remote_file '/etc/apt/apt.conf.d/00updates' do
+  source 'files/apt.conf'
+  owner   'root'
+  group   'root'
+  mode    "0644"
 end
In the end, I guess being explicit makes the configuration code more understandable, so I take that as a win. Different execution context: One of the major differences between Chef and itamae comes down to the execution context of the recipes. In both Chef and itamae, the configuration is written in a DSL embedded in Ruby. This means that the recipes are just Ruby code, and the difference here has to do with where that code is executed. With Chef, the recipes are always executed on the machine you are configuring, while with itamae the recipe is executed on the workstation where you run itamae, and gets translated to commands that need to be executed on the machine being configured. For example, if you need to configure a service based on how much RAM the machine has, with Chef you could do something like this:
total_ram = File.readlines("/proc/meminfo").find do |l|
  l.split.first == "MemTotal:"
end.split[1].to_i # in kB
file "/etc/service.conf" do
  # use 20% of the total RAM
  content "cache_size = # ram / 5 KB"
end
With itamae, all that Ruby code will run on the workstation, so total_ram will contain the wrong number (the workstation's, not the node's). In the Debian CI case, I worked around that by explicitly declaring the amount of RAM in the static host configuration, and the above construct ended up as something like this:
file "/etc/service.conf" do
  # use 20% of the total RAM
  content "cache_size = # node['total_ram'] / 5 KB"
end
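For completeness, here is a minimal sketch of how such a static host attribute can reach the recipe. The file name and invocation are assumptions based on itamae's --node-json option, not necessarily how the Debian CI chake setup wires it up:
# node.json declares the per-host data, e.g.: { "total_ram": 16777216 }
# and is passed in on the workstation side:
#
#   itamae ssh --host ci-worker.example.org --node-json node.json service.rb
#
# service.rb then reads the attribute from the node object, resolved
# before any command is sent over SSH:
ram_kb = node['total_ram']
file "/etc/service.conf" do
  content "cache_size = #{ram_kb / 5} KB"
end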
Lessons learned: This migration is now complete, and there are a few points that I take away from it. All in all, the system is working just fine, and I consider this to have been a successful migration. I'm happy it worked out.

23 March 2021

Bits from Debian: New Debian Developers and Maintainers (January and February 2021)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

1 September 2020

Utkarsh Gupta: FOSS Activities in August 2020

Here's my (eleventh) monthly update about the activities I've done in the F/L/OSS world.

Debian
This was my 20th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/ Well, this month we had DebConf! \o/
(more about this later this week!) Anyway, here are the things I did in Debian this month:

Uploads and bug fixes:

Other $things:
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored php-dasprid-enum and php-bacon-baconqrcode for William, and ruby-unparser, ruby-morpher, and ruby-path-expander for Cocoa.

Goodbye GSoC! \o/ In May, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project. The other 5 blogs can be found here: Also, I log daily updates at gsocwithutkarsh2102.tk. Since this is a wrap, and whilst the daily updates are already available at the above site, I'll quickly mention the important points and links here.

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my eleventh month as a Debian LTS paid contributor and my second as a Debian ELTS paid contributor.
I was assigned 21.75 hours for LTS and 14.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:
  • Issued ELA 255-1, fixing CVE-2020-14344, for libx11.
    For Debian 8 Jessie, these problems have been fixed in version 2:1.6.2-3+deb8u3.
  • Issued ELA 259-1, fixing CVE-2020-10177, for pillow.
    For Debian 8 Jessie, these problems have been fixed in version 2.6.1-2+deb8u5.
  • Issued ELA 269-1, fixing CVE-2020-11985, for apache2.
    For Debian 8 Jessie, these problems have been fixed in version 2.4.10-10+deb8u17.
  • Started working on the ClamAV update; it's a major bump from v0.101.5 to v0.102.4. There were lots of moving parts. Contacted upstream maintainers to help reduce the risk of regression. Came up with a patch to loosen the libcurl version requirement. Hopefully, the update can be rolled out soon!

Other (E)LTS Work:
  • I spent an additional 11.15 hours working on compiling the responses of the LTS survey and preparing a gist of it for its presentation during the Debian LTS BoF at DebConf20.
  • Triaged qemu, pillow, gupnp, clamav, apache2, and uwsgi.
  • Marked CVE-2020-11538/pillow as not-affected for Stretch.
  • Marked CVE-2020-11984/apache2 as not-affected for Stretch.
  • Marked CVE-2020-10378/pillow as not-affected for Jessie.
  • Marked CVE-2020-11538/pillow as not-affected for Jessie.
  • Marked CVE-2020-3481/clamav as not-affected for Jessie.
  • Marked CVE-2020-11984/apache2 as not-affected for Jessie.
  • Marked CVE-2020-{9490,11993}/apache2 as not-affected for Jessie.
  • Hosted Debian LTS BoF at DebConf20. Recording here.
  • General discussion on LTS private and public mailing list.

Until next time.
:wq for today.

16 August 2020

Andrej Shadura: Useful FFmpeg commands for video editing

As a response to Antonio Terceiro's blog post, I'm publishing some FFmpeg commands I've been using recently. Embedding subtitles: Sometimes you have a video with subtitles in multiple languages, and you don't want to clutter the directory with a lot of similarly-named files, or maybe you want to be able to easily transfer the video and subtitles at once. In this case, it may be useful to embed the subtitles directly into the video container file.
ffmpeg -i video.mp4 -i video.eng.srt -map 0:v -map 0:a -c copy -map 1 \
        -c:s:0 mov_text -metadata:s:s:0 language="eng" video-out.mp4
This command recodes the subtitle file into a format appropriate for the MP4 container and embeds it with a metadata element telling the video player what language it is in. You can add multiple subtitles at once, or you can also transcode the audio to AAC while doing so (I found that a lot of Android devices can't play Ogg Vorbis streams):
ffmpeg -i video.mp4 -i video.deu.srt -i video.eng.srt -map 0:v -map 0:a \
        -c:v copy -c:a aac -map 1 -c:s:0 mov_text -metadata:s:s:0 language="deu" \
                           -map 2 -c:s:1 mov_text -metadata:s:s:1 language="eng" video-out.mp4
Hard subtitles: Sometimes you need to play the video with subtitles on devices not supporting them. In that case, it may be useful to hardcode the subtitles directly into the video stream:
ffmpeg -i video.mp4 -vf subtitles=video.eng.srt video-out.mp4
Unfortunately, if you also want to apply more transformations to the video, it starts getting tricky; the -vf option is no longer enough:
ffmpeg -i video.mp4 -i overlay.jpg -filter:a "volume=10" \
        -filter_complex '[0:v][1:v]overlay[outv];[outv]subtitles=video.eng.srt' \
                        video-out.mp4
This command adds an overlay to the video stream (in my case I overlaid a full frame over the original video offering some explanations), increases the volume ten times, and adds hard subtitles. P.S. You can see a practical application of the above in this video of the head of one of the electoral commissions in Belarus forcing the members of the staff to manipulate the voting results. I transcribed the video in both Russian and English and encoded the English subtitles into the video.

Gunnar Wolf: DebConf20 talk recorded

Following Antonio Terceiro's post with tips on using ffmpeg for editing video, I will also share a bit of my experience producing the video for my session in DebConf20. I recorded my talk today. As Terceiro mentioned, even though I'm used to speaking in front of my webcam (i.e. for my classes and some smaller conferences I've worked on during the COVID lockdown), it does feel a bit weird to present a live talk to nobody. OK, one step back. Why are we doing this? Because our hardworking friends of the DebConf20 video team recommended so. In order to minimize connectivity issues from the variety of speakers throughout the world, we were requested to pre-record the exposition part of our talks, send them to the video team (deadline: today, 2020-08-16, in case you still owe yours!), and make sure to be present at the end of the talk for the Q&A session. Of course, for a 45 minute talk, I prepared a 30 minute presentation, saving time for said Q&A session. Anyway, I used the excellent OBS Studio live video mixing/editing program (of course, Debian packages are available). This allowed me to set up several predefined views (combinations and layouts of the presentation, webcam, and maybe some other sources) and professionally and elegantly switch between them on the fly. I am still a newbie with OBS, but I surely see it becoming a part of my day to day streaming. Of course, my setup still was obvious (me looking right every now and then to see or control OBS, as I work on a dual-monitor setup). Anyway, the experience was very good, much smoother and faster than what I usually have to do when editing video. But just as I was finishing, thanking the (future) audience and closing the recording, I had to tell the camera: oh, fuck! The button labeled "Start Recording" had not been pressed. So, did I just lose 30 minutes of my life, plus a half-decently delivered talk? No, fortunately not. I had previously been playing with OBS, and configured some things. The button I did press was "Start Streaming". So, my talk (swearing included, of course) was dutifully streamed over to my YouTube channel. It seems up to five people got a sneak preview as to what my DebConf participation will be (of course, I've de-listed the video). I pulled it with the always-handy youtube-dl, edited out my curses using kdenlive, and pushed it to the DebConf video server. Oh, make sure you follow the advice for recording presentations. It has all the relevant advice, the settings you should use, and much more welcome information if you are new to this. So... Next week, DebConf20! Be there or be square!

15 August 2020

Antonio Terceiro: Useful ffmpeg commands for editing video

For DebConf20, we are recommending that speakers pre-record the presentation part of their talks, and we will have live Q&A. We had a smaller online MiniDebConf a couple of months ago, where for instance I had connectivity issues during my talk, so even though it feels too artificial, I guess pre-recording can decrease by a lot the likelihood of a given talk going bad. Paul Gevers and I submitted a short 20 min talk giving an update on autopkgtest, ci.debian.net and friends: "We will provide the latest updates on autopkgtest, autodep8, debci, ci.debian.net, and its integration with the Debian testing migration software, britney." We agreed on a split of the content, each one recorded their part, and I offered to join them together. The logical chaining of the topics is such that we can't just concatenate the recordings, so we needed to interlace our parts. So I set out to do a full video editing job. I have done this before, although in a simpler way, for one of the MiniDebConfs we held in Curitiba. In that case, it was just cutting the noise at the beginning and the end of the recording, and adding beginning and finish screens with sponsor logos etc. The first issue I noticed was that both our recordings had a decent amount of audio noise. To extract the audio track from the videos, I resorted to "How can I extract audio from video with ffmpeg?" on Stack Overflow:
ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac
I then edited the audio with Audacity. I passed a noise reduction filter a couple of times, then a compressor filter to amplify my recording, as Paul's already had a good volume. And those are my most advanced audio editing skills, which I acquired doing my own podcast. I now realize I could have just muted the audio tracks of the original clips and aligned the noise-free audio with them, but I ended up creating new video files with the clean audio. Another member of the Stack Overflow family came to the rescue, in "How to merge audio and video file in ffmpeg". To replace the audio stream, we can do something like this:
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 output.mp4
Paul's recording had a 4:3 aspect ratio, while the requested format is 16:9. This late in the game, there was zero chance I would ask him to redo the recording. So I decided to add black bars on the sides to make it the right aspect ratio when shown full screen. And yet again, the quickest answer I could find came from the Stack Overflow empire: "ffmpeg: pillarbox 4:3 to 16:9":
ffmpeg -i "input43.mkv" -vf "scale=640x480,setsar=1,pad=854:480:107:0" [etc..]
The final editing was done with pitivi, which is what I have used before. I'm a very basic user, but I could do what I needed. It was basically splitting the clips at the right places, inserting the slides as images and aligning them with the video, and making our video appear small in the corner when presenting the slides. P.S.: all the command lines presented here are examples, basically copied from the linked Q&As, and have to be adapted to your actual input and output formats.

7 August 2020

Antonio Terceiro: When community service providers fail

I'm starting a new blog, and instead of going into the technical details on how it's made, or on how I migrated my content from the previous ones, I want to focus on why I did it.

It's been a while since I have written a blog post. I wanted to get back into it, but also wanted to finally self-host my blog/homepage, because I have been let down before. And sadly, I was not let down by a for-profit, privacy-invading corporation, but by a free software organization.

The sad story of wiki.softwarelivre.org: My first blog was hosted in a blog engine written by me, which ran on a TWiki, and later Foswiki, instance previously available at wiki.softwarelivre.org, hosted by ASL.org. I was the one who introduced the tool to the organization in the first place. I had come from a previous, very fruitful experience with the use of wikis for the creation of educational material while in university, which ultimately led me to become a core TWiki, and then Foswiki, developer. In 2004, I had just moved to Porto Alegre, got involved in ASL.org, and there was a demand for a tool like that. 2 years later, I left Porto Alegre, and some time after that, the daily operations of ASL.org as well, when it became clear that it was not really prepared for remote participation. I was still maintaining the wiki software and the OS for quite some years after, until I wasn't anymore. In 2016, the server that hosted it went haywire, and there were no backups. A lot of people and free software groups lost their content forever. My blog was the least important content in there. Some of this can still be reached via the Internet Archive Wayback Machine, but that is only useful for recovering content, not for it to be used by the public.

The announced tragedy of softwarelivre.org: My next blog after that was hosted at softwarelivre.org, a Noosfero instance also hosted by ASL.org. When it was introduced in 2010, this Noosfero instance became responsible for the main softwarelivre.org domain name. This was a bold move by ASL.org, and was a demonstration of trust in a local free software project, led by a local free software cooperative (Colivre). I was a lead developer in the Noosfero project for a long time, and I was also involved in maintaining that server as well. However, for several years there has been little to no investment in maintaining that service. I already expect that it will probably blow up at some point as the wiki did, or that it may just be shut down on purpose.

On the responsibility of organizations: Today, a large part of what most people consider "the internet" is controlled by a handful of corporations. Most popular services on the internet might look like they are gratis (free as in beer), but running those services is definitely not without costs. So when you use services provided by for-profit companies and are not paying for them with money, you are paying with your privacy and attention. Society needs independent organizations to provide alternatives. The market can solve a part of the problem by providing ethical services and charging for them. This is legitimate, and as long as there is transparency about how people's data and communications are handled, there is nothing wrong with it. But that only solves part of the problem, as there will always be people who can't afford to pay, and people and communities who can afford to pay, but would rather rely on a nonprofit.
That's where community-based services, provided by nonprofits, are also important. We should have more of them, not less. So it makes me worry to realize ASL.org left the community in the dark. Losing the wiki wasn't even the first event of its kind, as the listas.softwarelivre.org mailing list server, with years and years of community communications archived in it, broke with no backups in 2012. I do not intend to blame the ASL.org leadership personally; they are all well-meaning and good people. But as an organization, it failed to recognize the importance of this role of service provider. I can even include myself in it: I was a member of the ASL.org board some 15 years ago; I was involved in the deployment of both the wiki and Noosfero, the former as a volunteer and the latter professionally. Yet, I did nothing to plan the maintenance of the infrastructure going forward. When well-meaning organizations fail, people who are not willing to have their data and communications exploited for profit are left to their own devices. I can afford a virtual private server, and have the technical knowledge to import my old content into a static website generator, so I did it. But what about all the people who can't, or don't? Of course, these organizations have to solve the challenge of being sustainable, and of being able to pay professionals to maintain the services that the community relies on. We should be thankful to these organizations, and their leadership needs to recognize the importance of those services, and actively plan for them to be kept alive.

Antonio Terceiro: When community service providers fail

I'm starting a new blog, and instead of going into the technical details on how it's made, or on how I migrated my content from the previous ones, I want to focus on why I did it. It's been a while since I have written a blog post. I wanted to get back into it, but also wanted to finally self-host my blog/homepage because I have been let down before. And sadly, I was not let down by a for-profit, privacy invading corporation, but by a free software organization. The sad story of wiki.softwarelivre.org My first blog that was hosted in a blog engine written by me, which was hosted in a TWiki, and later Foswiki instance previously available at wiki.softwarelivre.org, hosted by ASL.org. I was the one who introduced the tool to the organization in the first place. I had come from a previous, very fruitful experience on the use of wikis for creation of educational material while in university, which ultimately led me to become a core TWiki, and then Foswiki developer. In 2004, I had just moved to Porto Alegre, got involved in ASL.org, and there was a demand for a tool like that. 2 years later, I left Porto Alegre, and some time after that the daily operations of ASL.org when it became clear that it was not really prepared for remote participation. I was still maintaing the wiki software and the OS for quite some years after, until I wasn't anymore. In 2016, the server that hosted it went haywire, and there were no backups. A lot of people and free software groups lost their content forever. My blog was the least important content in there. To mention just a few examples, here are some groups that lost their content in there: Some of this can still be reached via the Internet Archive Wayback Machine, but that is only useful for recovering content, not for it to be used by the public. The announced tragedy of softwarelivre.org My next blog after that was hosted at softwarelivre.org, a Noosfero instance also hosted by ASL.org. When it was introduced in 2010, this Noosfero instance became responsible for the main domain softwarelivre.org name. This was a bold move by ASL.org, and was a demonstration of trust in a local free software project, led by a local free software cooperative (Colivre). I was a lead developer in the Noosfero project for a long time, and I was also involved in maintaining that server as well. However, for several years there is little to no investment in maintaining that service. I already expect that it will probably blow up at some point as the wiki did, or that it may be just shut down on purpose. On the responsibility of organizations Today, a large part of wast mot people consider "the internet" is controlled by a handful of corporations. Most popular services on the internet might look like they are gratis (free as in beer), but running those services is definitely not without costs. So when you use services provided by for-profit companies and are not paying for them with money, you are paying with your privacy and attention. Society needs independent organizations to provide alternatives. The market can solve a part of the problem by providing ethical services and charging for them. This is legitimate, and as long as there is transparency about how peoples' data and communications are handled, there is nothing wrong with it. But that only solves part of the problem, as there will always be people who can't afford to pay, and people and communities who can afford to pay, but would rather rely on a nonprofit. 
That's where community-based services, provided by nonprofits, are also important. We should have more of them, not less. So it worries me to realize that ASL.org left the community in the dark. Losing the wiki wasn't even the first event of its kind: the listas.softwarelivre.org mailing list server, with years and years of community communications archived in it, broke with no backups in 2012. I do not intend to blame the ASL.org leadership personally; they are all well-meaning, good people. But as an organization, it failed to recognize the importance of this role of service provider. I can even include myself in it: I was a member of the ASL.org board some 15 years ago; I was involved in the deployment of both the wiki and Noosfero, the former as a volunteer and the latter professionally. Yet, I did nothing to plan the maintenance of the infrastructure going forward. When well-meaning organizations fail, people who are not willing to have their data and communications exploited for profit are left to their own devices. I can afford a virtual private server, and have the technical knowledge to import my old content into a static website generator, so I did it. But what about all the people who can't, or don't? Of course, these organizations have to solve the challenge of being sustainable, and of being able to pay professionals to maintain the services that the community relies on. We should be thankful to these organizations, and their leadership needs to recognize the importance of those services, and actively plan for them to be kept alive.

15 July 2020

Utkarsh Gupta: GSoC Phase 2

Hello! In early May, I got selected as a Google Summer of Code student for Debian to work on a project to write a linter (an extension to RuboCop).
This tool is mostly meant to help the Debian Ruby team. And that is the best part: I love working in/for/with the Ruby team!
(I've been an active part of the team for 19 months now :)) More details about the project can be found here, on the wiki.
I also have the best mentors I could've possibly asked for: Antonio Terceiro and David Rodríguez. So, the program began on 1st June and I've been working since then. I log my daily updates at gsocwithutkarsh2102.tk.
The blog post for the first part of phase 1 can be found here, and that for the second part of phase 1 can be found here. Whilst the daily updates are available at the above site^, I'll break down the important parts here. Well, the best part yet?
rubocop-packaging is being used by batalert, arbre, rspec-stubbed_env, rspec-pending_for, ISO8601, get_root, gir_ffi, linter, and cucumber-rails. Whilst it has been a lot of fun so far, my plate has started to almost overflow. It seems that I've got a lot of things to work on (and already things that are due!).
From my major project and college stuff to my GSoC project, Debian (E)LTS, and a lot more. Thanks to Antonio for helping me out with other things (which maps back to his sayings in Paris \o/).
Until next time.
:wq for today.

1 July 2020

Utkarsh Gupta: FOSS Activities in June 2020

Here's my (ninth) monthly update about the activities I've done in the F/L/OSS world.

Debian
This was my 16th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/ This month was a little intense: I did a lot of different kinds of things in Debian. Whilst most of my time went into security work, I also sponsored a bunch of packages. Here are the things I did this month:

Uploads and bug fixes:

Other $things:
  • Hosted Ruby team meeting. Logs here.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored ruby-ast for Abraham, libexif for Hugh, djangorestframework-gis and karlseguin-ccache for Nilesh, and twig-extensions, twig-i18n-extension, and mariadb-mysql-kbs for William.

GSoC Phase 1, Part 2! Last month, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project. The first half of the first month is blogged here, titled GSoC Phase 1.
Also, I log daily updates at gsocwithutkarsh2102.tk. Whilst the daily updates are available at the above site^, I'll break down the important parts of the latter half of the first month here:
  • Documented the first cop, GemspecGit, via PR #2 (a sketch of the pattern it flags follows below).
  • Made an initial release, v0.1.0!
  • Spread the word about this tool/library by adding it to the official RuboCop docs.
  • We had our third weekly meeting where we discussed the next steps and the things that are supposed to be done for the next set of cops.
  • Wrote more tests so as to cover different aspects of the GemspecGit cop.
  • Opened PR #4 for the next Cop, RequireRelativeToLib.
  • Introduced rubocop-packaging to the outside world and requested other upstream projects to use it! It is being used by 6 other projects already.
  • Had our fourth weekly meeting where we pair-programmed (and I sucked :P) and figured out a way to make the second cop work.
  • Found a bug, reported at issue #5 and raised PR #6 to fix it.
  • And finally, people loved the library/tool (and its outcome):



    (for those who don't know, @bbatsov is the author of RuboCop, @lienvdsteen is an amazing fullstack engineer at GitLab, and @pboling is the author of some awesome Ruby tools and libraries!)
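
    For anyone who hasn't seen the cop in action: GemspecGit is about gemspecs that build their file list by shelling out to git, which breaks when the gem is built from a release tarball (like the ones Debian uses) instead of a git checkout. Here's a minimal sketch of the pattern it flags; the gem name, paths, and author are made up for illustration:

    # foo.gemspec -- illustration only; gem name and paths are invented
    Gem::Specification.new do |spec|
      spec.name    = "foo"
      spec.version = "0.1.0"
      spec.summary = "An example gem"
      spec.authors = ["Jane Doe"]
      # This is what the cop flags: `git ls-files` only works inside a
      # git checkout, so building from a tarball fails or ships no files.
      spec.files = `git ls-files`.split("\n")
      # A packaging-friendly alternative is a plain glob, e.g.:
      # spec.files = Dir["lib/**/*.rb"] + ["README.md"]
    end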

Debian LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. This was my ninth month as a Debian LTS paid contributor. I was assigned 30.00 hours and worked on the following things:

CVE Fixes and Announcements:

Other LTS Work:
  • Triaged sympa, apache2, qemu, and coturn.
  • Added a fix for CVE-2020-0198/libexif.
  • Requested a CVE for bug#60251 against apache2 and prodded further.
  • Raised issue #947 against sympa, reporting an incomplete patch for CVE-2020-10936. More discussions happened internally.
  • Created the LTS Survey on the self-hosted LimeSurvey instance.
  • Attended the third LTS meeting. Logs here.
  • General discussion on LTS private and public mailing list.

Other(s)
Sometimes it gets hard to categorize work/things into a particular category.
That's why I am writing all of those things inside this one.
This includes two sub-categories, which are as follows.

Personal: This month I did the following things:
  • Wrote and published v0.1.0 of rubocop-packaging on RubyGems!
    It's open-sourced and the repository is here.
    Bug reports and pull requests are welcome! (A minimal setup sketch follows this list.)
  • Integrated a tiny (yet powerful) hack to align images in markdown for my blog.
    Commit here.
  • Released v0.4.0 of batalert on RubyGems!
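
For anyone who wants to try rubocop-packaging in their own project, the setup is the standard one for RuboCop extensions. A minimal sketch, assuming a Gemfile-based project, would be to add the gem:

gem "rubocop-packaging", require: false

and then load it in .rubocop.yml:

require:
  - rubocop-packaging

After that, a plain bundle exec rubocop run should report offences from these cops alongside the built-in ones.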

Open Source: Again, this contains all the things that I couldn't categorize earlier.
Opened several issues and PRs:
Thank you for sticking around for so long :) Until next time.
:wq for today.
