Search Results: "noop"

18 April 2024

Samuel Henrique: Hello World

This is my very first post, just to make sure everything is working as expected. Made with Zola and the Abridge theme.

1 March 2024

Scarlett Gately Moore: Kubuntu: Week 4, Feature Freeze and what comes next.

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release! While we couldn't go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again, I firmly stand by the Kubuntu Council in the decision to stay with the rock solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release it was a nightmare (KDE 4 anyone?). So without further ado, my weekly wrap-up. Kubuntu: Continuing efforts from last week (Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.), it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11 as it is a bug fix release, and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are: What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in Launchpad to easily track our bugs, as right now they are all over the place and hard to track down. Snaps: I have started the MRs to fix our latest 23.08.5 snaps; I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSOC ideas that I really like and will mentor, hopefully we are not too late. Happy with my work? My continued employment depends on you! Please consider a donation: http://kubuntu.org/donate Thank you!

2 February 2024

Scarlett Gately Moore: Some exciting news! Kubuntu: I'm back!!!

It's official, the Kubuntu Council has hired me part time to work on the 24.04 LTS release, preparation for Plasma 6, and to bring life back into the distribution. First I want to thank the Kubuntu Council for this opportunity and I plan a long and successful journey together!!!! My first week (I started midweek): It has been a busy one! Many meet and greets with the team and other interested parties. I had the chance to chat with Mike from Kubuntu Focus and I have to say I am absolutely amazed with the work they have done; if you are in the market for a new laptop, you must check these out!!! https://kfocus.org Or if you want to try before you buy you can download the OS! All they ask is for an e-mail, which is completely reasonable. Hosting isn't free! Besides, you can opt out anytime and they don't share it with anyone. I look forward to working closely with this project. We now have a Kubuntu Team in KDE invent https://invent.kde.org/teams/distribution-kubuntu; if you would like to join us, please don't hesitate to ask! I have started a new wiki and our first page is the ever important bug triaging! It is still a WIP but you can check it out here: https://invent.kde.org/teams/distribution-kubuntu/docs/-/wikis/Bug-Triage-Story-WIP With that said, I have started the Launchpad work to make tracking our bugs easier by subscribing kubuntu-bugs to all our packages and creating proper projects for our packages missing them. We have compiled a list of our various documentation links that need updating, and Rick Timmis is updating kubuntu.org! Aaron Honeycutt has been busy with the Kubuntu Manual https://github.com/kubuntu-team/kubuntu-manual which is in good shape. We just need to improve our developer story. I have been working on the rather massive AppArmor bug https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2046844, testing the fixes from the PPA and writing profiles for the various KDE packages affected (pretty much anything that uses webengine), and making progress there. My next order of business is staging Frameworks 5.114 with guidance from our super awesome Rik Mills, who has been doing most of the heavy lifting in Kubuntu for many years now. So thank you for that, Rik. I will also start on our big transition to the Calamares installer! I do have experience here, so I expect it will be a smooth one. I am so excited for the future of Kubuntu and the exciting things to come! With that said, Kubuntu funding is community donation driven. There is enough to pay me part time for a couple of contracts, but it will run out, and a full-time contract would be super awesome. I am reaching out to anyone enjoying Kubuntu who wants to help with the future of Kubuntu: please consider a donation! We are working on more donation options, but for now you can donate through PayPal at https://kubuntu.org/donate/ Thank you!!!!!

28 January 2024

Niels Thykier: Annotating the Debian packaging directory

In my previous blog post, Providing online reference documentation for debputy, I made a point about how debhelper documentation was suboptimal on account of being static rather than online. The thing is that debhelper is not alone in this problem space, even if it is a major contributor to the number of packaging files you have to know about. If we look at the "competition" here, such as Fedora and Arch Linux, they tend to only have one packaging file. While most Debian people will tell you a long list of cons about having one packaging file (such as Fedora's spec file being 3+ domain specific languages "mashed" into one file), one major advantage is that there is only "the one packaging file". You only need to remember where to find the documentation for one file, which is great when you are running on wetware with limited storage capacity. As a newbie, you can then dedicate less mental resources to tracking multiple files and how they interact, and more effort to understanding the "one file" at hand. I started by asking myself: how can we in Debian make the packaging stack more accessible to newcomers? Spoiler alert, I dug myself into a rabbit hole and ended up somewhere else than where I thought I was going. I started by wanting to scan the debian directory and annotate all files that I could with documentation links. The logic was that if debputy could do that for you, then you could spend more mental effort elsewhere. So I combined debputy's packager provided files detection with a static list of files and I quickly had a good starting point for debputy-based packages.
Adding (non-static) dpkg and debhelper files to the mix
Now, I could have closed the topic here and said "Look, I did debputy files plus a couple of super common files". But I decided to take it a bit further. I added support for handling some dpkg files like packager provided files (such as debian/substvars and debian/symbols). But even then, we all know that debhelper is the big hurdle and a major part of the omission... In another previous blog post (A new Debian package helper: debputy), I made a point about how debputy could list all auxiliary files while debhelper could not. This was exactly the kind of capability I would need, if this feature was to cover debhelper. Now, I also remarked in that blog post that I was not willing to maintain such a list. Also, I may have ranted about static documentation being unhelpful for debhelper as it excludes third-party provided tooling. Fortunately, a recent update to dh_assistant had provided some basic plumbing for loading dh sequences. This meant that getting a list of all relevant commands for a source package was a lot easier than it used to be. Once you have a list of commands, it is possible to check all of them for dh's NOOP PROMISE hints. In these hints, a command can assert it does nothing if a given pkgfile is not present. This led to the new dh_assistant list-guessed-dh-config-files command (a minimal invocation sketch appears after this paragraph) that will list all declared pkgfiles and which helpers listed them. With this combined feature set in place, debputy could call dh_assistant to get a list of pkgfiles, pretend they were packager provided files and annotate those along with the manpage for the relevant debhelper command. The exciting thing about letting debputy resolve the pkgfiles is that debputy will resolve "named" files automatically (debhelper tools will only do so when --name is passed), so it is much more likely to detect named pkgfiles correctly too. Side note: I am going to ignore the elephant in the room for now, which is dh_installsystemd and its package@.service files and the widespread use of debian/foo.service where there is no package called foo. For the latter case, the "proper" name would be debian/pkg.foo.service. With the new dh_assistant feature done and added to debputy, debputy could now detect the ubiquitous debian/install file. Excellent. But less great was that the very common debian/docs file was not. Turns out that dh_installdocs cannot be skipped by dh, so it cannot have NOOP PROMISE hints. Meh... Well, dh_assistant could learn about a new INTROSPECTABLE marker in addition to the NOOP PROMISE and then I could sprinkle that into a few commands. Indeed that worked and meant that debian/postinst (etc.) are now also detectable. At this point, debputy would be able to identify a wide range of debhelper related configuration files in debian/ and at least associate each of them with one or more commands. Nice, surely this would be a good place to stop, right...?
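Here is a minimal sketch of the dh_assistant invocation mentioned above, assuming it is run from the root of an unpacked source package (the directory name is purely illustrative):
# Hypothetical demo: list which packaging files the helpers in this package's
# dh sequence would react to, and which helper declared each of them.
cd ./my-package
dh_assistant list-guessed-dh-config-files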
Adding more metadata to the files
The debhelper detected files only had a command name and a manpage URI to that command. It would be nice if we could contextualize this a bit more. Like, is this file installed into the package as-is (like debian/pam) or is it a file list to be processed (like debian/install)? To make this distinction, I could add the most common debhelper file types to my static list and then merge the result together. Except, I do not want to maintain a full list in debputy. Fortunately, debputy has a quite extensible plugin infrastructure, so I added a new plugin feature to provide this kind of detail and now I can outsource the problem! I split my definitions into two and placed the generic ones in the debputy-documentation plugin and moved the debhelper related ones to debhelper-documentation. Additionally, third-party dh addons could provide their own debputy plugin to add context to their configuration files. So, this gave birth to file categories and configuration features, which describe each file on different fronts. As an example, debian/gbp.conf could be tagged as a maint-config to signal that it is not directly related to the package build but more of a tool or style preference file. On the other hand, debian/install and debian/debputy.manifest would both be tagged as a pkg-helper-config. Files like debian/pam were tagged as ppf-file for packager provided file, and so on. I mentioned configuration features above, and those were added because I have had a beef with debhelper's "standard" configuration file format as read by filearray and filedoublearray. They are often considered simple to understand, but it is hard to know how a tool will actually read the file. As an example, consider the following:
  • Will the debhelper tool use filearray, filedoublearray or none of them to read the file? This topic has about 2 bits of entropy.
  • Will the config file be executed if it is marked executable, assuming you are using the right compat level? If it is executable, does dh-exec allow renaming for this file? This topic adds 1 or 2 bits of entropy depending on the context.
  • Will the config file be subject to glob expansions? This topic sounds like a boolean but is a complicated mess. The globs can be handled by debhelper as it parses the file for you; in this case, the globs are applied to every token. However, this is not what dh_install does. There, the last token on each line is supposed to be a directory and therefore not subject to globs, so dh_install does the globbing itself afterwards, but only on part of the tokens. So that is about 2 bits of entropy more. Actually, it gets worse...
    • If the file is executed, debhelper will refuse to expand globs in the output of the command, which was a deliberate design choice the original debhelper maintainer made when he introduced the feature in debhelper/8.9.12. Except, the dh_install feature interacts with that design choice and does enable glob expansion in the tool output, because it does so manually after its filedoublearray call.
So these "simple" files have way too many combinations of how they can be interpreted. I figured it would be helpful if debputy could highlight these differences, so I added support for those as well. Accordingly, debian/install is tagged with multiple tags including dh-executable-config and dh-glob-after-execute. Then, I added a datatable of these tags, so it would be easy for people to look up what they mean. OK, this seems like a closed deal, right...?
Context, context, context
However, the dh-executable-config tag among others is only applicable in compat 9 or later. It does not seem newbie friendly if you are told that this feature exists, but then have to read in the extended description that it actually does not apply to your package. This problem seems fixable. Thanks to dh_assistant, it is easy to figure out which compat level the package is using, and then tweak some metadata to enable per compat level rules. With that, tags like dh-executable-config only appear for packages using compat 9 or later. Also, debputy should be able to tell you where packager provided files like debian/pam are installed. We already have the logic for packager provided files that debputy supports, and I am already using the debputy engine for detecting the files. If only the plugin provided metadata gave me the install pattern, debputy would be able to tell you where this file goes in the package. Indeed, a bit of tweaking later and setting install-pattern to usr/lib/pam.d/{name}, debputy presented me with the correct install-path, with the package name replacing the name placeholder. Now, I have been using debian/pam as an example, because debian/pam is installed into usr/lib/pam.d in compat 14. But in earlier compat levels, it was installed into etc/pam.d. Well, I already had an infrastructure for doing compat file tags. Off we go to add install-pattern to the compat level infrastructure, and now changing the compat level changes the path. Great. (Bug warning: the value is off-by-one in the current version of debhelper. This is fixed in git.) Also, while we are in this install-pattern business, a number of debhelper config files cause files to be installed into a fixed directory. Like debian/docs, which causes files to be installed into /usr/share/doc/{package}. Surely, we can expand that as well and provide that bit of context too... and done. (Bug warning: the code currently does not account for the main documentation package context.) It is a rather common pattern for people to have debian/foo.in files, because they want custom generation of debian/foo. Which means if you have debian/foo you get "Oh, let me tell you about debian/foo", but then you rename it to debian/foo.in and the result is "debian/foo.in is a total mystery to me!". That is suboptimal, so let's detect those as well, treat them as if they were the original file, but add a tag saying that they are a generated template and which file we suspect they generate. Finally, if you use debputy, almost all of the standard debhelper commands are removed from the sequence, since debputy replaces them. It would be weird if these commands still contributed configuration files when they are not actually going to be invoked. This mostly happened naturally due to the way the underlying dh_assistant command works. However, any file mentioned by the debhelper-documentation plugin would still appear, unfortunately. So off I went to filter the list of known configuration files against the dh_* commands that dh_assistant thought would be used for this package.
Wrapping it up
I was several layers into this and had to dig myself out. I have ended up with a lot of data and metadata, but it was quite difficult for me to arrange the output in a user friendly manner. However, all this data did seem like it would be useful to any tool that wants to understand more about the package. So to get out of the rabbit hole, I have for now wrapped all of this into JSON, and we now have a debputy tool-support annotate-debian-directory command that might be useful for other tools. To try it out, you can run a demo like the one sketched below. Another day, I will figure out how to structure this output so it is useful for non-machine consumers. Suggestions are welcome. :)
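A minimal sketch of such a demo, assuming debputy is installed and you are inside a package's source tree; since the output is JSON, piping it through jq (if you have it) makes it easier to read. The directory name and the use of jq are illustrative assumptions, not part of the original post.
# Hypothetical demo: annotate the debian/ directory of a package checkout.
cd ./my-package
debputy tool-support annotate-debian-directory | jq .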
Limitations of the approach
As a closing remark, I should probably remind people that this feature relies heavily on declarative features. These include:
  • When determining which commands are relevant, using Build-Depends: dh-sequence-foo is much more reliable than configuring it via the Turing complete configuration we call debian/rules.
  • When debhelper commands use NOOP promise hints, dh_assistant can "see" the config files listed in those hints, meaning the file will at least be detected. For the new introspectable hint and the debputy plugin, it is probably better to wait until the dust settles a bit before adding any of those.
You can help yourself and others to better results by using the declarative way rather than using debian/rules, which is the bane of all introspection!

25 December 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

This post describes how I'm using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects, following a simple workflow (a develop branch where changes are added or merged to test new versions, temporary release/#.#.# branches to generate the release candidate versions and a main branch where the final versions are published).

What is semantic-release
It is a Node.js application designed to manage project versioning information on Git repositories using a continuous integration system (in this post we will use gitlab-ci).

How does it work
By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish. If after analyzing the commits the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository. When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually, or we could run extra jobs when building the final assets; if they fail it is not a big issue anyway, numbers are cheap and infinite, so we can skip releases if needed).

Commit messages and versioning
The commit messages must follow a known format; the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it's a lot easier to use when you want to update the MAJOR version. The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
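As an illustration (a sketch with made-up messages, not taken from a real project), this is how conventional commit messages map to version bumps with this preset:
# A patch-level change (bumps PATCH):
git commit -m "fix(parser): handle empty input without crashing"
# A new feature (bumps MINOR):
git commit -m "feat(api): add pagination to the list endpoint"
# A breaking change, marked with '!' after the type/scope (bumps MAJOR):
git commit -m "refactor(api)!: drop the deprecated v1 endpoints"
# The same MAJOR bump can be requested with a BREAKING CHANGE: footer:
git commit -m "feat(api): replace the auth token format" \
  -m "BREAKING CHANGE: previously issued tokens are no longer accepted"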
The system supports three types of branches: release, maintenance and pre-release, but for now I'm not using maintenance ones. The branches I use and their types are:
  • main as release branch (final versions are published from there)
  • develop as a pre-release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre-release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged into main they are deleted)
On the release branch (main) the version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.
On the pre-release branches (develop and release/#.#.#) the version and pre-release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to get right what the next version will be). The version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
  4. The pre-release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2, and so on).

How do we manage its configuration
Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types. For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use the .releaserc file instead. As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it
As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:
Dockerfile
# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2
FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
  apk upgrade &&\
  apk add curl git jq openssh-keygen yq zip &&\
  npm install --location=global\
    conventional-changelog-conventionalcommits@6.1.0\
    @qiwi/multi-semantic-release@7.0.0\
    semantic-release@21.0.7\
    @semantic-release/changelog@6.0.3\
    semantic-release-export-data@1.0.1\
    @semantic-release/git@10.0.1\
    @semantic-release/gitlab@9.5.1\
    @semantic-release/release-notes-generator@11.0.4\
    semantic-release-replace-plugin@1.2.7\
    semver@7.5.4\
  &&\
  rm -rf /var/cache/apk/*
CMD ["/bin/sh"]
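To make the image available to the pipelines we build and push it to a container registry; a minimal sketch, where the registry path is a made-up placeholder for whatever URI you later expose as SEMANTIC_RELEASE_IMAGE:
# Build the image from the Dockerfile above and publish it.
docker build -t registry.example.com/ci/semantic-release:latest .
docker push registry.example.com/ci/semantic-release:latest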

How and when is it executed
The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above and the sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:
  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with an SSH key.

The sr_gen_releaserc_json function
The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json()
{
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{ p }")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/{ p }")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
Almost all the variables used on the function are defined by gitlab except ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project. If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
 
{
  "plugins": [
    {{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
    {{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
    {{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
    {{- end }}
    "@semantic-release/release-notes-generator",
    {{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
    {{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "{{ .gitlab_url }}", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
    {{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
    {{- end }}
    "main"
  ]
}
The values used to process the template are defined on a file built on the fly (releaserc_values.yaml) that includes the following keys and values:
  • branch: the name of the current branch
  • gitlab_url: the URL of the gitlab server (the value is taken from the CI_SERVER_URL variable)
  • rc_branch: the name of the current rc branch; we only set the value if we are processing one, because semantic-release only allows one branch to match the rc prefix; if we use a wildcard (i.e. release/*) but the users keep more than one release/#.#.# branch open at the same time, the calls to semantic-release will fail for sure.
  • sr_local_plugin: the path to the local plugin we use (shown later)
The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"${nextRelease.version}\""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field on the relevant files of the project (in our case the package.json file) and the assets section includes the files that will be committed to the repository when the release is published (looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because if we update the file on other branches it creates a merge nightmare, and we are only interested in it for released versions anyway). The local plugin adds code to rename the package.json_disabled file to package.json if present and prints the last and next versions on the logs with a format that can be easily parsed using sed:
local-plugin.cjs
// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}
function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}
function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}
module.exports = {
  verifyConditions,
  analyzeCommits,
  verifyRelease
};
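As an example of the sed parsing mentioned above, one could extract the logged versions from a saved copy of the job output; this is only a sketch, and job.log is a hypothetical file name:
# Extract the versions logged by the local plugin from a saved job log.
LAST_VERSION="$(sed -ne 's/^.*analyzeCommits: LAST_VERSION=//p' job.log)"
NEXT_VERSION="$(sed -ne 's/^.*verifyRelease: NEXT_VERSION=//p' job.log)"
echo "last='${LAST_VERSION:-none}' next='${NEXT_VERSION:-none}'"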

The git_push_setup function
The code for the git_push_setup function is the following:
git_push_setup()
{
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set on the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects where we are going to use this function. The SR_USER_NAME and SR_USER_EMAIL variables can be defined on a common file or on the CI/CD variables section of the project or group we want to work with, and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly); git is configured to use it if the variable is not empty.
Warning: Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).
Warning: The semantic-release user has to be able to push to all the projects on those protected branches; it is a good idea to create a dedicated user and add it as a MAINTAINER for the projects we want (the MAINTAINERS need to be able to push to the branches), or, if you are using a GitLab instance with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.

The semantic-release command
Once we have the .releaserc file and the git configuration ready we run the semantic-release command. If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed when using the configuration we have generated):
  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined on the configuration (in our example updates the version field on the package.json file).
  4. Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed on the assets key have changed and uses the commit message we have defined, replacing the variables for their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin after tagging it also creates a release on the project with the tag name and the release notes.
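If you want to preview what the tool would do without actually publishing anything, semantic-release supports a dry-run mode; a sketch, assuming the generated .releaserc.json and the required tokens are already in place in the environment:
# Compute the next version and release notes without pushing commits,
# tags or releases.
semantic-release --dry-run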

Notes about the git workflows and merges between branches
It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish; that has two important implications:
  1. On pre-release branches we need to have the commit that includes the tag with the released version; if we don't have it the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging a branch into another one; if we do that we will lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, etc.) we miss all the messages that would otherwise go to the CHANGELOG.md file.
To make sure that we have the right commits on the pre-release branches we should merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it to develop. The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful. To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we can't push directly to develop we can create an MR by pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.

15 December 2023

Scarlett Gately Moore: KDE: Snaps, KDEneon, Debian and my future.

First I want to thank KDE for this wonderful write-up on LinkedIn. https://www.linkedin.com/posts/kde_akademy-2023-over-a-million-reasons-why-activity-7139965489153208320-PNem?utm_source=share&utm_medium=member_desktop It made my heart explode with happiness as I prepped for my interview on Monday. I didn't get the job (just the usual "we were very impressed with your experience, but we went with another candidate"). I think the cosmos is determined for me to hold out for the project; even though it is only part time work, it is work, and if I have learned nothing else this year, I have learned how to live on a very tight budget! It turns out many of the things we think we need, we don't. So with the hard work of Kevin Ottens (thank you!!!!), this should be finalized after the first of the year. This plan also allows me time for KDEneon and Debian work. I am happy with it and look forward to the coming year and the exciting things to come. My holiday plans are to help the Debian KDE team with the KF6 packaging, on-going KDEneon efforts, and to continue to make sure the snaps Qt6 transition is as painless as possible. I will also be working on the qt6 kde-neon extension. In closing, despite my terrible luck with job-hunting, I am in an amazing community and I am truly grateful to each and every one of you. It has been a great year, I have added many new things to my skillset and I look forward to many more next year. As usual, it is that time of month where I have not raised enough to pay my internet bill (the phone company taking an extra 200.00 didn't help). If you can spare any change (any amount helps!) please consider a donation https://gofund.me/b74e4c6f Thank you! I hope everyone has a wonderful <insert your holiday here>!!!!! ~ Scarlett

23 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using Rule Templates

This post describes how to define and use rule templates with semantic names using extends or !reference tags, how to define manual jobs using the same templates and how to use gitlab-ci inputs as macros to give names to regular expressions used by rules.

Basic rule templates
I keep my templates in a rules.yml file stored on a common repository used from different projects, as I mentioned on my previous post, but they can be defined anywhere; the important thing is that the files that need them include their definition somehow. The first version of my rules.yml file was as follows:
.rules_common:
  # Common rules; we include them from others instead of forcing a workflow
  rules:
    # Disable branch pipelines while there is an open merge request from it
    - if: >-
        $CI_COMMIT_BRANCH &&
        $CI_OPEN_MERGE_REQUESTS &&
        $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never
.rules_default:
  # Default rules, we need to add the when: on_success to make things work
  rules:
    - !reference [.rules_common, rules]
    - when: on_success
The main idea is that .rules_common defines a rule section to disable jobs as we can do on a workflow definition; in our case common rules only have if rules that apply to all jobs and are used to disable them. The example includes one that avoids creating duplicated jobs when we push to a branch that is the source of an open MR, as explained here. To use the rules in a job we have two options: use the extends keyword (we do that when we want to use the rule as is) or declare a rules section and add a !reference to the template we want to use, as described here (we do that when we want to add additional rules to disable a job before evaluating the template conditions). As an example, with the following definitions both jobs use the same rules:
job_1:
  extends:
    - .rules_default
  [...]
job_2:
  rules:
    - !reference [.rules_default, rules]
  [...]

Manual jobs and rule templates
To make the jobs manual we have two options: create a version of the rule template that includes when: manual and defines whether we want the job to be optional or not (allow_failure: true makes the job optional; if we don't add that to the rule the job is blocking), or add the when: manual and the allow_failure value to the job itself (when we work at the job level the default value of allow_failure for when: manual is true, so the job is optional by default and we have to add an explicit allow_failure: false to make it blocking). The following example shows how we define blocking or optional manual jobs using rules with when conditions:
.rules_default_manual_blocking:
  # Default rules for blocking manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual
      # allow_failure: false is implicit
.rules_default_manual_optional:
  # Default rules for optional manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual
      allow_failure: true
manual_blocking_job:
  extends:
    - .rules_default_manual_blocking
  [...]
manual_optional_job:
  extends:
    - .rules_default_manual_optional
  [...]
The problem here is that we have to create new versions of the same rule template to add the conditions, but we can avoid it using the keywords at the job level with the original rules to get the same effect; the following definitions create jobs equivalent to the ones defined earlier without creating additional templates:
manual_blocking_job:
  extends:
    - .rules_default
  when: manual
  allow_failure: false
  [...]
manual_optional_job:
  extends:
    - .rules_default
  when: manual
  # allow_failure: true is implicit
  [...]
As you can imagine, that is my preferred way of doing it, as it keeps the rules.yml file smaller and I can see that the job is manual right in its definition.

Rules with allow_failure, changes, exists, needs or variables
Unluckily for us, for now there is no way to avoid creating additional templates, as we did for the when: manual case, when a rule is similar to an existing one but adds changes, exists, needs or variables to it. So, for now, if a rule needs to add any of those fields we have to copy the original rule and add the keyword section. Some notes, though:
  • we only need to add allow_failure if we want to change its value for a given condition; in other cases we can set the value at the job level.
  • if we are adding changes to the rule it is important to make sure that they are going to be evaluated as explained here.
  • when we add a needs value to a rule for a specific condition and it matches, it replaces the job's needs section; when using templates I would use two different job names with different conditions instead of adding a needs on a single job.

Defining rule templates with semantic names
I started to use rule templates to avoid repetition when defining jobs that needed the same rules, and soon I noticed that giving them names with a semantic meaning made them easier to use and understand (we provide a name that tells us when we are going to execute the job, while the details of the variable names or values used on the rules are an implementation detail of the templates). We are not going to define real jobs on this post, but as an example we are going to define a set of rules that can be useful if we plan to follow a scaled trunk based development workflow when developing, that is, we are going to put the releasable code on the main branch and use short-lived branches to test and complete changes before pushing things to main. Using this approach we can define an initial set of rule templates with semantic names:
.rules_mr_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_mr_or_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
    - if: >-
        $CI_COMMIT_BRANCH == 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH == 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH != 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
         $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME != 'main'
         &&
         $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/
.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/
With those names it is clear when a job is going to be executed and when using the templates on real jobs we can add additional restrictions and make the execution manual if needed as described earlier.

Using inputs as macros
On the previous rules we have used a regular expression to identify the release tag format and assumed that the general branches are the ones with a name different than main; if we want to force a format for those branch names we can replace the condition != 'main' by a regex comparison (=~ if we look for matches, !~ if we want to define valid branch names removing the invalid ones). When testing the new gitlab-ci inputs my colleague Jorge noticed that if you keep their default value they basically work as macros. The variables declared as inputs can't hold YAML values; the truth is that their value is always a string that is replaced by the value assigned to them when including the file (if given) or by their default value, if defined. If you don't assign a value to an input variable when including the file that declares it, its occurrences are replaced by its default value, making them work basically as macros; this is useful for us when working with strings that can't be managed as variables, like the regular expressions used inside if conditions. With those two ideas we can add the following prefix to the rules.yml file, defining inputs for both regular expressions, and replace the rules that can use them with the ones shown here:
spec:
  inputs:
    # Regular expression for branches; the prefix matches the type of changes
    # we plan to work on inside the branch (we use conventional commit types as
    # the branch prefix)
    branch_regex:
      default: '/^(build|ci|chore|docs|feat|fix|perf|refactor|style|test)\/.+$/'
    # Regular expression for tags
    release_tag_regex:
      default: '/^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/'
---
[...]
.rules_push_to_changes_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH =~ $[[ inputs.branch_regex ]]
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
         $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ $[[ inputs.branch_regex ]]
         &&
         $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ $[[ inputs.release_tag_regex ]]
.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ $[[ inputs.release_tag_regex ]]

Creating rules reusing existing ones
I'm going to finish this post with a comment about how I avoid defining extra rule templates in some common cases. The idea is simple: we can use !reference tags to fine tune rules when we need to add conditions to disable them, simply adding conditions with when: never before referencing the template. As an example, in some projects I'm using different job definitions depending on the DEPLOY_ENVIRONMENT value to make the job manual or automatic; as we just said, we can define different jobs referencing the same rule, adding a condition to check if the environment is the one we are interested in:
deploy_job_auto:
  rules:
    # Only deploy automatically if the environment is 'dev' by skipping this job
    # for other values of the DEPLOY_ENVIRONMENT variable
    - if: $DEPLOY_ENVIRONMENT != "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  [...]
deploy_job_manually:
  rules:
    # Disable this job if the environment is 'dev'
    - if: $DEPLOY_ENVIRONMENT == "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  when: manual
  # Change this to 'false' to make the deployment job blocking
  allow_failure: true
  [...]
If you think about it, the idea of adding negative conditions is what we do with the .rules_common template; we add conditions to disable the job before evaluating the real rules. The difference in that case is that we reference them at the beginning because we want those negative conditions on all jobs, and that is also why we have a .rules_default template with a when: on_success for the jobs that only need to respect the default workflow (we need that last condition to make sure they are executed if the negative rules don't match).

16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description
When a .gitlab-ci.yml file includes files from a different repository its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script on the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions
We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts
One way to dodge the issue is to generate the non YAML files from scripts included on the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script on a YAML file, and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented on the previous section and we want to call it from the same PATH of the main project (on the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed on the same place, i.e. when using an ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet with this:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented), and to make changes we have to decode and re-encode the file manually, making it harder to maintain (see the small helper sketch after this example). With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the generated file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'
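As a side note on that maintenance burden, the decode/re-encode cycle can be scripted; a sketch, assuming a base64 that supports the -w0 option (GNU coreutils) to avoid line wrapping:
# Decode the embedded payload so we can edit it...
echo 'IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK' \
  | base64 -d >dumb.sh
# ...edit dumb.sh, then re-encode it and paste the output back into the YAML.
base64 -w0 dumb.sh; echo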

Download the files from the common repository
As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets on a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory, and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "$CI_TMPL_ARCHIVE_URL?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "$CI_TMPL_ASSETS_DIR" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project; in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, which is needed to reference the project by name; if we don't like that approach we can replace the URL encoded path by the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL to use the gitlab API to download files from a repository, we will add the arguments path and sha to select which sub path to download and from which commit, branch or tag (we will explain later why we use the CI_TMPL_REF, for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
And uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: token that includes the read_api scope for the common project, we need it because the tokens created by the CI/CD pipelines of other projects can t be used to access the api of the common one.We define the variable on the gitlab CI/CD variables section to be able to change it if needed (i.e. if it expires)
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need that to make sure we are using the right version of the files, i.e. when testing we will use a branch and on production pipelines we can use fixed tags to make sure that the assets don t change between executions unless we change the reference).We will set the value on the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent.
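Before wiring this into a pipeline it can be useful to check from a shell that the token and the archive URL behave as expected. This is a minimal sketch, not part of the original setup; gitlab.example.com is a placeholder for the real GitLab host and the token value has to be exported beforehand:
# Hypothetical manual check of the archive download (adjust host, project
# path, token and reference to match your own GitLab instance).
GITLAB_API="https://gitlab.example.com/api/v4"
ARCHIVE_URL="${GITLAB_API}/projects/common%2Fci-templates/repository/archive"
curl --silent --fail --header "PRIVATE-TOKEN: ${CI_TMPL_READ_TOKEN}" \
  "${ARCHIVE_URL}?path=assets&sha=main" -o /tmp/assets.tar.gz \
  && tar -tzf /tmp/assets.tar.gz | head
If the token lacks the read_api scope or has expired, the call returns an error instead of the tarball, which is the first thing to rule out when the bootstrap fragment fails.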
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following gitlab ci configuration:
gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places). The reference we use is quite important for the reproducibility of the jobs: if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo the way everything works is almost the same as having the pipelines directly in our repo). But while developing pipelines, using branches as references is a really useful option; it allows us to re-run the jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files: if we change a job or a pipeline in the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline, either by a new action against the repository or by executing the pipeline manually.
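As an illustration of the tagging approach (a minimal sketch; the tag name assets-v1.0.0 is just an example, not something defined in the original setup), freezing the templates and assets could look like this:
# On a checkout of the common/ci-templates repository
git tag -a assets-v1.0.0 -m "Freeze CI templates and assets for production"
git push origin assets-v1.0.0
On the remote project we would then change the anchor in .gitlab-ci.yml from ref: &ciTmplRef 'main' to ref: &ciTmplRef 'assets-v1.0.0', which updates both the include and the CI_TMPL_REF variable in one place.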

Conclusion
For now I'm using the second solution and as it is working well my guess is that I'll keep using that approach unless GitLab itself provides a better or simpler way of doing the same thing.

18 July 2023

Sergio Talens-Oliag: Testing cilium with k3d and kind

This post describes how to deploy cilium (and hubble) using docker on a Linux system with k3d or kind to test it as CNI and Service Mesh. I wrote some scripts to do a local installation and evaluate cilium to use it at work (in fact we are using cilium on an EKS cluster now), but I thought it would be a good idea to share my original scripts in this blog just in case they are useful to somebody, at least for playing a little with the technology.

Installation
For each platform we are going to deploy two clusters on the same docker network; I've chosen this model because it allows the containers to see the addresses managed by metallb from both clusters (the idea is to use those addresses for load balancers and treat them as if they were public). The installation(s) use cilium as CNI, metallb for BGP (I tested the cilium options, but I wasn't able to configure them right) and nginx as the ingress controller (again, I tried to use cilium but something didn't work either). To be able to use the previous components some default options have been disabled on k3d and kind and, in the case of k3d, a lot of k3s options (traefik, servicelb, kubeproxy, network-policy, ...) have also been disabled to avoid conflicts. To use the scripts we need to install cilium, docker, helm, hubble, k3d, kind, kubectl and tmpl on our system. After cloning the repository, the sbin/tools.sh script can be used to do that on a linux-amd64 system:
$ git clone https://gitea.mixinet.net/blogops/cilium-docker.git
$ cd cilium-docker
$ ./sbin/tools.sh apps
Once we have the tools, to install everything on k3d (for kind replace k3d by kind) we can use the sbin/cilium-install.sh script as follows:
$ # Deploy first k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 1 full
[...]
$ # Deploy second k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 2 full
[...]
$ # The 2nd cluster-mesh installation connects the clusters
If we run the command cilium status after the installation we should get an output similar to the one seen on the following screenshot:
cilium status
The installation script uses the following templates:
Once we have finished our tests we can remove the installation using the sbin/cilium-remove.sh script.

Some notes about the configuration
  • As noted in the documentation, the cilium deployment needs to mount the bpffs on /sys/fs/bpf and cgroupv2 on /run/cilium/cgroupv2; that is done automatically on kind, but fails on k3d because the image does not include bash (see this issue). To fix it we mount a script on all the k3d containers that is executed each time they are started (the script is mounted as /bin/k3d-entrypoint-cilium.sh because the /bin/k3d-entrypoint.sh script executes the scripts that follow the pattern /bin/k3d-entrypoint-*.sh before launching the k3s daemon). The source code of the script is available here.
  • When testing the multi-cluster deployment with k3d we have found issues with open files; they look related to inotify (see this page on the kind documentation). Adding the following to the /etc/sysctl.conf file fixed the issue (a quick way to apply the new values is shown in the sketch after this list):
    # fix inotify issues with docker & k3d
    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 512
  • Although the deployment theoretically supports it, we are not using cilium as the cluster ingress yet (it did not work, so it is no longer enabled) and we are also ignoring the gateway-api for now.
  • The documentation uses the cilium cli to do all the installations, but I noticed that following that route the current version does not work right with hubble (it messes up the TLS support; there are some notes about the problems on this cilium issue), so we are deploying with helm right now. The problem with the helm approach is that there is no official documentation on how to install the cluster mesh with it (there is a request for documentation here), so we are using the cilium cli for that step for now and it looks like it does not break the hubble configuration.
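As mentioned in the inotify note above, once the two fs.inotify lines are in /etc/sysctl.conf they can be applied without rebooting; a small sketch:
# Reload /etc/sysctl.conf and verify that the new limits are active
sudo sysctl -p
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances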

Tests
To test cilium we have used some scripts & additional config files that are available in the test subdirectory of the repository:
  • cilium-connectivity.sh: a script that runs the cilium connectivity test for one cluster or in multi cluster mode (for mesh testing). If we export the variable HUBBLE_PF=true the script executes the command cilium hubble port-forward before launching the tests.
  • http-sw.sh: Simple tests for cilium policies from the cilium demo; the script deploys the Star Wars demo application and allows us to add the L3/L4 policy or the L3/L4/L7 policy, test the connectivity and view the policies.
  • ingress-basic.sh: This test is for checking the ingress controller, it is prepared to work against cilium and nginx, but as explained before the use of cilium as an ingress controller is not working as expected, so the idea is to call it with nginx always as the first argument for now.
  • mesh-test.sh: Tool to deploy a global service on two clusters, change the service affinity to local or remote, enable or disable if the service is shared and test how the tools respond.

Running the tests
The cilium-connectivity.sh script executes the standard cilium tests:
$ ./test/cilium-connectivity.sh k3d 12
   Monitor aggregation detected, will skip some flow validation steps
  [k3d-cilium1] Creating namespace cilium-test for connectivity check...
  [k3d-cilium2] Creating namespace cilium-test for connectivity check...
[...]
  All 33 tests (248 actions) successful, 2 tests skipped, 0 scenarios skipped.
To test how the cilium policies work use the http-sw.sh script:
kubectx k3d-cilium2 # (just in case)
# Create test namespace and services
./test/http-sw.sh create
# Test without policies (exhaust-port fails by design)
./test/http-sw.sh test
# Create and view L3/L4 CiliumNetworkPolicy
./test/http-sw.sh policy-l34
# Test policy (no access from xwing, exhaust-port fails)
./test/http-sw.sh test
# Create and view L7 CiliumNetworkPolicy
./test/http-sw.sh policy-l7
# Test policy (no access from xwing, exhaust-port returns 403)
./test/http-sw.sh test
# Delete http-sw test
./test/http-sw.sh delete
And to see how the service mesh works use the mesh-test.sh script:
# Create services on both clusters and test
./test/mesh-test.sh k3d create
./test/mesh-test.sh k3d test
# Disable service sharing from cluster 1 and test
./test/mesh-test.sh k3d svc-shared-false
./test/mesh-test.sh k3d test
# Restore sharing, set local affinity and test
./test/mesh-test.sh k3d svc-shared-default
./test/mesh-test.sh k3d svc-affinity-local
./test/mesh-test.sh k3d test
# Delete deployment from cluster 1 and test
./test/mesh-test.sh k3d delete-deployment
./test/mesh-test.sh k3d test
# Delete test
./test/mesh-test.sh k3d delete

24 May 2023

Scarlett Gately Moore: KDE Gear 23.04.1 Snaps Released! Snapcraft updates and more.

Kweather SnapKweather Snap
I have completed the 23.04.1 KDE Gear applications release for snaps! With this release comes several new KDE Snaps! Plus many long outdated / broken snaps are updated and or fixed! Check them all out here: https://snapcraft.io/search?q=KDE I have been busy triaging and squashing bugs in regards to snaps on https://bugs.kde.org Snapcraft: Updated the kde-neon extension for the newest content pack. Made a core22 qmake plugin with tests PR https://github.com/canonical/craft-parts/pull/463 Future work: Top on my TO-DO list is still PIM. There are many parts, making it more complex. I am working on it though. QT6/KF6 is making it s way to the top of the list as well. KDE Neon has made significant progress here, so I am in early stages of updating our build scripts to generate our qt6/kf6 content snap. Thanks for stopping by! https://gofund.me/2c7b1808 All proceeds go to improving my ability to work. Thanks for your consideration!

23 May 2023

Craig Small: Devices with cgroup v2

Docker and other container systems by default restrict access to devices on the host. They used to do this with the device controller in cgroup v1; however, the second version of cgroups removed this controller and the man page says:
Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst
That is just awesome: nothing to see here, go look at the BPF documents if you have cgroup v2. With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/XX/devices.allow and you were done! The kernel documentation is not very helpful; sure, it's something in BPF and has something to do with the cgroup BPF specifically, but what does that mean? There doesn't seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:
  1. Find what cgroup the programs running in the container belong to
  2. Find what is the eBPF program ID that is attached to our container cgroup
  3. Dump the eBPF program to a text file
  4. Try to interpret the eBPF syntax
The last step is by far the most difficult.

Finding a container's cgroup
All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just guess from the short ID. I usually do the second.
$ docker ps 
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon
So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ Notice that the last directory is docker- followed by the long ID (which starts with the short ID). If you're not sure of the exact path: /sys/fs/cgroup is the cgroup v2 mount point (which can be found with mount -t cgroup2) and the rest is the actual cgroup name. If you know the process running in the container then the cgroup column in ps will show you.
$ ps -o pid,comm,cgroup 140064
    PID COMMAND         CGROUP
 140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope
Either way, you will have your cgroup path.
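If you would rather not guess the long ID by hand, docker inspect can print it directly; the following sketch (using the container from the example above and assuming the systemd cgroup driver, as in the path shown earlier) builds the cgroup path in one go:
# Expand the short ID to the long one and build the cgroup v2 path
LONG_ID=$(docker inspect --format '{{.Id}}' a3c53d8aaec2)
echo "/sys/fs/cgroup/system.slice/docker-${LONG_ID}.scope/"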

eBPF programs and cgroups
Next we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use the bpftool. One thing that threw me for a long time is that when the tool talks about a program or a PROG ID it is talking about the eBPF programs, not your processes! With that out of the way, let's find the prog ID.
$ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
ID       AttachType      AttachFlags     Name
90       cgroup_device   multi
Our cgroup is attached to the eBPF prog with ID of 90 and the type of program is cgroup_device.

Dumping the eBPF program
Next, we need to get the actual code that is run every time a process running in the cgroup tries to access a device. The program will take some parameters and will return either a 1 for yes you are allowed or a zero for permission denied. Don't use the file option as it dumps the program in binary format; the text version is hard enough to understand.
sudo bpftool prog dump xlated id 90 > myebpf.txt
Congratulations! You now have the eBPF program in a human-readable (?) format.

Interpreting the eBPF program
The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You'll see that the program splits the type (lower 16 bits) and the access (upper 16 bits) out of the first word and then does comparisons on those values. The eBPF has something similar:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
What we find is that once we get past the first few lines filtering the given value that the comparison lines have:
  • r2 is the device type, 1 is block, 2 is character.
  • r3 is the device access; it's used with r1 for comparisons after masking the relevant bits. mknod, read and write are 1, 2 and 4 respectively.
  • r4 is the major number
  • r5 is the minor number
For even a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, you'll often find that the lines for the command options you added are near the end, which makes it easier. For example:
  63: (55) if r2 != 0x2 goto pc+4
  64: (55) if r4 != 0x64 goto pc+3
  65: (55) if r5 != 0x2a goto pc+2
  66: (b4) w0 = 1
  67: (95) exit
This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section; otherwise return with 1 (permit). We have all access modes permitted so it doesn't check for them. The previous example has all permissions for our device with id 100:42; what about if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:
  63: (55) if r2 != 0x2 goto pc+7  
  64: (bc) w1 = w3
  65: (54) w1 &= 2
  66: (5d) if r1 != r3 goto pc+4
  67: (55) if r4 != 0x64 goto pc+3
  68: (55) if r5 != 0x2a goto pc+2
  69: (b4) w0 = 1
  70: (95) exit
The code is almost the same, but we are checking that w3 only has the read bit (value 2) set, effectively checking for X == X & 2. It's a cautious approach, meaning no access at all still passes but multiple bits set will fail.

The device option
docker run allows you to specify device files you want to grant your containers access to with the --device flag. This flag actually does two things. The first is to create the device file in the container's /dev directory, effectively doing a mknod command. The second thing is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.
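To tie this back to the snippets above, here is roughly how such containers could have been started; the image name is the one from the earlier docker ps output and the rules match the 100:42 device used in the examples:
# Full access (read, write, mknod) to char device 100:42 -> first eBPF snippet
docker run -it --device-cgroup-rule='c 100:42 rwm' debian:minicom /bin/bash
# Read-only access to the same device -> second eBPF snippet
docker run -it --device-cgroup-rule='c 100:42 r' debian:minicom /bin/bash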

What about privileged?
So we have used the direct cgroup options here, but what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it creates the device files as well, but what does the filtering look like? We still have a cgroup, but the eBPF program is greatly simplified; here it is in full:
   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
   6: (b4) w0 = 1
   7: (95) exit
There are the usual setup lines and then: return 1. Everyone is a winner for all devices and access types!

14 April 2023

Scarlett Gately Moore: KDE Snaps! Oh so many released! Debian update

KDE Elisia snapKDE Elisia snap
It has been another very busy couple of weeks! I have released many snaps, fixed a few bugs, started the documentation, and have many snaps in progress. So without further ado here is my status update for KDE snaps: Fixed two very important Krita bugs: https://bugs.kde.org/show_bug.cgi?id=465307 Fixed several other minor bugs and triaged all the snap bugs I could find on bugs.kde.org. Remember to assign snap bugs to me! It makes my life easier and I see them quicker. I have started some documentation for developers that want to assist in getting their KDE app snapped here: https://invent.kde.org/teams/neon/-/wikis/Snaps I have released around 40 snaps! This would make for a very unruly blog post to list, so I have created a list here of my releases: https://invent.kde.org/packaging/snapcraft-kde-applications/-/issues/30 I am happy to report there are several first time releases including a very kool music app called Elisia ! I have many more snaps in various stages and many more to come. My goal is to get them all by the end of this 3 months ( 1.5 left to get it done ), a lofty goal I know. What can I say, I am ambitious. Now on to Debian. I am excited to announce that soon I will become contributor status for Freexian ( and hopefully, after time, a collaborator ). I am waiting for my mentor to upload my first security update(s) for Buster LTS. This will be a new type of contributions for me, but I am confident I will do well and I have new hope that I have found my new forever home. I will be very useful in many areas and I look forward to growing with the company. In closing, if you can spare some change we have a few unexpected expenses ( doesn t everyone?! ) Anything helps, thank you! https://gofund.me/2c7b1808

31 March 2023

Scarlett Gately Moore: KDE Snaps! Many new releases, more to come.

I have been extremely busy the last 2 weeks churning out KDE snaps! All of the have been tested and released on AMD64 and Arm64 architectures. If you run into any problems please file bugs @ http://bugs.kde.org and feel free to assign me. Thanks!
KDE Krita snapKDE Krita snap
Krita Version 5.1.5 https://apps.kde.org/krita/
KDE Parley snapKDE Parley snap
Parley Version 22.12.3 https://apps.kde.org/parley/
KDE Kate snapKDE Kate snap
Kate Version 22.12.3 https://apps.kde.org/kate/
KDE Okular snapKDE Okular snap
Okular Version 22.12.3 https://apps.kde.org/okular/
KDE Haruna snapKDE Haruna snap
Haruna Version 0.10.3 https://apps.kde.org/haruna/
KDE Granatier snapKDE Granatier snap
Granatier Version 22.12.3 https://apps.kde.org/granatier/
KDE Gwenview snapKDE Gwenview snap
Gwenview Version 22.12.3 https://apps.kde.org/gwenview/
KDE Gcompris snapKDE Gcompris snap
GCompris-qt Version 3.2 https://apps.kde.org/gcompris/
KDE Bomber snapKDE Bomber snap
Bomber Version 22.12.3 https://apps.kde.org/bomber/
KDE Falkon snapKDE Falkon snap
Falkon Version 22.12.3 https://apps.kde.org/falkon/
KDE Ark snapKDE Ark snap
Ark Version 22.12.3 https://apps.kde.org/ark/
KDE Blinken snapKDE Blinken snap
Blinken Version 22.12.3 https://apps.kde.org/blinken/
KDE Bovo snapKDE Bovo Snap
Bovo Version 22.12.3 https://apps.kde.org/bovo/
KDE Atikulate snapKDE Atikulate snap
Artikulate Version 22.12.3 https://apps.kde.org/artikulate/ There are many more snaps coming your way! As usual, I hate to ask, but if you can spare anything we appreciate it, even if they are just kind words, thank you! https://gofund.me/d73342c3

28 February 2023

Shirish Agarwal: Cutting off body parts and Lenovo

I would suggest that this blog post would be slightly unpleasant and I do wish that there was a way, a standardized way just like movies where you can put General, 14+, 16+, Adult and whatnot. so people could share without getting into trouble. I would suggest to consider this blog as for somewhat mature and perhaps disturbing.

Cutting off body parts From last couple of months or so we have been getting daily reports of either men or women killed and then being chopped into pieces and this is being normalized . During my growing up years, the only such case I remember was the 1995 Tandoor case and it jolted the conscience of the nation. But it seems lot of water has passe under the bridge. as no one seems to be shocked anymore  Also shocking are the number of heart attacks that young people are getting. Dunno the reason for either. Just saw this yesterday, The first thing to my mind was, at least she wasn t chopped. It was only latter I realized that the younger sister may have wanted to educate herself or have some other drreams, but because of some evil customs had to give hand in marriage. No outrage here for anything, not even child marriage :(. How have we become so insensitive. And it s mostly Hindus killing Hindus but still no outrage. We have been killing Muslims and Christians so that I guess is just par for the course :(. I wish I could say there is a solution but there seems to be not  Even Child abuse cases have been going up but sad to say even they are being normalised. It s only when a US agency or somebody who feels shocked, then we feel shocked otherwise we have become numb

AMD and Lenovo Lappies About couple of months ago I had made a blog post about lappies. Then Russel reached out to me on Twitter and we engaged. One thing lead to other and soon I saw on some other topic somewhere came across this
The above is a video presentation given by Mark Pearson. Sad to say it was not illuminating enough. Especially the whole boothole thing. I did see three blog posts to get some more insight. The security entry did also share some news. I also reached out to Mr. Pearson to know both the status and also to enquire if there are any new lappies without an OS that I can buy from Lenovo. Sadly, both these e-mails went unanswered. Maybe they went to spam or something else, have no clue. While other organizations did work on it, Debian was kinda side-lined. Hence the annoyance from the Debian Maintainers that the whole thing came from the left field. And this doesn t just effect Debian but all those downstream distributions that rely on Debian  . Now while it s almost a year since then and probably all has been fixed but there haven t been any instructions that I could find that tellls me if there is any new way or just the old way works. In any case, I do think bookworm release probably would have all the fixes needed. IIRC, we just entered soft freeze just couple of weeks back. I have to admit something though, I have never used secure-boot as it has been designed, partially because I always run testing, irrespective of whatever device I use. And AFAIK the whole idea of Secure Boot is to have few updates unlike Testing which is kinda a rolling release thing. While Secure Boot wants same bits, all underlying bits, in Testing it s hard to ensure that as the idea is to test new releases of software and see what works and what breaks till we send it to final release (something like Bookworm ). FWIW, currently bookworm and Testing is one and the same till Bookworm releases, and then Testing would have its own updates from the next hour/day after.

5 February 2023

James Valleroy: A look back at FreedomBox project in 2022

This post is very late, but better late than never! I want to take a look back at the work that was done on FreedomBox during 2022. Several apps were added to FreedomBox in 2022. The email server app (that was developed by a Google Summer of Code student back in 2021) was finally made available to the general audience of FreedomBox users. You will find it under the name Postfix/Dovecot , which are the main services configured by this app. Another app that was added is Janus, which has the description video room . It is called video room instead of video conference because the room itself is persistent. People can join the room or leave, but there isn t a concept of calling or ending the call . Actually, Janus is a lightweight WebRTC server that can be used as a backend for many different types of applications. But as implemented currently, there is just the simple video room app. In the future, more advanced apps such as Jangouts may be packaged in Debian and made available to FreedomBox. RSS-Bridge is an app that generates RSS feeds for websites that don t provide their own (for example, YouTube). It can be used together with any RSS news feed reader application, such as TT-RSS which is also available in FreedomBox. There is now a Privacy page in the System menu, which allows enabling or disabling the Debian popularity-contest tool. If enabled, it reports the Debian packages that are installed on the system. The results can be seen at https://popcon.debian.org, which currently shows over 400 FreedomBoxes are reporting data. A major feature added to FreedomBox in 2022 is the ability to uninstall apps. This feature is still considered experimental (it won t work for every app), but many issues have been fixed already. There is an option to take a backup of the app s data before uninstalling. There is also now an operations queue in case the user starts multiple install or uninstall operations concurrently. XEP-0363 (HTTP File Upload) has been enabled for Ejabberd Chat Server. This allows files to be transferred between XMPP clients that support this feature. There were a number of security improvements to FreedomBox, such as adding fail2ban jails for Dovecot, Matrix Synapse, and WordPress. Firewall rules were added to ensure that authentication and authorization for services proxied through Apache web server cannot be bypassed by programs running locally on the system. Also, we are no longer using libpam-tmpdir to provide temporary folder isolation, because it causes issues for several packages. Instead we use systemd s sandboxing features, which provide even better isolation for services. Some things were removed in 2022. The ez-ipupdate package is no longer used for Dynamic DNS, since it is replaced by a Python implementation of GnuDIP. An option to restrict who can log in to the system was removed, due to various issues that arose from it. Instead there is an option to restrict who can login through SSH. The DNSSEC diagnostic test was removed, because it caused confusion for many users (although use of DNSSEC is still recommended). Finally, some statistics. There were 31 releases in 2022 (including
point releases). There were 68 unique contributors to the git repository; this includes code contributions and translations (but not contributions to the manual pages). In total, there were 980 commits to the git repository.

3 February 2023

Scarlett Gately Moore: KDE Snaps, snapcraft, Debian packages.

Sunset, Witch Wells ArizonaSunset, Witch Wells Arizona
Another busy week! In the snap world, I have been busy trying to solve the problem of core20 snaps needing security updates and focal is no longer supported in KDE Neon. So I have created a ppa at https://launchpad.net/~scarlettmoore/+archive/ubuntu/kf5-5.99-focal-updates/+packages Which of course presents more work, as kf5 5.99.0 requires qt5 5.15.7. Sooo this is a WIP. Snapcraft kde-neon-extension is moving along as I learn the python ways of formatting, and fixing some issues in my tests. In the Debian world, I am sad to report Mycroft-AI has gone bust, however the packaging efforts are not in vain as the project has been forked to https://github.com/orgs/OpenVoiceOS/repositories and should be relatively easy to migrate. I have spent some time verifying the libappimage in buster is NOT vulnerable with CVE-2020-25265 as the code wasn t introduced yet. Skanpage, plasma-bigscreen both have source uploads so the can migrate to testing to hopefully make it into bookworm! As many of you know, I am seeking employment. I am a hard worker, that thrives on learning new things. I am a self starter, knowledge sponge, and eager to be an asset to < insert your company here > ! Meanwhile, as interview processes are much longer than I remember and the industry exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my gofundme. Thank you for your consideration.I still have a ways to go to cover my bills this month, I will continue with my work until I cannot, I hate asking, but please consider a donation. Thank you!GoFundMe

28 January 2023

Scarlett Gately Moore: KDE Snaps, Debian uploads and much more in the works!

Witch Wells, AZ SnowWitch Wells, AZ Snow
It has been a very busy few weeks as we endured snowstorm after snowstorm! I have made some progress on the Mycroft in debian adventure! This will slow down as we enter freeze for bookworm and there is no way we will make it into bookworm as there are some significant issues to solve. On the KDE side of things: In the Snap arena, I have made my first significant contribution to snapcraft upstream! It has been a great learning experience as I convert my Ruby knowledge to Python. Formatting is something I need to get used to! https://github.com/snapcore/snapcraft/pull/4023 Snaps have been on hold due to the kde-neon extension not having core22 support and the above pull request fixes that. Meanwhile, I have been working on getting core20 apps ( 22.08.3 final KDE apps version for this base. ) rebuilt for security updates. As many of you know, I am seeking employment. I am a hard worker, that thrives on learning new things. I am a self starter, knowledge sponge, and eager to be an asset to < insert your company here > ! Meanwhile, as interview processes are much longer than I remember and the industry exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my gofundme. Thank you for your consideration. GoFundMe

Craig Small: Fixing iCalendar feeds

The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and other school events occur. The department's website also has events like public holidays. The issue is that none of them are marked as all-day events; instead they are events that happen at midnight, or one past midnight. The events synchronise fine, though Google's calendar is known for synchronising when it feels like it, not at any particular time you would like it to.
Screenshot of Android Calendar showing a tiny bar at midnight which is the event.
Even though a public holiday is all day, they are sent as appointments for midnight. That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day. On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It's not the calendar's fault; as far as it knows the school events are happening at midnight. You can also see Lunar New Year and Australia Day appear in the all-day part of the calendar and don't scroll away. That's where these events should be.
Why are all the events appearing at midnight?
The reason is that the feed is incorrectly set up and includes the time. The events are sent in an iCalendar format and a typical event looks like this:
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT
The event starting and stopping date and time are the DTSTART and DTEND lines. Both of them have the date of 2023/02/06 or 6th February 2023 and a time of 00:00:00 or midnight. So the calendar is doing the right thing; we need to fix the feed!
The Fix
I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.
<?php
$site = $_GET['s'];
if ($site == 'site1') {
    $REMOTE_URL='https://site1.example.net/ical_feed';
} elseif ($site == 'site2') {
    $REMOTE_URL='https://site2.example.net/ical_feed';
} else {
    http_response_code(400);
    die();
}
$fp = fopen($REMOTE_URL, "r");
if (!$fp) {
    die("fopen");
}
header('Content-Type: text/calendar');
while (($line = fgets($fp, 1024)) !== false) {
    $line = preg_replace(
        '/^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/',
        '${1};VALUE=DATE:${2}',
        $line);
    echo $line;
}
?>
It's pretty quick and nasty but gets the job done. So what is it doing? You need to save the script on your web server somewhere, possibly with an alias command. The whole point of this is to change the event type from a date/time to a date-only event and only print the date part of it for the start and end of it. The resulting iCalendar event looks like this:
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230206
DTEND;VALUE=DATE:20230206
SUMMARY:School Term starts
END:VEVENT
The calendar then shows it properly as an all-day event. I would check the script works before doing the next step. You can use things like curl or wget to download it. If you use a normal browser, it will probably just download the translated file. If you're not seeing the right thing then it's probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and match so you have something to start with.
Calendar settings
The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions and you want to follow the section titled "Use a link to add a public Calendar". The URL here is not the actual site's URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the ?s=site1 part. So if you put your script aliased to /myical.php, the site ID was site1 and your website is www.example.com, the URL would be https://www.example.com/myical.php?s=site1. You should then see the events appear as all-day events on your calendar.
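As a quick sanity check of the rewritten feed (assuming the script is aliased to /myical.php on www.example.com as in the example above), grepping the output for the start and end lines should only show date-only values:
curl -s 'https://www.example.com/myical.php?s=site1' | grep -E '^(DTSTART|DTEND)'
# Expected output lines look like: DTSTART;VALUE=DATE:20230206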

26 January 2023

Shirish Agarwal: Minidebconf Tamilnadu 2023, Tinnitus, Cooking, Books and Series.

First up is Minidebconf Tamilnadu 2023 that would be held on 28-29 January 2023. You can find rest of the details here. I do hope we get to see/hear some good stuff from the Minidebconf. Best of luck to all those who are applying.

Tinnitus During the lock-down of March 2020, I became aware of noise in ears and subsequently major hearing loss. It took me quite a while to know that Tinnitus happens to both those who have hearing loss as well as not. I keep running into threads like this and as shared by someone nobody knows what really causes it. I did try some of the apps (an app. called Resound on Android) that is supposed to tackle Tinnitus but it hasn t helped much. There is this but at least for me, right now pretty speculative. Also this, and again highly speculative.

Cooking After mum passed away, I haven t cooked anything. This used to give me pleasure but now just doesn t feel right. Cooking is something you enjoy when you are doing for somebody else and not just for yourself, at least that s how I feel and with that the curiosity to know more recipes. I do wanna buy a wok at sometime but when, how I just don t know.

Books Have been reading books quite a bit. And due to that had to again revisit and understand ISBN. Perhaps I might have shared it before. It really is something, the history of ISBN. And that co-relates with the book I read, Raising Steam by Terry Pratchett. Raising Steam is the 40th Book in the Discworld Series and it basically romanticizes and reminisces how the idea of an engine was born, and then a steam engine and how actually Railways started. There has been a lot of history and experiences from the early years of Steam Railway that have been taken and transplanted into the book. Also how Railways is and can be successful if only it is invested wisely and maintenance is done. This is only where imagination and reality come apart as maintenance isn t done and then you have issues. While this is and was in the UK, similar situation exists in India and many other places around the world and doesn t matter whether it is private or public. Exceptions are German, French but then that maybe due to Labor movements that happened and were successful unlike in other places. I could go on but then it will become a different article in itself. Suffice to say there is much to learn and you need serious people to look after it. Both in UK and India we lack that. And not just in Railways but Civil Aviation too, but again that is a story in itself.

Web-series Apart from books, have been seeing web-series that Willow is a good one that I enjoyed even though I hadn t seen the earlier movie. While there has been a flurry of movies and web-series both due to end of year and beginning of 2023 and yet have tried to be a bit partial on what I wanna watch or not. If it has crime, fantasy, drama then usually I like it. For e.g. I saw Blackout and pretty much was engrossed in what will happen next. It also does lead you to ask questions about centralization vs de-centralization for both power and other utilities and does make a case for communities to have their utilities apart from the grid as a fallback. How do we do over decades or centuries about it is a different question perhaps altogether. There were two books that kinda stood out for me, the first was Ian Rankin s Naming of the Dead . The book is about a cynical John Rebus, a man after my own heart. I am probably going to buy a few more of his series. In a way it also tells you why UK is the way it is right now. Another book that I liked was Shades of Grey by Jasper Fforde. This is one of the books that Mum would have clearly liked. It is pretty unusual while at the same time very close to 1984 and other such dystopian novels. The main trope of the book is what color you can see and how much you can see. The main character is somebody who can see Red, around the age of 20. One of the interesting aspects of the book is de-facting which closely resembles the Post-Truth world where alternative facts can be made out of air and they don t need any scientific evidence to back them up. In Jasper s world, they don t care about how things work and most of the technology is banned and curiosity is considered harmful and those who show that are murdered one way or the other. Interestingly, the author has just last year decided to start book 2 in the 3 book series that is supposed to be. This also tells why the U.S. is such a precarious situation in a way. A part of it is also due to the media which is in hands of chosen few, the same goes for UK and India, almost an oligopoly.

The Great Escape This is also a book but also about experiences of people, not in 19th-20th century but today that tells you slavery is alive and well and human-trafficking as well. This piece from NPR tells you about an MNC and Indian workers. What I found interesting is that there barely is an mention of the Indian Embassy that is supposed to help Indian people. I do know for a fact that the embassies of India has seen a drastic shortage of both people and materials even since the new Govt. came in place that was nine years ago. Incidentally, BBC shared about the Gujarat riots 2002 and that has been censored in India. They keep quiet about the UK Govt. who did find out that the Chief Minister was directly responsible for the killings and in facts his number 2, Amit Shah had shared that we would do 2002 again in the election cycle barely a month ago. But sadly, no hate speech FIR or any action was taken against Mr. Shah. There have been attempts by people to showcase the documentary. For e.g. JNU tried it and the rowdies from ABVP (arm of BJP) created violence. Even the questions that has been asked by the Wire, GOI will not acknowledge them. Interestingly, all India s edtechs have taken a beating in the last 6-8 months including the biggest BJYU s. Sharing a story from 2021 where things were best and today all of them are at bottom. In fact, the public has been wary as the prices of the courses has kept on increasing and most case studies have been found to be fake. Also the general outlook on jobs and growth has been pessimistic. In fact, most companies have been shedding jobs truckloads, most in the I.T. sector but other sectors as well. Hospitality and other related sectors have taken a huge beating, part of it post-pandemic, part of it Govt s refusal to either spend money or do any positive policies for either infrastructure, education, medical, you name it, they think private sector has all the answers which has been proven to be wrong again and again. I did not want to end on a discordant note but things are the way they are

6 January 2023

Jonathan McDowell: Finally making use of bpftrace

I am old enough to remember when BPF meant the traditional Berkeley Packet Filter, and was confined to filtering network packets. It's grown into much, much more as eBPF, and getting familiar with it so that I can add it to the suite of tips and tricks I can call upon has been on my to-do list for a while. To this end I was lucky enough to attend a live walk through of bpftrace last year. bpftrace is a high level tool that allows the easy creation and execution of eBPF tracers under Linux. Recently I've been working on updating the RetroArch packages in Debian and as I was doing so I realised there was a need to update the quite outdated retroarch-assets package, which contains various icons and images used for the user interface. I wanted to try and re-generate as many of the artefacts as I could, to ensure the proper source was available. However it wasn't always clear which files were actually needed and which were either source or legacy. So I wanted to trace file opens by retroarch and see when it was failing to find files. Traditionally this is something I'd have used strace for, but it seemed like a great opportunity to try out bpftrace. It turns out bpftrace ships with an example, opensnoop.bt, which provides details of hooking the open syscall entry + exit and providing details of all files opened on the system. I only wanted to track opens by the retroarch binary that failed, so I made a couple of modifications:
retro-failed-open-snoop.bt
#!/usr/bin/env bpftrace
/*
 * retro-failed-open-snoop - snoop failed opens by RetroArch
 *
 * Based on:
 * opensnoop	Trace open() syscalls.
 *		For Linux, uses bpftrace and eBPF.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 08-Sep-2018	Brendan Gregg	Created this.
 */
BEGIN
{
	printf("Tracing open syscalls... Hit Ctrl-C to end.\n");
	printf("%-6s %-16s %3s %s\n", "PID", "COMM", "ERR", "PATH");
}

tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
	@filename[tid] = args->filename;
}

tracepoint:syscalls:sys_exit_open,
tracepoint:syscalls:sys_exit_openat
/@filename[tid]/
{
	$ret = args->ret;
	$errno = $ret > 0 ? 0 : - $ret;

	if (($ret <= 0) && (strncmp("retroarch", comm, 9) == 0)) {
		printf("%-6d %-16s %3d %s\n", pid, comm, $errno,
		    str(@filename[tid]));
	}

	delete(@filename[tid]);
}

END
{
	clear(@filename);
}
I had to install bpftrace (apt install bpftrace) and then I ran bpftrace -o retro.log retro-failed-open-snoop.bt as root and fired up retroarch as a normal user.
bpftrace failed open log for retroarch
Attaching 6 probes...
Tracing open syscalls... Hit Ctrl-C to end.
PID    COMM             ERR PATH
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/glibc-hwcaps/x86-64-v2/lib
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/x86_64/libpulse
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/libpulsecommon-16.1.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/x86_64/libpulsecomm
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /etc/gcrypt/hwf.deny
3394   retroarch          2 /lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/libgamemode.so.0
3394   retroarch          2 /usr/lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /lib/libgamemode.so
3394   retroarch          2 /usr/lib/libgamemode.so
3394   retroarch          2 /lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /lib/libdecor-0.so
3394   retroarch          2 /usr/lib/libdecor-0.so
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/dri/tls/iris_dri.so
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/glibc-hwcaps/x86-64-v2/libedit.so.
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/libedit.so.2
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /home/noodles/.Xdefaults-udon
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /home/noodles/.XCompose
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/ozone/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds
This was incredibly useful - the only theme image I was missing is disc.png from XMB Monochrome (which fails to have SVG source). I also discovered the runtime optional loading of GameMode. This is available in Debian so it was a simple matter to add libgamemode0 to the binary package Recommends. So, a very basic example of using bpftrace, but a remarkably useful intro to it from my point of view!
