Search Results: "sven"

4 December 2024

Sven Hoexter: Looking at x509 Certificate Chains

Sometimes you have to look at the content of x509 certificate chains. Usually one finds them PEM encoded and concatenated in a text file. Since the openssl x509 subcommand only decodes the first certificate it finds in a file, I did something like this:
csplit -z -f 'cert' fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for x in cert*; do openssl x509 -in $x -noout -text; done
Apparently that's the "wrong" way, and the more appropriate way is to use the openssl crl2pkcs7 subcommand, even though we do not try to parse a revocation list here:
openssl crl2pkcs7 -nocrl -certfile fullchain.pem | \
  openssl pkcs7 -print_certs -noout
Learned that one in a webinar presented by Victor Dukhovni. If you're new to the topic, it's worth watching.
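Yet another option, which I believe also walks through all objects in the file, is the storeutl subcommand available since OpenSSL 1.1.1:
# storeutl iterates over all objects in the file (OpenSSL >= 1.1.1)
openssl storeutl -noout -text -certs fullchain.pem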

12 November 2024

Sven Hoexter: fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and as we're all humans, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note 1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you have to put additional work into the validation and pipe things through flux envsubst.

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we have a cluster folder with subfolders per cluster, we just run a for loop over all of them:
for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi
    popd
done
Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacks some other referenced objects, but is still happily included into the root kustomization by kustomize create and flux, which of course did not work. Thus we started to catch that as well in our growing for loop:
for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi
    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f | wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done
    popd
done
Note 2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub Actions workflows. Hope that's enough to transport the idea of what to check for.
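As a possible further iteration, which is not something we implemented, one could also pipe the rendered manifests through a schema validator like kubeconform inside that per-cluster loop:
# hypothetical extra step inside the loop, requires the kubeconform binary
kustomize build . | kubeconform -ignore-missing-schemas -summary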

4 November 2024

Sven Hoexter: Google CloudDNS HTTPS Records with ipv6hint

I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:
resource "google_dns_record_set" "testv6"  
    name         = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type         = "HTTPS"
    ttl          = 3600
    rrdatas      = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
 
This results in a permanent diff because the Google CloudDNS API seems to parse the record content, and stores the ipv6hint expanded (removing the :: notation) and in all lowercase as 2001:db8:0:0:0:0:0:1. Thus to fix the permanent diff we have to use it like this:
resource "google_dns_record_set" "testv6"  
    name = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type = "HTTPS"
    ttl = 3600
    rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
 
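To double check what the API actually stored, you can query the record with a dig recent enough (BIND 9.16+) to know the HTTPS type; the hostname here is of course the placeholder from above:
# testv6.some-domain.example is the placeholder name from the example
dig +short testv6.some-domain.example HTTPS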
Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.

28 October 2024

Sven Hoexter: GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel. Updating from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. with a patch like this when kustomize is involved:
- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy
We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. The behaviour is pretty much erratic: sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000, because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.
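If you want to check which version your clusters are currently running before deciding what to do, something like this should do; cluster name and location are placeholders:
# my-cluster and europe-west1 are placeholders
gcloud container clusters describe my-cluster --region europe-west1 \
    --format='value(currentMasterVersion,currentNodeVersion)'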

21 October 2024

Sven Hoexter: Terraform: Making Use of Precondition Checks

I'm in the unlucky position to have to deal with GitHub. Thus I've a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets. Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
# tfvars Input for Module
org_secrets = {
    "SECRET_A" = {
        repos = [
            "infra-foo",
            "infra-baz",
            "deployment-foobar",
        ]
    }
    "SECRET_B" = {
        repos = [
            "job-abc",
            "job-xyz",
        ]
    }
}
# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos" {
    query           = "org:myorg archived:false -is:public -is:private"
    include_repo_id = true
}
/*
The properties of the github_repositories.repos data source queried
above contain only lists. Thus we have to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
    # Assemble the set of repository names we need repo_ids for
    repos = toset(flatten([for v in var.org_secrets : v.repos]))
    # Walk through all names in the query result list and check
    # if they're also in our repo set. If yes add the repo name -> id
    # mapping to our resulting map
    repos_and_ids = {
        for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
    }
}
resource "github_actions_organization_secret" "org_secrets"  
    for_each        = var.org_secrets
    secret_name     = each.key
    visibility      = "selected"
    # the logic how the secret value is sourced is omitted here
    plaintext_value = data.xxx
    selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
    ]
 
Now if we do something bad - delete a repository and forget to remove it from the configuration for the module - we receive some error message that a (numeric) repository ID could not be found. Pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently. Luckily, since version 1.2, terraform supports precondition checks, which we can use in an output block to provide the information which repository is missing. What we need is the set of missing repositories and the validation condition:
locals {
    # Debug facility in combination with an output and precondition check
    # There we can report which repository we still have in our configuration
    # but no longer get as a result from the data provider query
    missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}

# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
    value = local.missing_repos
    precondition {
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
    }
}
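To get a feeling for what setsubtract() returns, terraform console is handy; the values here are made up:
$ terraform console
> setsubtract(["infra-foo", "infra-baz"], ["infra-foo"])
toset([
  "infra-baz",
])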
Now you only have to be aware that GitHub is GitHub: the TF provider has open bugs but is not supported by GitHub, and you will encounter inconsistent results. But it works, even if your terraform apply fails that way.

19 July 2024

Bits from Debian: New Debian Developers and Maintainers (May and June 2024)

The following contributors got their Debian Developer accounts in the last two months:

The following contributors were added as Debian Maintainers in the last two months:

Congratulations!

11 May 2024

Sven Hoexter: xdg and mime types - stuff I would've loved to know a week ago

Learned a few things about xdg and mimetype registration in the last week that could be helpful to have condensed in a single place.

No Need to Ship a Mailcap Mime File

If you already ship a .desktop file (that is what ends up in /usr/share/applications/) which has a MimeType declared, there is no need to also ship a mailcap file (that is what ends up in /usr/lib/mime/packages/). Some triggers will do the conversion work for you. See also Debian Policy 4.9.

Reverse DNS Naming Convention for .desktop Files

Seems to be a closely guarded secret, maybe mainly known inside the Gnome world, but it's in the spec. Also not very widely known inside Debian, if I look at my local system as a not very representative sample.

Your hicolor Theme App Icon can be a Mime Type Icon as Well

In case you didn't know, the hicolor icon theme is the default fallback theme. Many of us already install application icons e.g. in /usr/share/icons/hicolor/48x48/apps/, which is used in conjunction with the Icon field in the .desktop file to locate the application icon. Now the next step, and there it seems quite a few of us miss out, is to create a symlink to also provide a mime type icon, so it's displayed in graphical file managers for the application data files. The schema here is simple: Take the MimeType, e.g. application/x-vym, replace the / with a - and use that as the file name in e.g. /usr/share/icons/hicolor/48x48/mimetypes/. In the vym case that is /usr/share/icons/hicolor/48x48/mimetypes/application-x-vym.png. If you have one, use a scalable .svg file instead of .png. This seems to be an area where Debian lacks a bit of tooling to automatically convert application icons to all the different sizes and install them in all the appropriate places. What is already there is a trigger to run gtk-update-icon-cache when you install new icons into one of the icon theme folders so they're picked up.

No Priority or Order in .desktop Files

Likely something that happens on all my fresh installations: Libreoffice is installed and xdg-open starts to open pdf files with Libreoffice instead of evince. Now I have to figure out again to run xdg-mime default org.gnome.Evince.desktop application/pdf to change that (at least for my user). Background here is that the desktop file spec explicitly mandates "Priority for applications is handled external to the .desktop files.". That's why, in addition to all of that, we got mimeapps.list files. And now, after running the xdg-mime command from above, we have a ~/.config/mimeapps.list defining
[Default Applications]
application/pdf=org.gnome.Evince.desktop
Debian as a whole seems to be not very keen on shipping something like a sensible default mimeapps.list outside of desktop environment specific ones. A quick search gave me just:
$ apt-file search mimeapps.list
cinnamon-desktop-data: /usr/share/applications/x-cinnamon-mimeapps.list
gdm3: /usr/share/gdm/greeter/applications/mimeapps.list
gnome-session-common: /usr/share/applications/gnome-mimeapps.list
plasma-workspace: /usr/share/applications/kde-mimeapps.list
sxmo-utils: /usr/share/applications/mimeapps.list
sxmo-utils: /usr/share/sxmo/xdg/mimeapps.list
While it's a bit annoying to run into that pdf vs. Libreoffice thing every now and then, it's maybe better to not have long controversial threads about the default pdf viewer, like the ones we already had about the default MTA choices. ;) And while we're at it: everyone using Libreoffice should give a virtual hug to rene@ for taming that beast since 2010, and OpenOffice.org before.

4 May 2024

Sven Hoexter: vym - view your mind

Had a need for a mindmapping application and found view your mind in the archive. Works, but the version is a bit rusty. Sadly my Debian packaging skills are a bit rusty as well, especially when it comes to bigger GUI applications. Thus I spent a good chunk of yesterday afternoon ripping out cdbs and packaging the last source release on github, which is right now 2.9.22 (the release branch already has 2.9.27, still sorting that out). Git repository and an amd64 build of the current state. It still deserves some additional love, e.g. creating a -common package for arch indep content. Proposed a few changes upstream. Also pinged pollux@, who uploaded vym up to 2019, asking if he'd be fine with me picking it up. If someone else is interested, I'm also fine to put it up on salsa in the general "Debian" group for shared maintenance. I guess I will use it in the future, but time is still a scarce resource for all of us.

2 April 2024

Sven Hoexter: PKIX: pathLen Constraint on Root Certificates

I recently came across an x509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self signed) Root Certificate. Since that is not commonly seen, I looked around a bit to get a better understanding of how the pathLen basic constraint should be used. Primary source is RFC 5280 section 4.2.1.9:
The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path.
Since the Root is always self-issued, it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate. Another relevant resource are the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA. Last but not least there is the awesome x509 Limbo project, which has a section for validating pathLen constraints. Since the RFC 5280 based assumption is that self signed certs do not count, they do not check a case with such a constraint on the Root itself, and what the implementations do about it. So the assumption right now is that they properly ignore it. Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.
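To check whether a certificate at hand carries the extension, the openssl text output shows it, including a pathlen if set, as part of the basic constraints:
# root.pem is a stand-in for the certificate to inspect
openssl x509 -in root.pem -noout -text | grep -A1 'X509v3 Basic Constraints'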

8 February 2024

Sven Hoexter: Use GitHub CLI to List all Repository Secrets

Write it down before I forget about it again:
for x in $(gh api graphql --paginate -f query='query($endCursor:String) { organization(login:"myorg") {
    repositories(first: 100, after: $endCursor, isArchived:false) {
        pageInfo {
            hasNextPage
            endCursor
        }
        nodes {
            name
        }
    }
}
}' --jq '.data.organization.repositories.nodes[].name'); do
    secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | tr '\n' ',')
    if ! [ -z "${secrets}" ]; then
        echo "${x},${secrets}"
    fi
done
Requests a list of all not archived repositories in a GitHub org and queries the repository secrets for each of them. If we find some, we output the repo name and the secrets in a comma separated list. Not real CSV, but good enough for further processing. I have to admit it's kinda beautiful what you can do with the gh cli by now. Sadly it seems the secrets are not yet available via GraphQL (or I missed it in the docs), so I just use the gh cli to do the REST calls.
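If you'd rather skip the gh secret wrapper, the same list should be retrievable per repository via the plain REST endpoint as well:
# myorg/some-repo is a placeholder
gh api --paginate /repos/myorg/some-repo/actions/secrets --jq '.secrets[].name'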

25 October 2023

Sven Hoexter: Curing vpnc-scripts Symptoms

I stick to some very archaic workflows, e.g. to connect to some corp VPN I just run sudo vpnc-connect and later on sudo vpnc-disconnect. In the past that also managed to restore my resolv.conf; currently it doesn't. According to a colleague that's also the case on Ubuntu.

Taking a step back, the sane way would be to use the NetworkManager vpnc plugin, but that does not work with this specific case because we use uncool VPN tech which requires the "Enable weak authentication" setting for vpnc. There is a feature request open for that one at https://gitlab.gnome.org/GNOME/NetworkManager-vpnc/-/issues/11

Taking another step back, I thought it shouldn't be that hard to add some checkbox, a boolean, and render out another config flag or line in a config file. That mix of XML and C is not as intuitive as I thought, so let's quickly look elsewhere.

What happens is that the backup files in /var/run/vpnc/ are created by the vpnc-scripts script called vpnc-script, but not moved back, because it adds some pid as a suffix and the pid is not the final pid of the vpnc process. Basically it can not find the backup when it tries to restore it. So I decided to replace the pid guessing code with a suffix made up of the gateway IP and the tun interface name. No idea if that is stable in all circumstances (someone with a vpn name DNS RR?) or with several connections to different gateways. But good enough for myself, so here is my patch:
vpnc-scripts [master]$ cat debian/patches/replace-pid-detection 
Index: vpnc-scripts/vpnc-script
===================================================================
--- vpnc-scripts.orig/vpnc-script
+++ vpnc-scripts/vpnc-script
@@ -91,21 +91,15 @@ OS="`uname -s`"
 HOOKS_DIR=/etc/vpnc
-# Use the PID of the controlling process (vpnc or OpenConnect) to
-# uniquely identify this VPN connection. Normally, the parent process
-# is a shell, and the grandparent's PID is the relevant one.
-# OpenConnect v9.0+ provides VPNPID, so we don't need to determine it.
-if [ -z "$VPNPID" ]; then
-    VPNPID=$PPID
-    PCMD=`ps -c -o cmd= -p $PPID`
-    case "$PCMD" in
-        *sh) VPNPID=`ps -o ppid= -p $PPID` ;;
-    esac
+# This whole script is called twice via vpnc-connect. On the first run
+# the variables are empty. Catch that and move on when they're there.
+if [ -n "$VPNGATEWAY" ]; then
+    BACKUPID="${VPNGATEWAY}_${TUNDEV}"
+    DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${BACKUPID}
+    DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${BACKUPID}
+    RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${BACKUPID}
 fi
-DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${VPNPID}
-DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${VPNPID}
-RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${VPNPID}
 SCRIPTNAME=`basename $0`
 # some systems, eg. Darwin & FreeBSD, prune /var/run on boot
Or rolled into a debian package at https://sven.stormbind.net/debian/vpnc-scripts/. The colleague decided to stick to NetworkManager, moved the vpnc binary aside, and added a wrapper which invokes vpnc with --enable-weak-authentication. The beauty is that all of this will break on updates, so at some point someone has to understand GTK4 to fix the NetworkManager plugin for good. :)

14 June 2023

Sven Hoexter: htop on stage in the theatre

Always amusing to see some more or less famous open source tools on stage or in movies. Lately we watched THE ME (German only), which mixes live acting with pre-recorded video material. In one of the early video sequences a fictional console interface is displayed, claiming to run on a Macbook, and htop is used to look for a suspicious process.

15 May 2023

Sven Hoexter: GCP: Private Service Connect Forwarding Rules can not be Updated

PSA for those foolish enough to use Google Cloud and try to use Private Service Connect: if you want to change the serviceAttachment your Private Service Connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:
Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target':
'<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/
k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid
Worked around that with the help of terraform_data and lifecycle:
resource "terraform_data" "replacement"  
    input = var.gcp_psc_data["target"]
 
resource "google_compute_forwarding_rule" "this"  
    count   = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
    name    = "$ var.gcp_psc_name -psc"
    region  = var.gcp_region
    project = var.gcp_project
    target                = var.gcp_psc_data["target"]
    load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
    network               = var.gcp_network
    ip_address            = google_compute_address.this.id
    lifecycle  
        replace_triggered_by = [
            terraform_data.replacement
        ]
     
 
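For a one-off replacement, instead of the declarative trigger above, the -replace flag (available since Terraform 0.15.2) is an alternative:
# the [0] index matches the count-based resource definition above
terraform apply -replace='google_compute_forwarding_rule.this[0]'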
See also the terraform_data documentation for replace_triggered_by.

28 April 2023

Sven Hoexter: What's wrong in IT: commit messages

In my day job someone today took the time in the team daily to explain his research into why some of our configuration is wrong. He spent quite some time on his own looking at the history in git and how everything was set up initially, and how it ended up in the current - wrong - way. That triggered me to validate it quickly, another 5min of work. So we agreed to change it. A one line change, nothing spectacular, but lifetime was invested to figure out why it should have a different value. When the pull request got opened a few minutes later, there was nothing of that story in the commit message. Zero, nada, nothing. :(

I'm really puzzled why someone invests lifetime to dig into company internal history to try to get something right, does a lengthy explanation to the whole team, uses the time of others, even mentions that there was no explanation why it's no longer the default value it should be, and then repeats the same mistake by not writing anything down in the commit message. For the current company I'm inclined to propose a commit message validator. For a potential future company I might join, I guess I'll ask for real world git logs from repositories I would contribute to. Seems that this is another valuable source of information to qualify the company culture, right next to the existence of whiteboards in the office. I'm really happy that at least a majority of the people contributing to Debian write somewhat decent commit messages and changelogs. Let that be a reminder to myself to improve in that area the next time I have to change something.

3 March 2023

Sven Hoexter: exfat-fuse 1.4 in experimental

I know a few people hold on to the exFAT fuse implementation due to the support for timezone offsets, so here is a small update for you. Andrew released 1.4.0, which includes the timezone offset support that was so far only part of the git master branch. It also fixes a, from my point of view, very minor security issue, CVE-2022-29973. In addition it's the first build with fuse3 support. If you still use this driver, pick it up in experimental (we're in the bookworm freeze right now), and give it a try. I'm personally not using it anymore beyond a very basic "does it mount" test.
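Assuming an apt source for the experimental suite is configured, picking it up boils down to:
# assumes an apt source for experimental is configured
sudo apt install -t experimental exfat-fuse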

19 December 2022

Simon Josefsson: Second impressions of Guix 1.4

While my first impression of Guix 1.4rc2 on NV41PZ was only days ago, the final Guix 1.4 release has happened. I thought I should give it a second try, although being at my summer house with no wired ethernet I realized this may be overly optimistic. However I am happy to say that a guided graphical installation on my new laptop went smoothly without any problem. Practicing OS installations has a tendency to make problems disappear. My WiFi issues last time were probably due to a user interface mistake on my part: you have to press a button to search for wireless networks before seeing them. I'm not sure why I missed this the first time, but maybe the reason was that I didn't really expect WiFi to work on this laptop with one Intel-based WiFi card without firmware and a USB-based WiFi dongle. I haven't gone back to the rc2 image, but I strongly believe it wasn't a problem with that image but my user mistake. Perhaps some more visual clues could be given that Guix found a usable WiFi interface, as this isn't completely obvious now.

My main pet problem with the installation is the language menu. It contains a bazillion languages, and I want to find Swedish in it. However the list is half-sorted, so it looks like it is alphabetized, but paging through the list I didn't find "svenska", and noticed that the sorting restarts after a while. Eventually I found my language of choice, but a better search interface would help. Typing "s" to find it jumps around in the list. This may be a user interface misunderstanding on my part: I may be missing whatever great logic I'm sure there is to find my language in that menu.

I did a simple installation, enabling GNOME, Cups and OpenSSH. Given the experience with sharing /home with my Trisquel installation last time, I chose to not mount it this time, fixing this later on if I want to share files between OSes. Watching the installation proceed with downloading packages over this slow WiFi was meditative, and I couldn't help but wonder what logic there was to the many steps where it says it is going to download X MB of software, downloads a set of packages, and then starts another iteration saying it is going to download Y MB and then downloads another set of packages. Maybe there is a package dependency tree being worked out while I watch.

After logging into GNOME I had to provide the WiFi password another time; it seems it wasn't saved during installation, or I was too impatient to wait for WiFi to come up automatically. Using the GNOME WiFi selection menu worked fine. The webcam issue is still present, the image is distorted, and it doesn't happen in Trisquel. Other than that, everything appears to work, but it has to be put through more testing.

Upgrading Guix after installation is still suffering from the same issue I noticed with the rc2 images; this time I managed to save the error message in case someone wants to provide an official fix or workaround. The initial guix pull command also takes forever, even on this speedy laptop, but after the initial run it is faster. Here are the error messages (pardon the Swedish):
jas@kaka ~$ sudo -i
...
root@kaka ~# guix pull
...
root@kaka ~# guix system reconfigure /etc/config.scm 
guix system: fel: aborting reconfiguration because commit 8e2f32cee982d42a79e53fc1e9aa7b8ff0514714 of channel 'guix' is not a descendant of 989a3916dc8967bcb7275f10452f89bc6c3389cc
tips: Use '--allow-downgrades' to force this downgrade.
root@kaka ~# 
I'll avoid using --allow-downgrades this time to see if there is a better solution available. Update: Problem resolved: my muscle memory typed sudo -i before writing the commands above. If I stick to the suggested guix pull (as user) followed by sudo guix system reconfigure /etc/config.scm everything works. I'll leave this in case someone else runs into this problem. I'm using the Evolution mail/calendar/contacts application, and it was not installed via GNOME, so I had to manually install it using guix package -i evolution. Following the guided setup worked remarkably well (it auto-detects all my email settings after giving it my email address), although at the end I get a surprising error message:
[Image: puzzling error message from Evolution]
If I didn't know a bit about how Evolution works internally, I would have been stuck here; the solution is to install the evolution data server package. This should probably be a dependency of the main package? Fix it with guix package -i evolution-data-server. It works directly, no need to even restart Evolution or go through the configuration dialog again. After this, I'm happily using email against my Dovecot server and contacts/calendars against my Nextcloud server via GNOME's builtin Nextcloud connector, which was straightforward to set up.

16 October 2022

Sven Hoexter: CentOS 9, stunnel, an openssl memory leak and a VirtualBox crash

tl;dr: OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(), seems to be fixed in 3.0.2. The issue manifests at least in stunnel and keepalived on CentOS 9. In addition I learned the hard way that running a not so recent VirtualBox version on Debian bullseye led to dh parameter generation crashing in libcrypto in bn_sqr8x_internal(). A recent rabbit hole I went down. The actual bug in openssl was nailed down and documented by Quentin Armitage on GitHub in keepalived. My bugreport with all the back and forth is in the RedHat Bugzilla as #2128412.

Act I - Hello stunnel, this is the OOMkiller Calling

We started to use stunnel on Google Cloud compute engine instances running CentOS 9. The loadbalancer in front of those instances used a TCP health check to validate the backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap segment growing quite quickly.

Act II - Reproducing the Issue

While I'm not the biggest fan of VirtualBox and Vagrant, I have to admit it's quite nice to just fire up a VM image, and give other people a chance to recreate that setup as well. Since VirtualBox is no longer released with Debian/stable, I just recompiled what was available in unstable at the time of the bullseye release, and used that. That enabled me to just start a CentOS 9 VM, set up stunnel with a minimal config, grab netcat and a for loop, and watch the memory grow. E.g.

while true; do nc -z localhost 2600; sleep 1; done

To my surprise, in addition to the memory leak, I also observed some crashes, but did not yet care too much about those.

Act III - Wrong Suspect, a Workaround and Bugreporting

Of course the first idea was that something must be wrong in stunnel itself. But I could not find any recent bugreports. My assumption is that there are still a few people around using CentOS and stunnel, so someone else should probably have seen it before. Just to be sure I recompiled the latest stunnel package from Fedora. Didn't change anything. Next I recompiled it without almost all the patches Fedora/RedHat carries. Nope, no progress. Next idea: Maybe this is related to the fact that we do not initiate a TLS context after connecting? So we changed the test case from nc to openssl s_client, and the loadbalancer healthcheck from TCP to a TLS based one. Tada, a workaround, no more memory leaking. In addition I gave Fedora a try (they have Vagrant VirtualBox images in the "Cloud" Spin, e.g. here for Fedora 36) and my local Debian installation a try. No leaks experienced on either. Next I reported #2128412.

Act IV - Crash in libcrypto and a VirtualBox Bug

When I moved with the test case from the Google Cloud compute instance to my local VM I encountered some crashes. That morphed into a real problem when I started to run stunnel with gdb and valgrind. All crashes happened in libcrypto bn_sqr8x_internal() when generating new dh parameter (stunnel does that for you if you do not use static dh parameter). I quickly worked around that by generating static dh parameter for stunnel. After some back and forth I suspected VirtualBox as the culprit. Recompiling the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works without any changes. Upgrading actually fixed that issue.

Epilog

I highly appreciate that RedHat, with all the bashing around the future of CentOS, still works on community contributed bugreports. My kudos go to Clemens Lang. :) Now that the root cause is clear, I guess RedHat will push out a fix for the openssl 3.0.1 based release they have in RHEL/CentOS 9. Until that is available, at least stunnel and keepalived are known to be affected. If you run stunnel on something public it's not that pretty, because already a low rate of TCP connections will result in a DoS condition.

30 September 2022

Utkarsh Gupta: FOSS Activities in September 2022

Here's my (thirty-sixth) monthly but brief update about the activities I've done in the F/L/OSS world.

Debian
This was my 45th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ There's a bunch of things I do, both technical and non-technical. Here are the things I did this month:

Debian Uploads
  • rails (2:6.1.6.1+dfsg-2) - Add patch to allow Symbols in YAML columns, fixes #1018934.
  • rails (2:6.1.6.1+dfsg-3) - Add patch to remove active_record.yaml initializers.
  • rails (2:6.1.6.1+dfsg-4) - Add patch to allow Date, Time, ActiveSupport::HashWithIndifferentAccess in YAML columns.
  • ruby-arbre (1.4.0-2) - Add patch to use selector to detect authenticity token input.
  • ruby-net-http-digest-auth (1.4.1-1) - New upstream version, v1.4.1 to fix the FTBFS w/ rails.
  • rails (2:6.1.7+dfsg-1) - New upstream version, v6.1.7+dfsg.
  • redmine (5.0.2-1) - New upstream version, v5.0.2 + fixes for #1017525, #1019607, #1019238, and #1014813.
  • redmine (5.0.2-2) - Add patch to relax pg s version for autopkgtest.
  • ruby-json-jwt (1.14.0-2) - No-change rebuild for unstable to fix #1011682.
  • libexporter-tiny-perl (1.004002-1) - New upstream version, v1.004002.

Other $things:
  • Sponsored php-nikic-fast-route/1.3.0-4~bpo11+1 for William.
  • Being an AM for Arun Kumar, process #1024.
  • Sponsoring stuff for non-DDs.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
This was my 20th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/ I mostly worked on different things, I guess. I was too lazy to maintain a list of things I worked on, so there's no concrete list atm. Maybe I'll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my thirty-sixth month as a Debian LTS and twenty-seventh month as a Debian ELTS paid contributor.
I worked for 38.00 hours for LTS and 27.00 hours for ELTS.

LTS CVE Fixes and Announcements:
  • Rolled out announcement for src:flac.
  • Rolled out announcement for src:ruby-rack.
  • Issued DLA 3128-1, fixing CVE-2020-7677, for node-thenify.
    For Debian 10 buster, these problems have been fixed in version 3.3.0-1+deb10u1.
  • Issued DLA 3129-1, fixing CVE-2019-17545 and CVE-2021-45943, for gdal.
    For Debian 10 buster, these problems have been fixed in version 2.4.0+dfsg-1+deb10u1.
  • Looked at src:mbedtls which has about 18 CVEs opened in buster (including no-dsa).
    Also, spoke to the maintainer - they said they'd be uncomfortable doing or reviewing the backport (although they initially said they'd be happy to help).
  • Fixed src:rails regression via 2:6.1.6.1+dfsg-2, 2:6.1.6.1+dfsg-3, and 2:6.1.6.1+dfsg-4 for sid.
    CVE-2022-32224 broke the entire world. :)
  • Helped Abhijith figure out the regression fix for CVE-2022-32224.
    Also got that verified by the people who reported regression, Raphael, Sven, and Jude. The whole thread is on debian-lts@.

ELTS CVE Fixes and Announcements:
  • Rolled out announcement for src:ruby-tzinfo.
  • Rolled out announcement for src:grub2.
  • Issued ELA 682-1, fixing CVE-2022-31676, for open-vm-tools.
    For Debian 9 stretch, these problems have been fixed in version 2:10.1.5-5055683-4+deb9u3.
  • Issued ELA 691-1, fixing CVE-2020-21365, for wkhtmltopdf.
    For Debian 8 jessie, these problems have been fixed in version 0.12.1-2+deb8u1.
    For Debian 9 stretch, these problems have been fixed in version 0.12.3.2-3+deb9u1.
  • Issued ELA 692-1, fixing CVE-2022-37452, for exim4.
    For Debian 8 jessie, these problems have been fixed in version 4.84.2-2+deb8u9.
    For Debian 9 stretch, these problems have been fixed in version 4.89-2+deb9u9.
  • Started to look at src:tiff again. Has a lot of open issues. Haven't claimed the package officially yet, though. :)

Other (E)LTS Work:
  • Triaged rails, node-thenify, exim4, wkhtmltopdf, gdal, and mbedtls.
  • Marked CVE-2019-25050/gdal as not-affected for buster.
  • Marked CVE-2022-37451/exim4 as not-affected for stretch and jessie; following buster and bullseye.
  • Helped and assisted new contributors joining Freexian (LTS/ELTS).
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts) and Matrix.
  • Participated and helped fellow members with their queries via private mail and chat.
  • General and other discussions on LTS private and public mailing list.
  • Attended the monthly public meeting held on #debian-lts on September 29th.

Until next time.
:wq for today.

12 April 2022

Sven Hoexter: Emulating Raspi2 like hardware with RaspiOS in 2022

Update of my notes from 2020.
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official Raspbian image without X
wget https://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2022-04-07/2022-04-04-raspios-bullseye-armhf-lite.img.xz
unxz 2022-04-04-raspios-bullseye-armhf-lite.img.xz
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-armhf-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 4G
# make sure we get a user account setup
echo "me:$(echo 'test123' openssl passwd -6 -stdin)" > userconf
sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt
sudo mv userconf /mnt
sudo guestunmount /mnt
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,15200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user me with password test123
sudo -i
# enable ssh
systemctl enable ssh
systemctl start ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
Now I can login via ssh and start to play:
ssh me@localhost -p 5555

3 February 2022

Sven Hoexter: Suntime Calculation with Lua and the Great Gift of Open Source

tl;dr: I ported a part of the python-suntime library to Lua to use it on OpenWRT and RutOS powered devices: suntime.lua. There are those unremarkable things which let you pause for a moment, and realize what a great gift of our time open source software and open knowledge is. At some point in time someone figured out how to calculate the sunrise and sunset time on the current date for your location. Someone else wrote that up, and probably again a different person published it on the internet. The Internet Archive preserved a copy of it so I can still link to it. Someone took this algorithm and published a code sample on StackOverflow, which was later on used by the SatAgro guys to create the python-suntime library. Now I could come along, copy the core function of this library, and convert it within a few hours - mostly spent learning a bit of Lua - to a working script fulfilling my needs.
