Search Results: "hoexter"

12 April 2022

Sven Hoexter: Emulating Raspi2 like hardware with RaspiOS in 2022

Update of my notes from 2020.
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official RaspiOS image without X
wget https://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2022-04-07/2022-04-04-raspios-bullseye-armhf-lite.img.xz
unxz 2022-04-04-raspios-bullseye-armhf-lite.img.xz
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-armhf-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 4G
# make sure we get a user account setup
echo "me:$(echo 'test123' openssl passwd -6 -stdin)" > userconf
sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt
sudo mv userconf /mnt
sudo guestunmount /mnt
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,15200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user me with password test123
sudo -i
# enable ssh
systemctl enable ssh
systemctl start ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
Now I can login via ssh and start to play:
ssh me@localhost -p 5555
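Since the only network path into the guest is that forwarded port, file transfers go through it as well, e.g. with scp (file name made up for illustration):
scp -P 5555 somefile.txt me@localhost: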

3 February 2022

Sven Hoexter: Suntime Calculation with Lua and the Great Gift of Open Source

tl;dr I ported a part of the python-suntime library to Lua to use it on OpenWRT and RutOS powered devices. suntime.lua There are those unremarkable things which let you pause for a moment and realize what a great gift of our time open source software and open knowledge is. At some point in time someone figured out how to calculate the sunrise and sunset time on the current date for your location. Someone else wrote that up, and probably again a different person published it on the internet. The Internet Archive preserved a copy of it so I can still link to it. Someone took this algorithm and published a code sample on StackOverflow, which was later on used by the SatAgro guys to create the python-suntime library. Now I could come along, copy the core function of this library, and convert it within a few hours - mostly spent learning a bit of Lua - into a working script fulfilling my needs.

20 January 2022

Sven Hoexter: Running OpenWRT x86 in qemu

Sometimes it's nice for testing purposes to have the OpenWRT userland available locally. Since there is an x86 build available, one can just run it within qemu.
wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
gunzip openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
qemu-img convert -f raw -O qcow2 openwrt-21.02.1-x86-64-generic-squashfs-combined.img openwrt-21.02.1.qcow2
qemu-img resize openwrt-21.02.1.qcow2 200M
qemu-system-x86_64 -M q35 \
  -drive file=openwrt-21.02.1.qcow2,id=d0,if=none,bus=0,unit=0 \
  -device ide-hd,drive=d0,bus=ide.0 -nic user,hostfwd=tcp::5556-:22
# you have to change the network configuration to retrieve an IP via
# dhcp for the lan bridge br-lan (a scripted uci variant is sketched below)
vi /etc/config/network
  - change option proto 'static' to 'dhcp'
  - remove the IP address and netmask settings
/etc/init.d/network restart
# now you should have an IP out of 10.0.2.0/24
ssh root@localhost -p 5556
# remember ICMP does not work but otherwise you should have
# IP networking available
opkg update
opkg install curl
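Instead of editing /etc/config/network by hand as above, the same change can be scripted with the uci CLI. A rough, untested sketch, assuming the default 21.02 config where the bridge sits in the 'lan' interface section:
uci set network.lan.proto='dhcp'
uci delete network.lan.ipaddr
uci delete network.lan.netmask
uci commit network
/etc/init.d/network restart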

15 October 2021

Sven Hoexter: ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

Wasted quite some hours until I found a working Modeline in this stack exchange post so the ThinkPad works with an HDMI-attached Samsung QHD display. The internal display of the ThinkPad is an FHD display detected as eDP-1, the external one is DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD - 2560x1440 - did not work at all, the display simply stayed dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:
xrandr --newmode 2560x1440_54.97  221.00  2560 2608 2640 2720  1440 1443 1447 1478  +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary
Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log. Of the auto-detected Modelines, FHD - 1920x1080 - did work. In case someone struggles with a similar setup, this might be a starting point. Fun part: if I attach my several years old Dell E7470 everything is just fine out of the box. But that one only has an Intel GPU and not the unholy combination I have here:
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)

14 September 2021

Sven Hoexter: PV - Monitoring Envertech Microinverter via envertecportal.com

Some time ago I looked briefly at an Envertech data logger for small scale photovoltaic setups. Turned out that PV inverters are kinda unreliable, and you really have to monitor them to notice downtimes and defects. Since my pal shot for a quick win I've cobbled together another Python script to query the portal at www.envertecportal.com, and report back if the generated power is down to 0. The script is currently run on a vserver via cron and reports back via the system MTA. So yeah, you need to have something like that already at hand.
Script and Configuration
You have to provide your PV system's location with latitude and longitude so the script can calculate (via python3-suntime) the sunrise and sunset times. At the location we deal with we expect to generate some power at least from sunrise + 1h to sunset - 1h. That is tunable via the configuration option toleranceSeconds. Retrieving the stationId is a bit ugly because it's not provided via any API, instead it's rendered serverside into the website. So I just logged in on the portal and picked it up by looking into the page source.
www.envertecportal.com API
I guess this is some classic in the IoT land, but neither the documentation provided on the portal frontpage as docx, nor the API docs at port 8090, are complete and correct. The few bits I gathered via the Firefox Web Developer Tools are (a curl sketch follows after the list):
  1. Login https://www.envertecportal.com/apiaccount/login - POST, sent userName and pwd containing your login name and password. The response JSON is very explicit if your login was not successful and why.
  2. Store the session cookie called ASP.NET_SessionId for use on all subsequent requests.
  3. Retrieve station info https://www.envertecportal.com/ApiStations/getStationInfo - POST, sent ASP.NET_SessionId and stationId with the ID of the station. Returns a JSON with an object named Data. The field Power contains the currently generated power as a float with two digits (e.g. 0.01).
  4. Logout https://www.envertecportal.com/apiAccount/Logout - POST, sent ASP.NET_SessionId.
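Based purely on the observations above, a login / query / logout round trip can be sketched with curl. Everything beyond the endpoints and field names listed above - credentials, the stationId value, the jq filter - is made up for illustration:
# login and store the ASP.NET_SessionId cookie
curl -s -c cookies.txt -d 'userName=me@example.com' -d 'pwd=secret123' \
  https://www.envertecportal.com/apiaccount/login
# query the station info, the session cookie is sent along automatically
curl -s -b cookies.txt -d 'stationId=0123456789abcdef' \
  https://www.envertecportal.com/ApiStations/getStationInfo | jq '.Data.Power'
# logout again
curl -s -b cookies.txt -X POST https://www.envertecportal.com/apiAccount/Logout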
Some Surprises
There were a few surprises, maybe they help others dealing with an Envertech setup.
  1. The portal truncates passwords at 16 chars.
  2. The "Forget Password?" function mails you back the password in plain text (that's how I learned about 1.).
  3. The login API endpoint reporting the exact reason why the login failed is somewhat out of fashion. Though this one is probably not a credential stuffing target because there is no money to make, so don't care.
  4. The data logger reports the data to www.envertecportal.com at port 10013.
  5. There is some checksumming done on the reported data, but the system is not replay safe. So you can send it any valid data string at a later time and get wrong data recorded.
  6. People at forum.fhem.de decoded some values but could not figure out the checksumming so far.

2 June 2021

Sven Hoexter: pulseaudio/alsa and dynamic mic sensitivity in my browser

It's a gross hack but works for now. To prevent overly sensitive mic settings auto-tuned by the browser in web conferences, I currently edit, as root, /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf. In [Element Capture] change the setting volume from merge to 80. The config block as a whole looks like this:
[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right
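The change only takes effect once pulseaudio re-reads the path definitions. With the usual per-user daemon and autospawn enabled, killing it should be enough (an assumption, adjust to your setup):
pulseaudio -k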
Solution found at https://askubuntu.com/a/761103.

21 April 2021

Sven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you want to run doveadm as an unprivileged user. I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself that manifested in the nginx error log, which I use in combination with php-fpm, as:
2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied
You can easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and create a dummy one which disables ssl and has a !include_try on the real one. Maybe best explained by showing the modification:
cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf
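Afterwards running doveadm as an unprivileged user should work again; a quick check, with www-data just as an example for the php-fpm user:
sudo -u www-data doveadm pw -p test123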
Discussed upstream here.

23 December 2020

Sven Hoexter: Jenkins dynamically parameterized pipelines for terraform execution

Jenkins in the Ops space is in general already painful. Lately the deprecation of the multiple-scms plugin caused some headache, because we relied heavily on it to generate pipelines in a Seedjob based on structure inside secondary repositories. We kind of started from scratch now and ship parameterized pipelines defined in Jenkinsfiles in those secondary repositories. Basically that is the way it should be: you store the pipeline definition along with the code you'd like to execute. In our case that is mostly terraform and ansible.
Problem
The directory structure is roughly "stage" -> "project" -> "service". We'd like to have one pipeline job per project, which dynamically reads all service folder names and offers those as available parameters. A service folder is the smallest entity we manage with terraform in a separate state file. Now Jenkins pipelines are by intention limited, but you can add some groovy at will if you whitelist the usage in Jenkins. You have to click through some security though to make it work.
Jenkinsfile
This is basically a commented version of the Jenkinsfile we now copy around as a template, to be manually adjusted per project.
// Syntax: https://jenkins.io/doc/book/pipeline/syntax/
// project name as we use it in the folder structure and job name
def TfProject = "myproject-I-dev"
// directory relative to the repo checkout inside the jenkins workspace
def jobDirectory = "terraform/dev/${TfProject}"
// informational string to describe the stage or project
def stageEnvDescription = "DEV"

/* Attention please if you rebuild the Jenkins instance consider the following:
- You've to run this job at least *thrice*. It first has to checkout the
  repository, then you've to add permissions for the groovy part, and on
  the third run you can gather the list of available terraform folders.
- As a safeguard the first folder name is always the invalid string
  "choose-one". That prevents accidental execution of a random project.
- If you add a new terraform folder you've to run the "choose-one" dummy rollout so
  the dynamic parameters pick up the new folder. */

/* Here we hardcode the path to the correct job workspace on the jenkins host, and
   discover the service folder list. We have to filter it slightly to avoid temporary folders created by Jenkins (like @tmp folders). */
List tffolder = new File("/var/lib/jenkins/jobs/terraform ${TfProject}/workspace/${jobDirectory}").listFiles().findAll { it.isDirectory() && it.name ==~ /(?i)[a-z0-9_-]+/ }.sort()

/* ensure the "choose-one" dummy entry is always the first in the list, otherwise
   initial executions might execute something. By default the first parameter is
   used if none is selected */
tffolder.add(0, "choose-one")

pipeline {
    agent any

    /* Show a choice parameter with the service directory list we stored
       above in the variable tffolder */
    parameters {
        choice(name: "TFFOLDER", choices: tffolder)
    }

    // Configure logrotation and coloring.
    options {
        buildDiscarder(logRotator(daysToKeepStr: "30", numToKeepStr: "100"))
        ansiColor("xterm")
    }

    // Set some variables for terraform to pick up the right service account.
    environment {
        GOOGLE_CLOUD_KEYFILE_JSON = '/var/lib/jenkins/cicd.json'
        GOOGLE_APPLICATION_CREDENTIALS = '/var/lib/jenkins/cicd.json'
    }

    stages {
        stage('TF Plan') {
            /* Make sure on every stage that we only execute if the
               choice parameter is not the dummy one. Ensures we
               can run the pipeline smoothly for re-reading the
               service directories. */
            when { expression { params.TFFOLDER != "choose-one" } }
            steps {
                /* Initialize terraform and generate a plan in the selected
                   service folder. */
                dir("${params.TFFOLDER}") {
                    sh 'terraform init -no-color -upgrade=true'
                    sh 'terraform plan -no-color -out myplan'
                }

                // Read in the repo name we act on for informational output.
                script {
                    remoteRepo = sh(returnStdout: true, script: 'git remote get-url origin').trim()
                }

                echo "INFO: job *${JOB_NAME}* in *${params.TFFOLDER}* on branch *${GIT_BRANCH}* of repo *${remoteRepo}*"
            }
        }

        stage('TF Apply') {
            /* Run terraform apply only after manual acknowledgement, we have to
               make sure that the when {} condition is actually evaluated before
               the input. Default is input before when. */
            when {
                beforeInput true
                expression { params.TFFOLDER != "choose-one" }
            }

            input {
                message "Cowboy would you really like to run **${JOB_NAME}** in **${params.TFFOLDER}**"
                ok "Apply ${JOB_NAME} to ${stageEnvDescription}"
            }

            steps {
                dir("${params.TFFOLDER}") {
                    sh 'terraform apply -no-color -input=false myplan'
                }
            }
        }
    }

    post {
        failure {
            // You can also alert to noisy chat platforms on failures if you like.
            echo "job failed"
        }
    }
}
job-dsl side of the story
Having all those when conditions in the pipeline stages above allows us to create a dependency between successful Seedjob executions and just let that trigger the execution of the pipeline jobs. This is important because the Seedjob execution itself will reset all pipeline jobs, so your dynamic parameters are gone. By making sure we can re-execute the job, and doing that automatically, we still have up to date parameterized pipelines whenever the Seedjob ran successfully. The job-dsl script looks like this:
import javaposse.jobdsl.dsl.DslScriptLoader;
import javaposse.jobdsl.plugin.JenkinsJobManagement;
import javaposse.jobdsl.plugin.ExecuteDslScripts;

def params = [
    // Defaults are repo: mycorp/admin, branch: master, jenkinsFilename: Jenkinsfile
    pipelineJobs: [
        [name: 'terraform myproject-I-dev', jenkinsFilename: 'terraform/dev/myproject-I-dev/Jenkinsfile', upstream: 'Seedjob'],
        [name: 'terraform myproject-I-prod', jenkinsFilename: 'terraform/prod/myproject-I-prod/Jenkinsfile', upstream: 'Seedjob'],
    ],
]

params.pipelineJobs.each { job ->
    pipelineJob(job.name) {
        definition {
            cpsScm {
                // assume admin and branch master as a default, look for Jenkinsfile
                def repo = job.repo ?: 'mycorp/admin'
                def branch = job.branch ?: 'master'
                def jenkinsFilename = job.jenkinsFilename ?: 'Jenkinsfile'
                scm {
                    git("ssh://git@github.com/${repo}.git", branch)
                }
                scriptPath(jenkinsFilename)
            }
        }
        properties {
            pipelineTriggers {
                triggers {
                    if (job.upstream) {
                        upstream {
                            upstreamProjects("${job.upstream}")
                            threshold('SUCCESS')
                        }
                    }
                }
            }
        }
    }
}
Disadvantages
There are still a bunch of disadvantages you've to consider.
Jenkins Rebuilds are Painful
In general we rebuild our Jenkins instances quite frequently. With the approach outlined here in place, you've to allow the groovy script execution after the first Seedjob execution, and then go through at least another round of run the job, allow permissions, run the job, until it's finally all up and running.
Copy around Jenkinsfile
Whenever you create a new project you've to copy around Jenkinsfiles for each and every stage and modify the variables at the top accordingly.
Keep the Seedjob definitions and Jenkinsfile in Sync
You not only have to copy the Jenkinsfile around, but you also have to keep the variables and names in sync with what you define for the Seedjob. Sadly the pipeline env vars are not available outside of the pipeline when we execute the groovy parts.
Kudos
This setup was crafted with a lot of help by Michael and Eric.

21 December 2020

Sven Hoexter: docker buildx sugar - dumping results to disk

The latest docker 20.10.x release unlocks the buildx subcommands which allow for some sugar, like building something in a container and dumping the result to your local directory in one command.
Dockerfile
FROM docker-registry.mycorp.com/debian-node:lts as builder
USER service
COPY . /opt/service
RUN cd /opt/service; npm install; npm run build
FROM scratch as dist
COPY --from=builder /opt/service/dist /
build with
docker buildx build --target=dist --output type=local,dest=$(pwd)/pages/ .
Here we build a page, copy the result with all assets from the /opt/service/dist directory to an empty image and dump it into the local pages directory.

20 December 2020

Sven Hoexter: Mock a Serial pty Device with socat

Another note to myself before I forget about this nifty usage of socat again. I was looking for something to mock a serial device, similar to a microcontroller which usually ends up as /dev/ttyACM0 and might output some text. What I found is a very helpful post on stackoverflow showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: ${i}" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty, here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
socat is an awesome tool; looking through the manpage you need some knowledge about sockets, but it's incredibly versatile.

14 October 2020

Sven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona enforced live session music with youtube-dl. Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.
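From a quick look at the manpage, detox usage should be roughly like this - a sketch only, check detox(1) for the exact options:
detox -n -r Downloads/   # dry run, only show what would be renamed
detox -r Downloads/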

18 September 2020

Sven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past months. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard. The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.
gh functions and aliases
# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"
#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"
### github support functions, most invoked with a PR ID as $1
#primary function to review PRs
function ghs {
    gh pr view $1
    gh pr checks $1
    gh pr diff $1
}
# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}
# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r $1
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}
# get an overview over the files changed in a PR
function ghf {
    gh pr diff $1 | diffstat -l
}
# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}
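As an illustration, handing someone a WebUI link for the commit currently checked out would then be (the commit selection is just an example):
ghlink "$(git rev-parse HEAD)"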
wtfutil
I have a terminal covering half my screen size with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:
github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github
The -label:dependencies is used here to filter out dependabot PRs in the dashboard.
Workflow
Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.
Security Considerations
The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access, everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

8 September 2020

Sven Hoexter: Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management, and have customers which use MITM boxes to break up TLS 1.3 connections. Be aware that right now some heuristic rules in the Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is none human browser traffic". While technically correct - the TLS connection hitting the Cloudflare Edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare Edge, you might want to reconsider that at the moment and sent a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still having some customers. I've a confirmation for this happening with Cisco WSA, but it's likely to be also the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in with more enterprises rolling out modern browser. You can now enter youself here a rant about how bad the client-server internet of 2020 is, and how bad it is that some of us rely on Cloudflare, and that they have accumulated a way too big market share. But the world is as it is. :(

24 August 2020

Sven Hoexter: google cloud buster images without python 2

Update
I have to stand corrected. noahm@ wrote me, because the Debian Cloud Image maintainer only ever included python explicitly in Azure images. The most likely explanation for the change in the Google images is that Google just ported the last parts of their own software to python 3, and subsequently removed python 2. With some relief one can just conclude it's only our own fault that we did not build our own images, which include all our own dependencies. Take it as a reminder to always build your own images. Always. Be it VMs or docker. Build your own image.
Original Post
Fun in the morning: we realized that the Debian Cloud image builds dropped python 2 and that propagated to the Google provided Debian/buster images. So in case you use something like ansible, and so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3. We just try to suffer it through now, and set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html
Another lesson to learn here: the GCE Debian stable images are not stable. Blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.
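For reference, the relevant ansible.cfg bit is just this, with interpreter_python living in the [defaults] section:
[defaults]
interpreter_python = auto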

14 August 2020

Sven Hoexter: Retrieve WLAN PSK via nmcli

Note to myself so I do not have to figure that out every few months when I've to dig out a WLAN PSK from my existing configuration. Step 1: Figure out the UUID of the network:
$ nmcli con show
NAME                  UUID                                  TYPE      DEVICE          
br-59d010130b86       d8672d3d-7cf6-484f-9ef8-e6ec3e73bef7  bridge    br-59d010130b86 
FRITZ!Box 7411        1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2  wifi      wlp4s0
[...]
Step 2: Request to view the PSK for this network based on the UUID
$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
802-11-wireless-security.psk:           0815471123420511111
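Depending on the nmcli version you can also skip the UUID lookup and query by connection name in one go; something like this should work (from memory, -s shows secrets, -g picks the field, check nmcli(1)):
$ nmcli -s -g 802-11-wireless-security.psk con show 'FRITZ!Box 7411'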

13 August 2020

Sven Hoexter: An Average IT Org

Supply chain attacks are a known issue, and also lately there was a discussion around the relevance of reproducible builds. Looking in comparison at an average IT org doing something with the internet, I believe the pressing problem is neither supply chain attacks nor a lack of reproducible builds. The real problem is the amount of prefabricated binaries, supplied by someone else and created in an unknown build environment with unknown tools, the average IT org requires to do anything.
The Mess the World Runs on
By chance I had an opportunity to look at what some other people I know use, and here is the list I could compile by scratching just at the surface: Of course there are also all the script language repos - Python, Ruby, Node/Typescript - around as well. Looking at myself, who's working in a different IT org but with a similar focus, I have the following lingering around on my work laptop, retrieved as a binary from a 3rd party: Yes, some of that is even non-free and might contain spyw^telemetry.
Takeaway I
By guessing based on the Pareto Principle, probably 80% of the software mentioned above is also open source software. But, and here we leave Pareto behind, close to none is built by the average IT org from source. Why should the average IT org care about advanced issues like supply chain attacks on source code and mitigations, when it already gets into very hot water the day DockerHub closes down, HashiCorp moves from open core to full proprietary or Elastic decides to no longer offer free binary builds? The reality out there seems to be that the infrastructure of "modern" IT orgs is managed similar to the Windows 95 installation of my childhood. You just grab running binaries from somewhere and run them. The main difference seems to be that you no longer have the inconvenience of downloading a .xls from geocities you've to rename to .rar, and that it's legal.
Takeaway II
In the end the binary supply is like a drug for the user, and somehow the Debian project is also just another dealer / middle man in this setup. There are probably a lot of open questions to think about in that context. Are we the better dealer because we care about signed sources we retrieve from upstream and because we engage in reproducible build projects? Are our own means of distributing binaries any better than a binary download from github via https with a manual checksum verification, or the Debian repo at download.docker.com? Is the approach of the BSD/Gentoo ports, where you have to compile at least some software from source, the better one? Do I really want to know how some of the software is actually built? Or some more candid ones like: is gnutls a good choice for the https support in apt, and how solid is the gnupg code base? Update: Regarding apt there seems to be some movement.

17 July 2020

Sven Hoexter: Debian and exFAT

As some might have noticed, we now have Linux 5.7 in Debian/unstable and subsequently the in-kernel exFAT implementation created by Samsung available. Thus we now have two exFAT implementations: the exfat fuse driver and the Linux based one. Since some comments and mails I received showed minor confusion, especially around the available tooling, it might help to clarify a bit what is required when.
Using the Samsung Linux Kernel Implementation
Probably the most common use case is that you just want to use the in-kernel implementation. Easy: install a Linux 5.7 kernel package for your architecture and either remove the exfat-fuse package or make sure you've version 1.3.0-2 or later installed. Then you can just run mount /dev/sdX /mnt and everything should be fine. Your result will look something like this:
$ mount | grep sdb
/dev/sdb on /mnt type exfat (rw,relatime,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)
In the past this basic mount invocation utilized the mount.exfat helper, which was just a symlink to the helper which is shipped as /sbin/mount.exfat-fuse. The link was dropped from the package in 1.3.0-2. If you're running a not so standard setup, and would like to keep an older version of exfat-fuse installed, you must invoke mount -i to prevent mount from loading any helper to mount the filesystem. See also man 8 mount. For those who care, mstone@ and myself had a brief discussion about this issue in #963752, which quickly brought me to the conclusion that it's in the best interest of a majority of users to just drop the symlink from the package.
Sticking to the Fuse Driver
If you would like to stick to the fuse driver you can of course just do it. I plan to continue to maintain all packages for the moment. Just keep the exfat-fuse package installed and use the mount.exfat-fuse helper directly. E.g.
$ sudo mount.exfat-fuse /dev/sdb /mnt
FUSE exfat 1.3.0
$ mount | grep sdb
/dev/sdb on /mnt type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
In case this is something you would like to make permanent, I would recommend that you create yourself a mount.exfat symlink pointing at the mount.exfat-fuse helper.
mkfs and fsck - exfat-utils vs exfatprogs
Beside the filesystem access itself we now also have two implementations of tooling which support filesystem creation, fscking and label adjustment. The older one is exfat-utils by the creator of the fuse driver, which is also part of Debian since the fuse driver was packaged in 2012. New in Debian is the exfatprogs package written by the Samsung engineers. And here the situation starts to get a bit more complicated. Both packages ship a mkfs.exfat and fsck.exfat tool, so we can not co-install them. In the end both packages declare a conflict with each other at the moment. As outlined in this thread I do not plan to overcomplicate the situation by using the alternatives system. I do feel strongly that this would just create more confusion without a real benefit. Since the tools do not have matching cli options, that could also cause additional issues and confusion. I plan to keep that as is, at least for the bullseye release. Afterwards it's possible, depending on how the usage evolves, to drop the mkfs.exfat and fsck.exfat from exfat-utils, they are in fact again only symlinks. A pain point might be tools interfacing with the differing implementations. Currently I see only three reverse dependencies, so that should be manageable to consolidate if required. Last but not least it might be relevant to mention that the exfat-utils package also contains a dumpexfat tool which could be helpful if you're more into forensics, or looking into other lower level analysis of an exFAT filesystem. Thus there is a bit of an interest to have those tools co-installed in some - I would say - niche cases.
buster-backports
Well, if you use buster with a backports kernel you're a bit on your own. In case you want to keep the fuse driver installed, but would still like to mount, e.g. for testing, with the kernel exFAT driver, you must use mount -i. I do not plan any uploads to buster-backports. If you need a mkfs.exfat on buster, I would recommend to just use the one from exfat-utils for now. It has been good enough for the past years and should not get sour before the bullseye release, which ships exfatprogs for you.
Kudos
My sincere kudos go to:

16 May 2020

Sven Hoexter: Quick and Dirty: masquerading / NAT with nftables

Since nftables is now the new default, a short note to myself on how to setup masquerading, like the usual NAT setup you use on a gateway.
nft add table nat
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif wlan0 masquerade
In this case the wlan0 is basically the "WAN" interface, because I use an old netbook as a wired to wireless network adapter.
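To make this survive a reboot, roughly the same ruleset can go into /etc/nftables.conf, which the nftables service loads. A sketch derived from the commands above, not a drop-in for every setup (Debian's default file starts with a flush ruleset line, keep that if present):
table ip nat {
        chain postrouting {
                type nat hook postrouting priority 100;
                ip saddr 192.168.1.0/24 oif "wlan0" masquerade
        }
}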

19 April 2020

Sven Hoexter: Emulating Raspi2 like hardware with Rasbian in 2020

To follow some older (as in two years old) ARM assembler howto, I searched for a quick and dirty way to run a current Raspbian with qemu 4.2 on Debian/unstable. The end result are the following notes to get that up and running:
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official Raspbian image without X
wget -O raspbian_lite_latest.zip https://downloads.raspberrypi.org/raspbian_lite_latest
unzip raspbian_lite_latest.zip
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2020-02-13-raspbian-buster-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 +2G
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,15200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user pi with password raspberry
sudo -i
# enable ssh
systemctl enable ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
Now I can login via ssh and start to play:
ssh pi@localhost -p 5555
So for me that is sufficient: I have network connectivity to install an editor, transfer files, and can otherwise work with tmux to have some session multiplexing.
Additional Notes

2 April 2020

Sven Hoexter: New TLDs and Automatic link detection was a bad idea

Update: Seems this is a Firefox specific bug in the Slack web application; it works in Chrome and the Slack Electron application as it should. Tested with Firefox ESR on Debian/buster and Firefox 74 on OS X. Ah, I like it that we now have so many TLDs, and matching on those seems to go bad more often now. Last occasion is Slack (which I think is a pile of shit written by morons, but that is a different story) which somehow does not properly match on .co domains. Leading to this auto linking: nsswitch.conf link Now I'm not sure if someone encountered the same issue, or people just registered random domains just because they could. I found registrations for I've a few more .conf files in /etc which could be interesting in an IT environment, but for the sake of playing with it I registered nsswitch.co at godaddy. I do not want to endorse them in any way, but for the first year it's only 13.08EUR right now, which is okay to pay for a stupid demo. So if you feel like it, you can probably register something stupid for yourself to play around with. I do not intend to renew this domain next year, so be aware of what happens then with the next owner.
