In my day job someone took the time in the team daily today to explain
his research into why some of our configuration is wrong. He had spent quite
some time on his own looking at the git history and how everything
was set up initially, and how it ended up in the current - wrong - way. That triggered
me to validate that quickly, another 5min of work. So we agreed
to change it. A one line change, nothing spectacular, but lifetime was
invested to figure out why it should have a different value.
When the pull request was opened a few minutes later there was nothing
of that story in the commit message. Zero, nada, nothing. I'm really
puzzled why someone invests lifetime to dig into company internal history
to try to get something right, gives a lengthy explanation to the whole team,
uses the time of others, even mentions that there was no explanation of
why it's no longer the default value it should be, and then repeats the same
mistake by not writing down anything in the commit message.
For the current company I'm inclined to propose a commit message validator.
For a potential future company I might join, I guess I'll ask for real world
git logs from repositories I'd contribute to. Seems that this is another
valuable source of information to qualify the company culture. Right next to
the existence of whiteboards in the office.
I'm really happy that at least a majority of the people contributing to Debian
write somewhat decent commit messages and changelogs. Let that be a reminder
to myself to improve in that area the next time I have to change something.
I recently bought a Banana Pi BPI-M5, which uses the Amlogic S905X3 SoC: these are my notes about installing Debian on it.
While this SoC is supported by the upstream U-Boot it is not supported by the Debian U-Boot package, so debian-installer does not work. Do not be fooled by seeing the DTB file for this exact board being distributed with debian-installer: all DTB files are, and it does not mean that the board is supposed to work.
As I documented in #1033504, the Debian kernels are currently missing some patches needed to support the SD card reader.
I started by downloading an Armbian Banana Pi image and booted it from an SD card. From there I partitioned the eMMC, which always appears as /dev/mmcblk1:
Make sure to leave enough space before the first partition, or else U-Boot will overwrite it: as is common for many ARM SoCs, U-Boot lives somewhere in the gap between the MBR and the first partition.
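For reference, a minimal sketch of such a layout; the 16MiB offset is an assumption that comfortably clears the U-Boot area, adjust it and the partitioning to your needs:

# single partition, leaving the first 16MiB unallocated for U-Boot
parted /dev/mmcblk1 --script mklabel msdos mkpart primary ext4 16MiB 100%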
I looked at Armbian's /usr/lib/u-boot/platform_install.sh and installed U-Boot by manually copying it to the eMMC:
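The exact offsets and file names come from that script; since it is not reproduced here, treat the following as a sketch of the usual Amlogic pattern, with u-boot.bin as a placeholder for the image you actually copy:

# write the U-Boot image right after the MBR, keeping sector 0 intact
dd if=u-boot.bin of=/dev/mmcblk1 bs=512 seek=1 conv=fsync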
I wanted to have a fully working flash-kernel, so I used Armbian's boot.scr as a template to create /etc/flash-kernel/bootscript/bootscr.meson and then added a custom entry for the Banana Pi to /etc/flash-kernel/db:
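The entry looks roughly like the following; the Machine string must match /proc/device-tree/model and the DTB-Id must match the kernel you run, so treat these values as assumptions to verify:

Machine: Banana Pi BPI-M5
Kernel-Flavors: arm64
DTB-Id: amlogic/meson-sm1-bananapi-m5.dtb
Boot-Script-Path: /boot/boot.scr
U-Boot-Script-Name: bootscr.meson
Required-Packages: u-boot-tools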
All things considered, I do not think that I would recommend Amlogic-based boards to Debian users, since there are many other better supported SoCs.
I know a few people hold on to the exFAT fuse implementation due to its
support for timezone offsets, so here is a small update for you.
Andrew released 1.4.0, which includes the timezone offset support that
was so far only part of the git master branch. It also fixes a,
from my point of view very minor, security issue,
CVE-2022-29973.
In addition to that it's the first build with fuse3 support. If you
still use this driver, pick it up in experimental (we're in the bookworm freeze
right now), and give it a try. I'm personally not using it anymore beyond a very
basic "does it mount" test.
Long ago I applied for my dream job at a company I have wanted to work for since its beginning, and I wasn't ready technically. Fast forward to now: I am ready! A big thank you goes out to Blue Systems for that. So I went out, found the perfect role, and started the application process. The process was months long, but it was going very well; I passed the interviews and the technical with flying colors. I got to the end, where the hiring lead told me he was submitting my offer. I was so excited, so much so that I told my husband and parents I got the job! I know, I jinxed myself there. Soon I received the "There was a problem..." message. One obscure assessment called GIA came back not so good. I remember that day: we were in the middle of a long series of winter storms, and when I took the test my kitten decided right then it was "me" time. I couldn't very well throw her out into the snowstorm, so I continued on the best I could. It is my fault; it clearly states to be distraction free. So I spoke again to the hiring lead and we both felt that with my experience and technical knowledge and abilities we could still move forward. I still had hope. After some time passed, I asked for an update and got the dreaded rejection. I am told it wasn't just the GIA, but that I am not a good overall fit for the company. In one fell swoop my dreams are dashed and final, for this and all roles within that company. I wasn't given a reason either. I am devastated, heartbroken, and shocked. I get along with everyone, I exceed the technical requirements, and I work well in the community. Dream door closed.
I will not let this get me down. I am moving on. I will find my place where I fit in.
With that said, I no longer have the will, passion, or drive to work on snaps. I will leave instructions with Jonathon as to what needs to be done to move forward. The good news is my core22 kde-neon extension was merged into upstream snapcraft, so whoever takes over will have a much easier time knocking them out. @kubuntu-council: I will do whatever it takes to pay back the money for the hardware you provided me to do snaps, I am truly sorry about this.
What does my future hold? I will still continue with my Debian efforts. In fact, I have ventured out from the KDE umbrella and joined the go-team. I am finalizing my packaging for
https://github.com/charmbracelet/gum
and its dependencies: roff, mango, mango-kong. I had my first golang patch for a failing test and have submitted it upstream. I will upload these to experimental while the freeze is on.
I will be moving all the libraries in the mycroft team to the python umbrella as they are useful for other things and mycroft is no more.
During the holidays I was tinkering around with selenium UI testing and stumbled on some accessibility issues within KDE, so I think this is a good place for me to dive into for my KDE contributions.
I have been approached to collaborate with OpenOS on a few things; time permitting, I will see what I can do there.
I have a possible gig to do some websites, while I move forward in my job hunt.
I will not give up! I will find my place where I fit in.
Meanwhile, I must ask for donations to get us by. Anything helps, thank you for your consideration.
https://gofund.me/a9c36b87
This is the report for the Debian Python Team remote sprint
that took place on December 2-3-4 2022.
Many thanks to those who participated, namely:
Étienne Mollier (emollier)
Taihsiang Ho (tai271828)
Athos Ribeiro (athos)
Stuart Prescott (stuart)
Louis-Philippe Véronneau (pollo)
Ileana Dumitrescu (ildumi)
James Valleroy (jvalleroy)
Emmanuel Arias (eamanu)
Kurt Kremitzki (kkremitzki)
Mohammed Bilal (rmb)
Stefano Rivera (tumbleweed)
Jeroen Ploemen (jcfp)
Here is a list of issues we worked on:
pybuild autodep8 feature
About a year ago, Antonio Terceiro contributed code to pybuild to
make it possible to automatically run the upstream test suite as autopkgtests.
This feature has now been merged and uploaded to unstable. Although you can
find out more about it in the pybuild-autopkgtest manpage, an email
providing more details should be sent to the debian-python mailing list
relatively soon.
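If I understand the feature correctly, opting a package in boils down to declaring the autodep8 test suite in debian/control; treat this line as a sketch and check the pybuild-autopkgtest manpage for the authoritative details:

Testsuite: autopkgtest-pkg-pybuild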
Fixing packages that run tests via python3 setup.py test
Last August, Stefano Rivera poked the team about the deprecation
of the python3 setup.py test command to run tests in pybuild. Although this
feature has been deprecated upstream for 6 years now, many packages
in the archive still use it to run the upstream test suite during build.
Around 29 of the 67 packages that are team-maintained by the Debian Python Team
were fixed during the sprint. Ideally, all of them would be fixed before the feature
is removed from pybuild.
If a package you maintain still runs this command, please consider fixing it!
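One common way to fix such a package, assuming its upstream test suite runs fine under pytest, is a small debian/rules switch like this:

# debian/rules: have pybuild run the tests with pytest instead of setup.py test
export PYBUILD_TEST_PYTEST=1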
Fixing packages that use nose
nose, provided by the python3-nose package, is an obsolete testing
framework for Python and has been unmaintained since 2015.
During the sprint, people worked on fixing some of the many bugs filed against
packages still running tests via nose, but there are still around 240
packages affected by this issue in the archive.
Again, if a package you maintain still runs this command, please consider
fixing it!
Removal of the remaining Python2 packages
With the upload of dh-python 5.20221202, Stefano Rivera officially removed
support for dh_python2 and dh_pypy, thus closing the "Python2 removal in
sid/bullseye" bug.
It seems some work still needs to be done for complete Python2
removal from Sid, but I expect this will be done in time for the
Bookworm release.
Working on Lintian tags for the Team
During the sprint, I managed to work on some Lintian issues that we had
targeted, namely:
the addition of a "missing-cpython-extension" tag to flag packages
that do not build CPython extensions for all the supported Python versions.
I also worked on a few other Lintian tags, but they were unrelated to the
Debian Python Team itself.
I'm also happy to report many of the tags I wrote for the team in the past few
months were merged by the awesome Russ Allbery and should land in unstable as
soon as a new release is made.
I'm particularly looking forward to the new "uses-python-distutils" tag
that should help us flag packages that still use the deprecated distutils
library.
Patching distro-tracker (tracker.debian.org) to show pending team MRs
It's often hard to have a good overview of pending merge requests when working
with team-maintained packages, as by default, Salsa doesn't notify anyone when
a MR is opened.
Although our workflow typically does not involve creating merge requests, some
people still do and they end up sitting there, unnoticed.
During the sprint, Kurt Kremitzki worked on solving this issue by having
distro-tracker show the pending MRs on our team's tracker page.
Sadly, it seems little progress was made, as the removal of
python3-django-jsonfield from the archive and breaking changes in
python3-selenium have broken the test suite.
Migrate packages building with the flit plugin to the generic pyproject one
pybuild has been supporting building with PEP-517 style pyproject.toml files
via a generic plugin (pybuild-plugin-pyproject) for a while now.
As this plugin supersedes the old flit plugin, we've been thinking of
deprecating it in time for the Bookworm release.
To make this possible, most of the packages in the archive that still used this
plugin were migrated to the generic one and I opened bugs on the last handful
of packages that were not team-maintained.
Other work
Many other things were done during the sprint, such as:
improving existing packages (for example, adding or fixing autopkgtests)
Thanks
Thanks again to everyone who joined the sprint, and three big cheers for all
the folks who donate to Debian and made it possible for us to have a
food budget for the event.
If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provides cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers to not worry about building in otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and simply rely on the services offered by the service mesh.

In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant re-write when it transitioned from the JVM to a Go-based control plane and a Rust-based data plane a few years back, is now part of the CNCF, and is backed by Buoyant. It has proven itself widely for use in production workloads and has a healthy community and release cadence.

It achieves this with a side-car container that communicates with a Linkerd control plane, which allows central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the container. This has made the task of implementing the application services much simpler, as it is managed by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.

Here are the broad architectural components of Linkerd: the components are separated into the control plane and the data plane.

The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, policy on access control for inter-service communication, and service profile information on routing, retries and timeouts. The identity service acts as the Certificate Authority which responds to Certificate Signing Requests (CSRs) from proxies for initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that injects the Linkerd proxy side-car and the init container automatically into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.

On the data plane side are two components. First, the init container, which is responsible for automatically forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, a lightweight micro-proxy written in Rust, which is the data plane itself.

I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster.

Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl is installed and available on the PATH.
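The CLI can be installed with the script published by the Linkerd project; this is a sketch of the documented approach, have a look at the script before piping it to a shell:

$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh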
On most systems, this should be sufficient to setup the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the cli:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for pre-requisites:
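That step is the check subcommand with the --pre flag:

$ linkerd check --pre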
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API

kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version

pre-kubernetes-setup
--------------------
control plane namespace does not already exist
can create non-namespaced resources
can create ServiceAccounts
can create Services
can create Deployments
can create CronJobs
can create ConfigMaps
can create Secrets
can read Secrets
can read extension-apiserver-authentication configmap
no clock skew detected

linkerd-version
---------------
can determine the latest version
cli is up-to-date

Status check results are
All the pre-requisites appear to be good right now, and so installation can proceed. The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd cli only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you need to pipe the output to kubectl apply to create the resources in the cluster that you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components and you should see the following when you list the pods:
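The listing itself is omitted here; a sketch of the commands involved is below, where kubectl get pods shows the control plane pods and linkerd check produces the status output that follows:

$ kubectl get pods -n linkerd
$ linkerd check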
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API

kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version

linkerd-existence
-----------------
'linkerd-config' config map exists
heartbeat ServiceAccount exist
control plane replica sets are ready
no unschedulable pods
control plane pods are ready
cluster networks contains all pods
cluster networks contains all services

linkerd-config
--------------
control plane Namespace exists
control plane ClusterRoles exist
control plane ClusterRoleBindings exist
control plane ServiceAccounts exist
control plane CustomResourceDefinitions exist
control plane MutatingWebhookConfigurations exist
control plane ValidatingWebhookConfigurations exist
proxy-init container runs as root user if docker container runtime is used

linkerd-identity
----------------
certificate config is valid
trust anchors are using supported crypto algorithm
trust anchors are within their validity period
trust anchors are valid for at least 60 days
issuer cert is using supported crypto algorithm
issuer cert is within its validity period
issuer cert is valid for at least 60 days
issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
proxy-injector webhook has valid cert
proxy-injector cert is valid for at least 60 days
sp-validator webhook has valid cert
sp-validator cert is valid for at least 60 days
policy-validator webhook has valid cert
policy-validator cert is valid for at least 60 days

linkerd-version
---------------
can determine the latest version
cli is up-to-date

control-plane-version
---------------------
can retrieve the control plane version
control plane is up-to-date
control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
control plane proxies are healthy
control plane proxies are up-to-date
control plane proxies and cli versions match

Status check results are
Everything looks good.

Setting up the Viz Extension

At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides good visualization capabilities that will come in handy subsequently. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
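A sketch of how to list them (output omitted):

$ kubectl get pods -n linkerd-viz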
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.

Injecting the Linkerd Data Plane components

There are two ways to integrate Linkerd with the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components

Inject Linkerd data plane manually

Let's try the first option. Below is a simple nginx-app that I will deploy into the cluster:
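The manifest is a plain Deployment; the following is only a minimal sketch of it, with the name nginx-app and the image tag being assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        # assumption: any recent nginx image is fine for this demo
        image: nginx:latest
        ports:
        - containerPort: 80

It is deployed the usual way with kubectl apply -f.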
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, so it doesn't show any metrics and the Meshed count is 0. Looking at the Pod's deployment YAML, I can see that it only includes the nginx container:
Let's directly inject the linkerd data plane into this running container. We do this by retrieving the YAML of the deployment, piping it to the linkerd cli to inject the necessary components, and then piping the changed resources to kubectl apply:
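A sketch of that pipeline, assuming the deployment from above is named nginx-app:

$ kubectl get deployment nginx-app -o yaml | linkerd inject - | kubectl apply -f -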
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane. Looking at the updated Pod definition, we see a number of changes that linkerd has injected to allow it to integrate with the control plane. Let's have a look:
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, similar to the process of installing and using the viz extension, and try out their capabilities.

Inject Linkerd data plane automatically

In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission hook to execute and inject the linkerd data plane components automatically at the time of deployment, as shown below.
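A minimal sketch of where the annotation goes in the deployment's pod template:

spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled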
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.

Uninstalling Linkerd

Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation. The first step would be to remove extensions, such as viz.
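The removal follows the same render-and-pipe pattern, this time into kubectl delete; a sketch for the viz extension and then the control plane itself:

$ linkerd viz uninstall | kubectl delete -f -
$ linkerd uninstall | kubectl delete -f -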
tl;dr: OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(), seems to be
fixed in 3.0.5/3.0.2. The issue manifests at least in stunnel
and keepalived on CentOS 9. In addition I learned the hard way that running a
not so recent VirtualBox version on Debian bullseye led to dh parameter generation
crashing in libcrypto in bn_sqr8x_internal().
A recent rabbit hole I went down. The actual bug in openssl was nailed down and
documented by
Quentin Armitage on GitHub in keepalived
My bugreport with all back and forth in the RedHat Bugzilla is
#2128412.
Act I - Hello stunnel, this is the OOMkiller Calling
We started to use stunnel on Google Cloud compute engine instances running CentOS 9.
The loadbalancer in front of those instances used a TCP health check to validate the
backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap
segment growing quite quickly.
Act II - Reproducing the Issue
While I'm not the biggest fan of VirtualBox and Vagrant I've to admit it's quite
nice to just fire up a VM image, and give other people a chance to recreate that
setup as well. Since VirtualBox is no longer released with Debian/stable I just
recompiled what was available in unstable at the time of the bullseye release, and
used that. That enabled me now to just start a CentOS 9 VM, setup stunnel with a
minimal config, grab netcat and a for loop and watch the memory grow.
E.g. while true; do nc -z localhost 2600; sleep 1; done
To my surprise, in addition to the memory leak, I also observed some crashes but
did not yet care too much about those.
Act III - Wrong Suspect, a Workaround and Bugreporting
Of course the first idea was that something must be wrong in stunnel itself. But
I could not find any recent bugreports. My assumption is that there are
still a few people around using CentOS and stunnel, so someone else should probably
have seen it before. Just to be sure I recompiled the latest stunnel package from
Fedora. Didn't change anything. Next I recompiled it without almost all the patches
Fedora/RedHat carries. Nope, no progress.
Next idea: Maybe this is related to the fact that we do not initiate a TLS context
after connecting? So we changed the test case from nc to openssl s_client, and
the loadbalancer healthcheck from TCP to a TLS based one. Tada, a workaround, no
more memory leaking.
In addition I gave Fedora (they have Vagrant Virtualbox images in the "Cloud"
Spin, e.g.
here for Fedora 36)
and my local Debian installation a try. No leaks experienced on either.
Next I reported
#2128412.
Act IV - Crash in libcrypto and a VirtualBox Bug
When I moved with the test case from the Google Cloud compute instance to my
local VM I encountered some crashes. That morphed into a real problem when I
started to run stunnel with gdb and valgrind. All crashes happened in libcrypto
bn_sqr8x_internal() when generating new dh parameter (stunnel does that for
you if you do not use static dh parameter). I quickly worked around that by
generating static dh parameter for stunnel.
After some back and forth I suspected VirtualBox as the culprit. Recompiling
the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works
without any changes. Upgrading actually fixed that issue.
Epilog
I highly appreciate that RedHat, with all the bashing around the future of
CentOS, still works on community contributed bugreports. My kudos go to
Clemens Lang.
Now that the root cause is clear, I guess RedHat will push out a fix for the
openssl 3.0.1 based release they have in RHEL/CentOS 9. Until that is available
at least stunnel and keepalived are known to be affected. If you run stunnel
on something public it's not that pretty, because even a low rate of TCP
connections will result in a DoS condition.
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official Raspbian image without X
wget https://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2022-04-07/2022-04-04-raspios-bullseye-armhf-lite.img.xz
unxz 2022-04-04-raspios-bullseye-armhf-lite.img.xz
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-armhf-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 4G
# make sure we get a user account setup
echo "me:$(echo 'test123' openssl passwd -6 -stdin)" > userconf
sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt
sudo mv userconf /mnt
sudo guestunmount /mnt
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
-kernel kernel-qemu-4.4.1-vexpress -no-reboot \
-smp 2 -serial stdio \
-dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
-append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,15200 loglevel=8" \
-nic user,hostfwd=tcp::5555-:22
# login at the serial console as user me with password test123
sudo -i
# enable ssh
systemctl enable ssh
systemctl start ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
tl;dr I ported a part of the python-suntime library to Lua to use it on OpenWRT and
RutOS powered devices.
suntime.lua
There are those unremarkable things which let you pause for
a moment, and realize what a great gift of our time open source software
and open knowledge is. At some point in time someone figured out
how to calculate the sunrise and sunset time on the current date for
your location. Someone else wrote that up and probably again a different
person published it on the internet. The
Internet Archive preserved a copy of it so I can still link to it. Someone
took this algorithm and published a code sample on StackOverflow, which was later on used by the SatAgro guys
to create the
python-suntime library.
Now I could come along, copy the core function of this library, and convert it
within a few hours - mostly spent learning a bit of Lua - into a working script
fulfilling my needs.
Sometimes it's nice for testing purposes to have the OpenWRT
userland available locally. Since there is an x86 build
available, one can just run it within qemu.
wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
gunzip openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
qemu-img convert -f raw -O qcow2 openwrt-21.02.1-x86-64-generic-squashfs-combined.img openwrt-21.02.1.qcow2
qemu-img resize openwrt-21.02.1.qcow2 200M
qemu-system-x86_64 -M q35 \
-drive file=openwrt-21.02.1.qcow2,id=d0,if=none,bus=0,unit=0 \
-device ide-hd,drive=d0,bus=ide.0 -nic user,hostfwd=tcp::5556-:22
# you've to change the network configuration to retrieve an IP via
# dhcp for the lan bridge br-lan
vi /etc/config/network
- change option proto 'static' to 'dhcp'
- remove IP address and netmask setting
/etc/init.d/network restart
# now you should've an ip out of 10.0.2.0/24
ssh root@localhost -p 5556
# remember ICMP does not work but otherwise you should have
# IP networking available
opkg update
opkg install curl
Wasted quite some hours until I found a working Modeline in this
stack exchange post
so the ThinkPad works with an HDMI-attached Samsung QHD display.
Internal display of the ThinkPad is a FHD display detected as eDP-1,
the external one is DP-3 and according to the packaging known by
Samsung as
S24A600NWU.
The auto-detected EDID modes for QHD - 2560x1440 - did not work at all, the display simply stays
dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm,
with and without modesetting, the following Modeline did the magic:
Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one
extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log.
From the auto-detected Modelines FHD - 1920x1080 - did work. In case someone struggles
with a similar setup, that might be a starting point. Fun part, if I attach my several years old
Dell E7470 everything is just fine out of the box. But that one just has an Intel GPU and not
the unholy combination I've here:
Some time ago I looked briefly at an Envertech data logger
for small scale photovoltaic setups. Turned out that PV inverters are kinda
unreliable, and you really have to monitor them to notice downtimes and defects.
Since my pal shot for a quick win I've cobbled together another Python script
to query the portal at
www.envertecportal.com, and report
back if the generated power is down to 0. The script is currently run on a vserver
via cron and reports back via the system MTA. So yeah, you need to have something
like that already at hand.
Script and Configuration
You've to provide your PV system's location with latitude and longitude so the
script can calculate
(via python3-suntime)
the sunrise and sunset times. At the location we deal with we expect to generate
some power at least from sunrise + 1h to sunset - 1h. That is tunable via the
configuration option toleranceSeconds.
Retrieving the stationId is a bit ugly because it's not provided via any API,
instead it's rendered serverside into the website. So I just logged in on the
portal and picked it up by looking into the page source.
www.envertecportal.com API
I guess this is some classic in the IoT land, but neither the documentation
provided on the portal frontpage as docx, nor the API docs at port 8090, are complete and correct. The few bits I gathered via the
Firefox Web Developer Tools are listed below; a curl sketch of the flow follows the list:
Login https://www.envertecportal.com/apiaccount/login - POST, send userName and
pwd containing your login name and password. The response JSON is very explicit if
your login was not successful and why.
Store the session cookie called ASP.NET_SessionId for use on all subsequent
requests.
Retrieve station info https://www.envertecportal.com/ApiStations/getStationInfo -
POST, send ASP.NET_SessionId and stationId with the ID of the station. Returns
a JSON with an object named Data. The field Power contains the currently
generated power as a float with two digits (e.g. 0.01).
Logout https://www.envertecportal.com/apiAccount/Logout - POST, send
ASP.NET_SessionId.
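To illustrate the flow of those three calls, here is a rough curl sketch; the credentials and the stationId are placeholders:

# login and store the session cookie
curl -s -c cookies.txt -d 'userName=myuser' -d 'pwd=mypassword' \
    https://www.envertecportal.com/apiaccount/login
# query the station info, Data.Power holds the currently generated power
curl -s -b cookies.txt -d 'stationId=0123456789abcdef' \
    https://www.envertecportal.com/ApiStations/getStationInfo
# logout again
curl -s -b cookies.txt -X POST https://www.envertecportal.com/apiAccount/Logout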
Some Surprises
There were a few surprises, maybe they help others dealing with an Envertech setup.
The portal truncates passwords at 16 chars.
The "Forget Password?" function mails you back the password in plain text
(that's how I learned about 1.).
The login API endpoint reporting the exact reason why the login failed is somewhat
out of fashion. Though this one is probably not a credential stuffing target because
there is no money to make, so don't care.
The data logger reports the data to www.envertecportal.com at port 10013.
There is some checksumming done on the reported data, but the system is not replay
safe. So you can send it any valid data string at a later time and get wrong data
recorded.
People at forum.fhem.de
decoded some values but could not figure out the checksumming so far.
8.5 years ago, I moved my
blog to
Ikiwiki and
Branchable. It's now time for me to take the
next step and host my blog on my own server. This is how I migrated from
Branchable to my own Apache server.
Installing Ikiwiki dependencies
Here are all of the extra Debian packages I had to install on my server:
and un-commented the following in /etc/apache2/mods-available/mime.conf:
AddHandler cgi-script .cgi
Creating a separate user account
Since Ikiwiki needs to regenerate my blog whenever a new article is pushed
to the git repo or a comment is accepted, I created a restricted user
account for it:
adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog
git setup
Thanks to Branchable storing blogs in git repositories, I was able to import my
blog using a simple git clone in /home/blog (the srcdir):
Note that the name of the directory (source.git) is important for the
ikiwikihosting plugin to work.
Then I pulled the .setup file out of the setup branch in that repo and put
it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the
setup branch and the origin remote from that clone:
git branch -d setup
git remote rm origin
Following the recommended git
configuration, I created a working directory
(the repository) for the blog user to modify the blog as needed:
cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud
I added my own ssh public key to /home/blog/.ssh/authorized_keys
so that I could push to the srcdir from my laptop.
Finally, I generated a new ssh key without a passphrase:
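Something along these lines; the key type and file name below are assumptions:

ssh-keygen -t ed25519 -N '' -f /home/blog/.ssh/id_ed25519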
One thing that failed to generate properly was the tag cloud (from the
pagestats plugin). I have not
been able to figure out why it fails to generate any output when run this
way, but if I push to the repo and let the git hook handle the rebuilding of
the wiki, the tag cloud is generated correctly. Consequently, fixing this
is not high on my list of priorities, but if you happen to know what the
problem is, please reach out.
Apache config
Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:
a2ensite blog
apache2ctl configtest
systemctl restart apache2.service
The feeds.cloud.geek.nz domain used to be pointing to
Feedburner and so I need to
maintain it in order to avoid breaking RSS feeds from folks who added my
blog to their reader a long time ago.
Server-side improvements
Since I'm now in control of the server configuration, I was able to make
several improvements to how my blog is served.
First of all, I enabled the HTTP/2 and Brotli modules:
a2enmod http2
a2enmod brotli
and enabled Brotli
compression
by putting the following in /etc/apache2/conf-available/compression.conf:
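A typical mod_brotli snippet for this purpose looks like the following; the MIME type list is an assumption, adjust it to taste:

<IfModule mod_brotli.c>
    AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml text/css text/javascript application/javascript application/json image/svg+xml
</IfModule>

It is enabled with a2enconf compression followed by an Apache reload.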
Note that the Mozilla Observatory is mistakenly identifying HTTP onion
services as insecure, so you can ignore that failure.
I also used the Mozilla TLS config
generator
to improve the TLS config for my server.
Then I added security.txt and
gpc.json to the root
of my git repo and then added the following aliases to put these files in
the right place:
Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt
Monitoring 404s
Another advantage of running my own web server is that I can monitor the
404s easily using logcheck by
putting the following in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/blog-error.log
Based on that, I added a few redirects to point bots and users to the
location of my RSS feed:
Future improvements
There are a few things I'd like to improve on my current setup.
The first one is to remove the ikiwikihosting and gitpush
plugins and replace them with a
small script which would simply git push to the read-only GitHub mirror.
Then I could uninstall the ikiwiki-hosting-common and
ikiwiki-hosting-web packages,
since that's all I use them for.
Next, I would like to have proper support for signed git
pushes. At the
moment, I have the following in /home/blog/source.git/config:
but I'd like to also reject unsigned pushes.
While my blog now has a CSP policy which doesn't rely on unsafe-inline for
scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this
but the actual calls to allow seemed to be located deep within jQuery and so I gave up.
Update: now fixed.
Finally, I'd like to figure out a good way to deal with articles which don't
currently have comments. At the moment, if you try to subscribe to their
comment feed, it returns a 404. For example:
[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom
This is obviously not ideal since many feed readers will refuse to add a
feed which is currently not found even though it could become real in the future. If
you know of a way to fix this, please let me know.
It's a gross hack but works for now. To prevent overly sensitive mic
settings autotuned by the browser in web conferences, I currently edit
/usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf as root.
In [Element Capture], change the setting volume from merge to 80.
The config block as a whole looks like this:
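With that change applied, the block reads roughly as follows; the remaining lines are the stock ones shipped by PulseAudio, quoted from memory, so double check them against your file:

[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right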
This is another post about EVM/IMA whose main purpose is providing useful web search results for problems. However, if reading it on a planet feed inspires someone to play with EVM/IMA, then that's good too; it's interesting technology.
When using EVM/IMA in the Linux kernel, if dmesg has errors like op=appraise_data cause=missing-HMAC, the missing-HMAC means that the error code in the kernel source is INTEGRITY_NOLABEL, which has the comment "No security.evm xattr". You can check for the xattr on a file with the following command (this example has the security.evm xattr):
# getfattr -d -m - /etc/fstab
getfattr: Removing leading '/' from absolute path names
# file: etc/fstab
security.evm=0sAwICqGOsfwCAvgE9y9OP74QxJ/I+3eOSF2n2dM51St98z/7LYHFd9rfGTvssvhTSYL9G8cTdRAH8ozggJu7VCzggW1REoTjnLcPeuMJsrMbW3DwVrB6ldDmJzyenLMjnIHmRDDeK309aRbLVn2ueJZ07aMDcSr+sxhOOAQ/GIW4SW8L1AKpKn4g=
security.ima=0sAT+Eivfxl+7FYI+Hr9K4sE6IieZ+
security.selinux="system_u:object_r:etc_t:s0"
If dmesg has errors like op=appraise_data cause=invalid-HMAC, the invalid-HMAC means that the error code in the kernel source is INTEGRITY_FAIL, which has the comment "Invalid HMAC/signature".
These errors are from the evm_verifyxattr() function in Linux kernel 5.11.14.
The error "evm: HMAC key is not set" means that the evm key is not initialised; the key needs to be loaded into the kernel and EVM initialised with the command echo 1 > /sys/kernel/security/evm (or possibly some equivalent from a utility like evmctl). When the key is loaded the kernel gives the message "evm: key initialized" and after that /sys/kernel/security/evm is read-only. If there is something wrong with the key the kernel gives the message "evm: key initialization failed"; it seems that the way to determine if your key is good is to try writing 1 to /sys/kernel/security/evm and see what happens. After that the command cat /sys/kernel/security/evm should return 3.
The Gentoo wiki has good documentation on how to create and load the keys, which has to be done before initialising EVM [1]. I'll write more about that in another post.
The dovecot version which will be released with bullseye seems to require some
subtle config adjustment if you
use ssl (ok, that should be almost everyone)
and you would like to execute doveadm as a user who can not read the ssl cert and
keys (quite likely).
I guess one of the common cases is executing doveadm pw, e.g. if you use
postfixadmin. For myself
that manifested in the nginx error log, which I use in combination with php-fpm, as:
2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal:
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied
You can easily see the same error message if you just execute something like doveadm pw -p test123.
The workaround is to move your ssl configuration to a new file which is only readable by root,
and create a dummy one which disables ssl and has a !include_try on the real one. Maybe
best explained by showing the modification:
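A sketch of what that can look like; the file name and the cert/key paths are assumptions, the important parts are the !include_try and the restrictive permissions on the real file:

# /etc/dovecot/conf.d/10-ssl.conf - world readable dummy
ssl = no
# real settings live in a file only root can read (mode 0600)
!include_try /etc/dovecot/conf.d/10-ssl-real.conf

# /etc/dovecot/conf.d/10-ssl-real.conf
ssl = yes
ssl_cert = </etc/dovecot/private/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.key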
FTP master
This month I only accepted 8 packages and, like last month, rejected 0. Despite the holidays 293 packages got accepted.
Debian LTS
This was my seventy-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
This month my all in all workload has been 26h. During that time I did LTS uploads of:
[DLA 2489-1] minidlna security update for two CVEs
[DLA 2490-1] x11vnc security update for one CVE
[DLA 2501-1] influxdb security update for one CVE
[DLA 2511-1] highlight.js security update for one CVE
Unfortunately package slirp has the same version in Stretch and Buster. So I first had to upload slirp/1:1.0.17-11 to unstable, in order to be allowed to fix the CVE in Buster and to finally upload a new version to Stretch. Meanwhile the fix for Buster has been approved by the Release Team and I am waiting for the next point release now.
I also prepared a debdiff for influxdb, which will result in DSA-4823-1 in January.
As new CVEs appeared for openjpeg2, I did not do an upload yet. This is planned for January now.
Last but not least I did some days of frontdesk duties.
Debian ELTS
This month was the thirtieth ELTS month.
During my allocated time I uploaded:
ELA-341-1 for highlight.js
As well as for LTS, I did not finish work on all CVEs of openjpeg2, so the upload is postponed to January.
Last but not least I did some days of frontdesk duties.
Unfortunately I also had to give back some hours.
Other stuff
This month I uploaded new upstream versions of:
With these uploads I finished the libosmocom- and libctl-transitions.
The Debian Med Advent Calendar was again really successful this year. There was no new record, but with 109 the second highest number of bugs was closed.
year    number of bugs closed
2011    63
2012    28
2013    73
2014    5
2015    150
2016    95
2017    105
2018    81
2019    104
2020    109
Well done everybody who participated. It is really nice to see that Andreas is no longer a lone wolf.
Jenkins in the Ops space is in general already painful. Lately the deprecation of the
multiple-scms plugin caused
some headache, because we relied heavily on it to generate pipelines in a
Seedjob based on the structure inside
secondary repositories. We kind of started from scratch now and ship parameterized
pipelines defined in Jenkinsfiles in those secondary repositories. Basically that
is the way it should be: you store the pipeline definition along with the code you'd
like to execute. In our case that is mostly terraform and ansible.
Problem
Directory structure is roughly "stage" -> "project" -> "service".
We'd like to have one job pipeline per project, which dynamically
reads all service folder names and offers those as available
parameters. A service folder is the smallest entity we manage with
terraform in a separate state file.
Now Jenkins pipelines are intentionally limited, but you can add some
groovy at will if you whitelist the usage in Jenkins. You have to
click through some security prompts though to make it work.
Jenkinsfile
This is basically a commented version of the Jenkinsfile we copy
now around as a template, to be manually adjusted per project.
// Syntax: https://jenkins.io/doc/book/pipeline/syntax/
// project name as we use it in the folder structure and job name
def TfProject = "myproject-I-dev"
// directory relative to the repo checkout inside the jenkins workspace
def jobDirectory = "terraform/dev/${TfProject}"
// informational string to describe the stage or project
def stageEnvDescription = "DEV"

/* Attention please if you rebuild the Jenkins instance consider the following:
   - You've to run this job at least *thrice*. It first has to checkout the
     repository, then you've to add permissions for the groovy part, and on
     the third run you can gather the list of available terraform folders.
   - As a safeguard the first folder name is always the invalid string
     "choose-one". That prevents accidental execution of a random project.
   - If you add a new terraform folder you've to run the "choose-one" dummy rollout so
     the dynamic parameters pick up the new folder. */

/* Here we hardcode the path to the correct job workspace on the jenkins host, and
   discover the service folder list. We have to filter it slightly to avoid temporary
   folders created by Jenkins (like @tmp folders). */
List tffolder = new File("/var/lib/jenkins/jobs/terraform ${TfProject}/workspace/${jobDirectory}").listFiles().findAll { it.isDirectory() && it.name ==~ /(?i)[a-z0-9_-]+/ }.sort()

/* ensure the "choose-one" dummy entry is always the first in the list, otherwise
   initial executions might execute something. By default the first parameter is
   used if none is selected */
tffolder.add(0, "choose-one")

pipeline {
    agent any

    /* Show a choice parameter with the service directory list we stored
       above in the variable tffolder */
    parameters {
        choice(name: "TFFOLDER", choices: tffolder)
    }

    // Configure logrotation and coloring.
    options {
        buildDiscarder(logRotator(daysToKeepStr: "30", numToKeepStr: "100"))
        ansiColor("xterm")
    }

    // Set some variables for terraform to pick up the right service account.
    environment {
        GOOGLE_CLOUD_KEYFILE_JSON = '/var/lib/jenkins/cicd.json'
        GOOGLE_APPLICATION_CREDENTIALS = '/var/lib/jenkins/cicd.json'
    }

    stages {
        stage('TF Plan') {
            /* Make sure on every stage that we only execute if the
               choice parameter is not the dummy one. Ensures we
               can run the pipeline smoothly for re-reading the
               service directories. */
            when { expression { params.TFFOLDER != "choose-one" } }
            steps {
                /* Initialize terraform and generate a plan in the selected
                   service folder. */
                dir("${params.TFFOLDER}") {
                    sh 'terraform init -no-color -upgrade=true'
                    sh 'terraform plan -no-color -out myplan'
                }
                // Read in the repo name we act on for informational output.
                script {
                    remoteRepo = sh(returnStdout: true, script: 'git remote get-url origin').trim()
                    echo "INFO: job *${JOB_NAME}* in *${params.TFFOLDER}* on branch *${GIT_BRANCH}* of repo *${remoteRepo}*"
                }
            }
        }

        stage('TF Apply') {
            /* Run terraform apply only after manual acknowledgement, we have to
               make sure that the when condition is actually evaluated before
               the input. Default is input before when. */
            when {
                beforeInput true
                expression { params.TFFOLDER != "choose-one" }
            }
            input {
                message "Cowboy would you really like to run **${JOB_NAME}** in **${params.TFFOLDER}**"
                ok "Apply ${JOB_NAME} to ${stageEnvDescription}"
            }
            steps {
                dir("${params.TFFOLDER}") {
                    sh 'terraform apply -no-color -input=false myplan'
                }
            }
        }
    }
    post {
        failure {
            // You can also alert to noisy chat platforms on failures if you like.
            echo "job failed"
        }
    }
}
job-dsl side of the story
Having all those when conditions in the pipeline stages above allows us to
create a dependency between successful Seedjob executions and just let that trigger
the execution of the pipeline jobs. This is important because the Seedjob
execution itself will reset all pipeline jobs, so your dynamic parameters are gone.
By making sure we can re-execute the job, and doing that automatically, we still
have up to date parameterized pipelines, whenever the Seedjob ran successfully.
The job-dsl script looks like this:
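Roughly, such a job-dsl definition can look like the following sketch; the job name, repository URL, credentials id and script path are placeholders:

pipelineJob("terraform myproject-I-dev") {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url("https://git.example.com/infra/myproject.git")
                        credentials("jenkins-git")
                    }
                    branches("main")
                }
            }
            // the Jenkinsfile shown above lives in the secondary repository
            scriptPath("terraform/dev/myproject-I-dev/Jenkinsfile")
        }
    }
}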
Disadvantages
There are still a bunch of disadvantages you've to consider
Jenkins Rebuilds are Painful
In general we rebuild our Jenkins instances quite frequently. With the
approach outlined here in place, you've to allow the groovy script execution
after the first Seedjob execution, and then go through at least another
round of run the job, allow permissions, run the job, until it's finally
all up and running.
Copy around Jenkinsfile
Whenever you create a new project you've to copy around Jenkinsfiles for each
and every stage and modify the variables at the top accordingly.
Keep the Seedjob definitions and Jenkinsfile in Sync
You not only have to copy the Jenkinsfile around, but you also have to
keep the variables and names in sync with what you define for the Seedjob.
Sadly the pipeline env-vars are not available outside of the pipeline when
we execute the groovy parts.
Kudos
This setup was crafted with a lot of help by
Michael and
Eric.
The latest docker 20.10.x release unlocks the buildx subcommands
which allow for some sugar, like building something in a container
and dumping the result to your local directory in one command.
Dockerfile
FROM docker-registry.mycorp.com/debian-node:lts as builder
USER service
COPY . /opt/service
RUN cd /opt/service; npm install; npm run build
FROM scratch as dist
COPY --from=builder /opt/service/dist /
Here we build a page, copy the result with all assets from the /opt/service/dist
directory to an empty image, and dump it into the local pages directory:
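The invocation that goes with this Dockerfile is roughly the following; the pages directory name matches the description above, the rest is standard buildx syntax:

$ docker buildx build --target dist -o type=local,dest=pages .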
Another note to myself before I forget about this nifty usage
of socat again.
I was looking for something to mock a serial device, similar to
a microcontroller which usually ends up as /dev/ttyACM0 and might
output some text. What I found is a very helpful post on
stackoverflow
showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: ${i}" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty,
here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
socat is an awesome tool; looking through the manpage you need some knowledge about
sockets, but it's incredibly versatile.