
23 February 2023

Paul Tagliamonte: Announcing hz.tools

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.

If you're on the Fediverse, I'd very much appreciate boosts on my announcement toot!
Ever since 2019, I've been learning about how radios work, and trying to learn about using them the hard way: by writing as much of the stack as is practical (for some value of practical) myself. I wrote my first Hello World in 2018, which was a simple FM radio player that used librtlsdr to read in an IQ stream, did some filtering, and played the real-valued audio stream via pulseaudio. Over 4 years this has slowly grown, through persistence, lots of questions to too many friends to thank (although I will try), and the eternal patience of my wife hearing about radios nonstop for years, into a number of Go repos that can do quite a bit, and support a handful of radios.

I've resisted making the repos public not out of embarrassment or a desire to keep secrets, but rather as an attempt to keep myself free of any maintenance obligations to users, so that I could freely break my own API and add and remove API surface as I saw fit. The worst case would be for this project to feel like work, and I can't imagine avoiding that if I wind up frustrated by PRs that get ahead of me, solving problems I didn't yet know about, or by bugs I didn't understand the fix for.

As my rate of changes to the most central dependencies has slowed, I've begun to entertain the idea of publishing them. After a bit of back and forth, I've decided it's time to make a number of them public, and to start working on them in the open, since I've built up a bit of knowledge in the space and feel confident that the repos don't contain overt lies. That's not to say they don't contain lies, but those lies are likely hidden and lurking in the dark. Beware. That being said, it shouldn't be a surprise that I've not published everything yet, for the same reasons as above. I plan to open repos as their rate of change slows and I understand the problems each library solves well enough, or if a project dead-ends and I've stopped learning.

Intention behind hz.tools

It's my sincere hope that my repos help to make Software Defined Radio (SDR) code a bit easier to understand, and serve as an understandable framework to learn with. It's a large codebase, but one that is possible to sit down and understand because, well, it was written by a single person. Frankly, I'm also not productive enough in my free time, in the middle of the night and on weekends and holidays, to create a codebase that's too large to understand, I hope! I remain wary of this project turning into work, so my goal is to be very upfront about my boundaries, and the limits of what classes of contributions I'm interested in seeing. Here are some goals of open sourcing these repos:
  • I do want this library to be used to learn with. Please go through it all and use it to learn about radios and how software can control them!
  • I am interested in bugs if there's a problem you discover. Such bugs are likely a great chance for me to fix something I've misunderstood or typoed.
  • I am interested in PRs fixing bugs you find. I may need a bit of back and forth to fully understand the problem if I do not understand the bug and fix yet. I hope you may have some grace if it's taking a long time.
Here's a list of some anti-goals of open sourcing these repos:
  • I do not want this library to become a critical dependency of an important project, since I do not have the time to deal with the maintenance burden. Putting me in that position is going to make me very uncomfortable.
  • I am not interested in feature requests; the features have grown as I've hit problems, and I'm not interested in building or maintaining features for features' sake. The API surface should be exposed enough to allow others to experiment with such things out-of-tree.
  • I'm not interested in clever code replacing clear code without a very compelling reason.
  • I use GNU/Linux (specifically Debian), and from time to time I've made sure that my code runs on OpenBSD too. Platforms beyond that will likely not be supported at the expense of either of those two. I'll take fixes for bugs that fix a problem on another platform, but not damage the code to work around issues / lack of features on other platforms (like Windows).
I'm not saying all this to be a jerk; I do it to make sure I can continue on my journey to learn about how radios work without my full-time job becoming maintaining a radio framework single-handedly for other people to use, even if it means I need to close PRs or bugs without merging them or fixing the issue. With all that out of the way, I'm very happy to announce that the repos are now public under github.com/hztools.

Should you use this?

Probably not. The intent here is not to provide a general-purpose Go SDR framework for everyone to build on, although I am keenly aware it looks and feels like one, since that's what it is to me. This is a learning project, so anyone with needs beyond joining me in learning should use something like GNU Radio or a similar framework that has a community behind it. In fact, I suspect most contributors ought to be contributing to GNU Radio, and not this project. If I can encourage people to do so: contribute to GNU Radio! Nothing makes me happier than seeing GNU Radio continue to be the go-to, and well supported. Consider donating to GNU Radio!

hz.tools/rf - Frequency types

The hz.tools/rf library contains the abstract concept of frequency, some very basic helpers to interact with frequency ranges (such as frequency range math), as well as frequencies themselves with some very basic conversions (to meters, etc) and parsers (to parse values like 10MHz). This ensures that all the hz.tools libraries have a shared understanding of Frequencies, a standard way of representing ranges of Frequencies, and the ability to handle the IO boundary with things like CLI arguments, JSON or YAML. The git repo can be found at github.com/hztools/go-rf, and is importable as hz.tools/rf.
// Parse a frequency using hz.tools/rf.ParseHz, and print it to stdout.
freq := rf.MustParseHz("-10kHz")
fmt.Printf("Frequency: %s\n", freq+rf.MHz)
// Prints: 'Frequency: 990kHz'

// Return the Intersection between two RF ranges, and print
// it to stdout.
r1 := rf.Range{rf.KHz, rf.MHz}
r2 := rf.Range{rf.Hz(10), rf.KHz * 100}
fmt.Printf("Range: %s\n", r1.Intersection(r2))
// Prints: Range: 1000Hz->100kHz
These can be used to represent tons of things - ranges can be used for things like the tunable range of an SDR, the bandpass of a filter or the frequencies that correspond to a bin of an FFT, while frequencies can be used for things such as frequency offsets or the tuned center frequency.
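To make the FFT-bin case concrete, here is a minimal plain-Go sketch (independent of hz.tools/rf; the function and its parameters are made up for illustration) that maps a bin of an fftSize-point FFT back to the slice of spectrum it covers, assuming a complex FFT with DC in bin 0 and the usual wraparound for negative frequencies:

// binRange returns the [low, high) range in Hz covered by FFT bin
// `bin`, for a capture tuned to centerHz at sampleRate samples/sec.
func binRange(centerHz, sampleRate float64, fftSize, bin int) (float64, float64) {
	binWidth := sampleRate / float64(fftSize)
	// Bins at or above fftSize/2 represent negative frequency offsets.
	offset := bin
	if bin >= fftSize/2 {
		offset = bin - fftSize
	}
	low := centerHz + float64(offset)*binWidth
	return low, low + binWidth
}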

hz.tools/sdr - SDR I/O and IQ Types

This is the big one. This library contains the majority of the shared types and bindings, and is likely the most useful place to look when learning about the IO boundary between a program and an SDR. The git repo can be found at github.com/hztools/go-sdr, and is importable as hz.tools/sdr. This library is designed to look like (and in some cases, mirror) the Go io idioms, so that it feels as idiomatic as it can, so that Go builtins interact with IQ in a way that's possible to reason about, and to avoid reinventing the wheel by designing new API surface. While some of the API looks like (and is even called) the same thing as a similar function in io, the implementation is usually a lot more naive, and may have unexpected sharp edges such as concurrency issues or performance problems. The following IQ types are implemented using the sdr.Samples interface. The hz.tools/sdr package contains helpers for conversion between types (see the sketch after the table below), and some basic manipulation of IQ streams.
IQ Format | hz.tools Name | Underlying Go Type
Interleaved uint8 (rtl-sdr) | sdr.SamplesU8 | [][2]uint8
Interleaved int8 (hackrf, uhd) | sdr.SamplesI8 | [][2]int8
Interleaved int16 (pluto, uhd) | sdr.SamplesI16 | [][2]int16
Interleaved float32 (airspy, uhd) | sdr.SamplesC64 | []complex64
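Those conversion helpers mostly boil down to rescaling interleaved integers into complex numbers. A minimal plain-Go sketch of the idea (not the actual hz.tools/sdr implementation):

// i8ToC64 converts interleaved int8 IQ samples (SamplesI8-style data)
// into complex64 samples (SamplesC64-style data).
func i8ToC64(in [][2]int8) []complex64 {
	out := make([]complex64, len(in))
	for i, s := range in {
		// Rescale the int8 range [-128, 127] into roughly [-1, 1].
		out[i] = complex(float32(s[0])/128, float32(s[1])/128)
	}
	return out
}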
The following SDRs have implemented drivers in-tree.
SDR | Format | RX/TX | State
rtl | u8 | RX | Good
HackRF | i8 | RX/TX | Good
PlutoSDR | i16 | RX/TX | Good
rtl kerberos | u8 | RX | Old
uhd | i16/c64/i8 | RX/TX | Good
airspyhf | c64 | RX | Exp
The following major packages and subpackages exist at the time of writing:
Import | What is it?
hz.tools/sdr | Core IQ types, supporting types and implementations that interact with the byte boundary.
hz.tools/sdr/rtl | sdr.Receiver implementation using librtlsdr.
hz.tools/sdr/rtl/kerberos | Helpers to enable coherent RX using the Kerberos SDR.
hz.tools/sdr/rtl/e4k | Helpers to interact with the E4000 RTL-SDR dongle.
hz.tools/sdr/fft | Interfaces for performing an FFT, which are implemented by other packages.
hz.tools/sdr/rtltcp | sdr.Receiver implementation for rtl_tcp servers.
hz.tools/sdr/pluto | sdr.Transceiver implementation for the PlutoSDR using libiio.
hz.tools/sdr/uhd | sdr.Transceiver implementation for UHD radios, specifically the B210 and B200mini.
hz.tools/sdr/hackrf | sdr.Transceiver implementation for the HackRF using libhackrf.
hz.tools/sdr/mock | Mock SDR for testing purposes.
hz.tools/sdr/airspyhf | sdr.Receiver implementation for the AirspyHF+ Discovery with libairspyhf.
hz.tools/sdr/internal/simd | SIMD helpers for IQ operations, written in Go ASM. This isn't the best to learn from, and it contains pure Go implementations alongside.
hz.tools/sdr/stream | Common Reader/Writer helpers that operate on IQ streams.

hz.tools/fftw - hz.tools/sdr/fft implementation

The hz.tools/fftw package contains bindings to libfftw3 to implement the hz.tools/sdr/fft.Planner type to transform between the time and frequency domain. The git repo can be found at github.com/hztools/go-fftw, and is importable as hz.tools/fftw. This is the default throughout most of my codebase, although that default is only expressed at the leaf package; libraries should not hardcode the use of this library, in favor of taking an fft.Planner, unless it's used as part of testing. There are a bunch of ways to do an FFT out there; things like clFFT or a pure-Go FFT implementation could be plugged in depending on what's being solved for.
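As a reference point for what any fft.Planner backend computes, here is a naive O(n²) DFT in plain Go; it is nothing like what libfftw3 does internally, but it produces the same transform a plan would, just slowly:

import (
	"math"
	"math/cmplx"
)

// dft computes the discrete Fourier transform of a block of
// time-domain samples, returning the frequency-domain bins.
func dft(in []complex128) []complex128 {
	n := len(in)
	out := make([]complex128, n)
	for k := 0; k < n; k++ {
		for t := 0; t < n; t++ {
			angle := -2 * math.Pi * float64(k) * float64(t) / float64(n)
			out[k] += in[t] * cmplx.Exp(complex(0, angle))
		}
	}
	return out
}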

hz.tools/{fm,am} - analog audio demodulation and modulation

The hz.tools/fm and hz.tools/am packages contain demodulators for AM analog radio and FM analog radio. This code is a bit old, so it has a lot of room for cleanup, but it'll do a very basic demodulation of IQ to audio. The git repos can be found at github.com/hztools/go-fm and github.com/hztools/go-am, and are importable as hz.tools/fm and hz.tools/am. As a bonus, the hz.tools/fm package also contains a modulator, which has been tested on the air and with some of my handheld radios. This code is a bit old, since the hz.tools/fm code is effectively the first IQ processing code I'd ever written, but it still runs and I run it from time to time.
// Basic sketch for playing FM radio using a reader stream from
// an SDR or other IQ stream.

bandwidth := 150 * rf.KHz
reader, err = stream.ConvertReader(reader, sdr.SampleFormatC64)
if err != nil {
	...
}
demod, err := fm.Demodulate(reader, fm.DemodulatorConfig{
	Deviation:  bandwidth / 2,
	Downsample: 8, // some value here depending on sample rate
	Planner:    fftw.Plan,
})
if err != nil {
	...
}
speaker, err := pulseaudio.NewWriter(pulseaudio.Config{
	Format:     pulseaudio.SampleFormatFloat32NE,
	Rate:       demod.SampleRate(),
	AppName:    "rf",
	StreamName: "fm",
	Channels:   1,
	SinkName:   "",
})
if err != nil {
	...
}

buf := make([]float32, 1024*64)
for {
	i, err := demod.Read(buf)
	if err != nil {
		...
	}
	if i == 0 {
		panic("...")
	}
	if err := speaker.Write(buf[:i]); err != nil {
		...
	}
}
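For reference, the heart of an FM demodulator like the one above is recovering the phase change between successive IQ samples. A minimal sketch of just that step in plain Go (not the hz.tools/fm implementation, which also filters and downsamples):

import "math/cmplx"

// discriminate converts IQ samples into instantaneous frequency: the
// angle of s[n] * conj(s[n-1]) is the phase advance per sample, which
// for FM is (a scaled copy of) the modulating audio.
func discriminate(iq []complex128) []float64 {
	out := make([]float64, 0, len(iq))
	for i := 1; i < len(iq); i++ {
		out = append(out, cmplx.Phase(iq[i]*cmplx.Conj(iq[i-1])))
	}
	return out
}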

hz.tools/rfcap - byte serialization for IQ data

The hz.tools/rfcap package is the reference implementation of the rfcap spec, and is how I store IQ captures locally and how I send them across a byte boundary. The git repo can be found at github.com/hztools/go-rfcap, and is importable as hz.tools/rfcap. If you're interested in storing IQ in a way others can use, the better approach is to use SigMF; rfcap exists for cases like using UNIX pipes to move IQ around, through APIs, or when I send IQ data through an OS socket, to ensure the sample format (and other metadata) is communicated with it. rfcap has a number of limitations; for instance, it cannot express a change in frequency or sample rate during the capture, since the header is fixed at the beginning of the file.
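To illustrate why that limitation follows from the design, here is a hypothetical fixed header in the spirit of rfcap, but NOT the actual rfcap wire format (field names and widths are made up): everything describing the stream is written once, up front, so nothing about the stream can change mid-capture.

// captureHeader is a made-up example of a fixed capture-file header.
// Since it is written exactly once at offset 0, a frequency or sample
// rate change halfway through the capture has nowhere to be recorded.
type captureHeader struct {
	Magic        [4]byte // file type marker
	SampleFormat uint32  // e.g. an enum for u8/i8/i16/c64
	CenterFreq   uint64  // tuned center frequency in Hz
	SampleRate   uint32  // samples per second
}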

23 July 2021

Evgeni Golov: It's not *always* DNS

Two weeks ago, I had the pleasure to play with Foreman's Kerberos integration and iron out a few long-standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as there is no more mod_auth_kerb available. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8 and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works on CentOS 7 (even if upgrading from a mod_auth_kerb installation) and CentOS 8. Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh? Well, and then I dared to test the same on Debian. Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could not have ever worked. Luckily Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server. Let's dig deeper! To fetch the keytab, the installer does roughly this:
# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com
And if one executes that by hand to see the actual error, you see:
# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)
Well, yeah, the principal looks kinda weird (no realm) and the interwebs say for "kinit: Cannot determine realm for host":
  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)
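For reference, both of those settings live in krb5.conf; an illustrative fragment (values are examples, not taken from this setup):

[libdefaults]
    default_realm = EXAMPLE.COM

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM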
And guess what, all of these are perfectly set by ipa-client-install when joining the realm. But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config were right, but what about gethostname(2)?
# hostname
foreman
Bingo! Let's see what happens if we force that to be an FQDN?
# hostname foreman.example.com
# kinit -k
NO ERRORS! NICE! We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks like on Debian, if we force the hostname to be short. Interesting. Is it a version difference between the systems?
  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8
So, something changed in 1.18? Looking at the krb5 1.18 changelog, the following entry jumps out: Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix. Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it shows that it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18, and before that gethostname(2) must return an FQDN. I've documented that for our users and can now sleep a bit better. At least it wasn't DNS, right?! Btw, freeipa won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.

27 May 2021

Michael Prokop: What to expect from Debian/bullseye #newinbullseye

Bullseye Banner, Copyright 2020 Juliette Taka. Debian v11 with codename bullseye is supposed to be released as the new stable release soon-ish (let's hope for June 2021! :)). Similar to what we had with #newinbuster and previous releases, now it's time for #newinbullseye! I was the driving force at several of my customers to be well prepared for bullseye before its freeze, and since then we're on good track there overall. In my opinion, Debian's release team did (and still does) a great job; I'm very happy about how unblock requests (not only mine but also ones I kept an eye on) were handled so far. As usual with major upgrades, there are some things to be aware of, and hereby I'm starting my public notes on bullseye that might also be useful for other folks. My focus is primarily on server systems, looking at things from a sysadmin perspective.

Further readings

Of course, start with taking a look at the official Debian release notes; make sure to especially go through What's new in Debian 11 + Issues to be aware of for bullseye. Chris published notes on upgrading to Debian bullseye, and also anarcat published upgrade notes for bullseye.

Package versions

As a starting point, let's look at some selected packages and their versions in buster vs. bullseye as of 2021-05-27 (mainly having amd64 in mind):
Package buster/v10 bullseye/v11
ansible 2.7.7 2.10.8
apache 2.4.38 2.4.46
apt 1.8.2.2 2.2.3
bash 5.0 5.1
ceph 12.2.11 14.2.20
docker 18.09.1 20.10.5
dovecot 2.3.4 2.3.13
dpkg 1.19.7 1.20.9
emacs 26.1 27.1
gcc 8.3.0 10.2.1
git 2.20.1 2.30.2
golang 1.11 1.15
libc 2.28 2.31
linux kernel 4.19 5.10
llvm 7.0 11.0
lxc 3.0.3 4.0.6
mariadb 10.3.27 10.5.10
nginx 1.14.2 1.18.0
nodejs 10.24.0 12.21.0
openjdk 11.0.9.1 11.0.11+9 + 17~19
openssh 7.9p1 8.4p1
openssl 1.1.1d 1.1.1k
perl 5.28.1 5.32.1
php 7.3 7.4+76
postfix 3.4.14 3.5.6
postgres 11 13
puppet 5.5.10 5.5.22
python2 2.7.16 2.7.18
python3 3.7.3 3.9.2
qemu/kvm 3.1 5.2
ruby 2.5.1 2.7+2
rust 1.41.1 1.48.0
samba 4.9.5 4.13.5
systemd 241 247.3
unattended-upgrades 1.11.2 2.8
util-linux 2.33.1 2.36.1
vagrant 2.2.3 2.2.14
vim 8.1.0875 8.2.2434
zsh 5.7.1 5.8
Linux Kernel

The bullseye release will ship a Linux kernel based on v5.10 (v5.10.28 as of 2021-05-27, with v5.10.38 pending in unstable/sid), whereas buster shipped kernel 4.19. As usual there are plenty of changes in the kernel area and this might warrant a separate blog entry, but to highlight some issues:
  • One surprising change might be that the scrollback buffer (Shift + PageUp) is gone from the Linux console. Make sure to always use screen/tmux, or handle output through a pager of your choice, if you need all of it and you're in the console.
  • The kernel provides BTF support (via CONFIG_DEBUG_INFO_BTF, see #973870), which means it's no longer necessary to install LLVM, Clang, etc (requiring >100MB of disk space); see Gregg's excellent blog post regarding the underlying rationale. Sadly the libbpf-tools packaging didn't make it into bullseye (#978727), but if you want to use your own self-made Debian packages, my notes might be useful.
  • With kernel version 5.4, SUBDIRS support was removed from kbuild, so if an out-of-tree kernel module (like a *-dkms package) fails to compile on bullseye, make sure to use a recent version of it which uses M= or KBUILD_EXTMOD= instead.
  • Unprivileged user namespaces are enabled by default (see #898446 + #987777), so programs can create more restricted sandboxes without the need to run as root or via a setuid-root helper. If you prefer to keep this feature restricted (or tools like web browsers, WebKitGTK, Flatpak, don't work), use sysctl -w kernel.unprivileged_userns_clone=0.
  • The /boot/System.map file(s) no longer provide the actual data; you need to switch to the dbg package if you rely on that information:
% cat /boot/System.map-5.10.0-6-amd64 
ffffffffffffffff B The real System.map is in the linux-image-<version>-dbg package
Be aware though, that the *-dbg package requires ~5GB of additional disk space.

Systemd

systemd v247 made it into bullseye (updated from v241). As with the kernel, this might warrant a separate blog entry, but to mention some highlights: systemd in bullseye activates its persistent journal functionality by default (storing its files in /var/log/journal/, see #717388). systemd-timesyncd is no longer part of the systemd binary package itself, but is available as a standalone package. This allows usage of ntp, chrony, openntpd, etc. without having systemd-timesyncd installed (which prevents race conditions like #889290, which was biting me more than once). journalctl gained new options:
--cursor-file=FILE      Show entries after cursor in FILE and update FILE
--facility=FACILITY...  Show entries with the specified facilities
--image=IMAGE           Operate on files in filesystem image
--namespace=NAMESPACE   Show journal data from specified namespace
--relinquish-var        Stop logging to disk, log to temporary file system
--smart-relinquish-var  Similar, but NOP if log directory is on root mount
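For example (unit name and cursor path made up here), the cursor file makes it easy to resume reading the journal exactly where a previous run stopped:

% journalctl -u myapp.service --cursor-file=/var/tmp/myapp.cursor
% journalctl --facility=auth --since=today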
systemctl gained new options:
clean UNIT...                       Clean runtime, cache, state, logs or configuration of unit
freeze PATTERN...                   Freeze execution of unit processes
thaw PATTERN...                     Resume execution of a frozen unit
log-level [LEVEL]                   Get/set logging threshold for manager
log-target [TARGET]                 Get/set logging target for manager
service-watchdogs [BOOL]            Get/set service watchdog state
--with-dependencies                 Show unit dependencies with 'status', 'cat', 'list-units', and 'list-unit-files'
 -T --show-transaction              When enqueuing a unit job, show full transaction
 --what=RESOURCES                   Which types of resources to remove
--boot-loader-menu=TIME             Boot into boot loader menu on next boot
--boot-loader-entry=NAME            Boot into a specific boot loader entry on next boot
--timestamp=FORMAT                  Change format of printed timestamps
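For example (unit name made up here), clean and freeze/thaw look like this in practice:

% systemctl clean --what=cache myapp.service
% systemctl freeze myapp.service
% systemctl thaw myapp.service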
If you use systemctl edit to adjust overrides, you'll now also get the existing configuration file listed as a comment, which I consider very helpful. The MACAddressPolicy behavior with systemd naming schema v241 changed for virtual devices (I plan to write about this in a separate blog post). There are plenty of new manual pages. systemd also gained new unit configurations related to security hardening. Another new unit configuration is SystemCallLog=, which supports listing the system calls to be logged. This is very useful for auditing, or temporarily when constructing system call filters. The cgroupv2 change is also documented in the release notes, but to explicitly mention it here as well, quoting from /usr/share/doc/systemd/NEWS.Debian.gz:
systemd now defaults to the unified cgroup hierarchy (i.e. cgroupv2).
This change reflects the fact that cgroups2 support has matured
substantially in both systemd and in the kernel.
All major container tools nowadays should support cgroupv2.
If you run into problems with cgroupv2, you can switch back to the previous,
hybrid setup by adding systemd.unified_cgroup_hierarchy=false to the
kernel command line.
You can read more about the benefits of cgroupv2 at
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
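A quick way to check which hierarchy a booted system ended up with is to look at the filesystem type mounted on /sys/fs/cgroup; on a unified-hierarchy system it reports cgroup2fs:

% stat -fc %T /sys/fs/cgroup/
cgroup2fs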
Note that cgroup-tools (lssubsys + lscgroup etc.) don't work in the cgroup2/unified hierarchy yet (see #959022 for the details).

Configuration management

puppet's upstream doesn't provide packages for bullseye yet (see PA-3624 + MODULES-11060), and sadly neither v6 nor v7 made it into bullseye, so when using the packages from Debian you're still stuck with v5.5 (also see #950182). ansible is also available, and while it looked like only version 2.9.16 would make it into bullseye (see #984557 + #986213), actually version 2.10.8 made it in. chef was removed from Debian and is not available with bullseye (due to trademark issues).

Prometheus stack

Prometheus server was updated from v2.7.1 to v2.24.1, and the prometheus service by default applies some systemd hardening now. Also, all the usual exporters are still there, and bullseye also gained some new ones.

Virtualization

docker (v20.10.5), ganeti (v3.0.1), libvirt (v7.0.0), lxc (v4.0.6), openstack, qemu/kvm (v5.2), and xen (v4.14.1) are all still around, though what's new and noteworthy is that podman version 3.0.1 (a tool for managing OCI containers and pods) made it into bullseye. If you're using the docker packages from upstream, be aware that they still don't seem to understand Debian package version handling. The docker* packages will not be automatically considered for upgrade, as 5:20.10.6~3-0~debian-buster is considered newer than 5:20.10.6~3-0~debian-bullseye:
% apt-cache policy docker-ce
  docker-ce:
    Installed: 5:20.10.6~3-0~debian-buster
    Candidate: 5:20.10.6~3-0~debian-buster
    Version table:
   *** 5:20.10.6~3-0~debian-buster 100
          100 /var/lib/dpkg/status
       5:20.10.6~3-0~debian-bullseye 500
          500 https://download.docker.com/linux/debian bullseye/stable amd64 Packages
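One way to work around this is an apt pin, e.g. in /etc/apt/preferences.d/docker (a sketch based on the versions shown above; adjust to taste):

Package: docker-ce docker-ce-cli
Pin: version 5:20.10.6~3-0~debian-bullseye
Pin-Priority: 1001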
Vagrant is available in version 2.2.14, and the package from upstream works perfectly fine on bullseye as well. If you're relying on VirtualBox, be aware that upstream doesn't provide packages for bullseye yet, but the package from Debian/unstable (v6.1.22 as of 2021-05-27) works fine on bullseye (VirtualBox hasn't been shipped with stable releases for quite some time, due to lack of cooperation from upstream on security support for older releases, see #794466). If you rely on the virtualbox-guest-additions-iso and its shared folders support, you might be glad to hear that v6.1.22 made it into bullseye (see #988783), properly supporting more recent kernel versions like the one present in bullseye.

debuginfod

There's a new service debuginfod.debian.net (see debian-devel-announce and the Debian Wiki), which makes the debugging experience way smoother. You no longer need to download the debugging Debian packages (*-dbgsym/*-dbg), but instead can fetch them on demand, by exporting the following variables (before invoking gdb or alike):
% export DEBUGINFOD_PROGRESS=1    # for optional download progress reporting
% export DEBUGINFOD_URLS="https://debuginfod.debian.net"
BTW: if you can't rely on debuginfod (for whatever reason), I'd like to point your attention towards find-dbgsym-packages from the debian-goodies package.

Vim

Sadly Vim 8.2 once again introduces another bad default (hello, mouse behavior!). When incsearch is set, it also applies to :substitute. This makes it veeeeeeeeeery annoying when running something like :%s/\s\+$// to get rid of trailing whitespace characters, because if there are no matches it jumps to the beginning of the file and then back, sigh. To get the old behavior back, you can use this:
au CmdLineEnter : let s:incs = &incsearch | set noincsearch
au CmdLineLeave : let &incsearch = s:incs
rsync

rsync was updated from v3.1.3 to v3.2.3. It provides various checksum enhancements (see option --checksum-choice). We got new capabilities (hardlink-specials, atimes, optional protect-args, stop-at, no crtimes) and the addition of the zstd and lz4 compression algorithms. And we got several new options.

OpenSSH

OpenSSH was updated from v7.9p1 to 8.4p1, so if you're interested in all the changes, check out the release notes between those versions (8.0, 8.1, 8.2, 8.3 + 8.4). Let's highlight some notable new features.

Misc unsorted

30 August 2020

Enrico Zini: Miscellaneous news

A fascinating apparent paradox that kind of makes sense: Czech nudists reprimanded by police for not wearing face-masks. Besides being careful about masks when naked at the lake, be careful about your laptop being confused for a pizza: German nudist chases wild boar that stole laptop. Talking about pigs: Pig starts farm fire by excreting pedometer. Now that traveling is complicated, you might enjoy A Brief History of Children Sent Through the Mail, or learning about Narco-submarines. Meanwhile, in a time of intense biotechnological research, Scientists rename human genes to stop Microsoft Excel from misreading them as dates. Finally, for a good, cheaper, and more readily available alternative to a trip to the pharmacy, learn about Hypoalgesic effect of swearing.

16 July 2020

Enrico Zini: Build Qt5 cross-builder with raspbian sysroot: compiling with the sysroot

Whack-A-Mole machines from <https://commons.wikimedia.org/wiki/File:Whac-A-Mole_Cedar_Point.jpg> This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi. Now that I have a sysroot, I try to use it to build Qt5 with QtWebEngine. Nothing works straightforwardly with Qt5's build system, and I hit an endless series of significant blockers to try and work around.
Problem in wayland code

QtWayland's source currently does not compile:
../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp: In constructor 'QtWaylandClient::QWaylandBrcmEglWindow::QWaylandBrcmEglWindow(QWindow*)':
../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp:131:67: error: no matching function for call to 'QtWaylandClient::QWaylandWindow::QWaylandWindow(QWindow*&)'
     , m_eventQueue(wl_display_create_queue(mDisplay->wl_display()))
                                                                   ^
In file included from ../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/qwaylandwindow_p.h:1,
                 from ../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.h:43,
                 from ../../../hardwareintegration/client/brcm-egl/qwaylandbrcmeglwindow.cpp:40:
../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/../../../../../src/client/qwaylandwindow_p.h:97:5: note: candidate: 'QtWaylandClient::QWaylandWindow::QWaylandWindow(QWindow*, QtWaylandClient::QWaylandDisplay*)'
     QWaylandWindow(QWindow *window, QWaylandDisplay *display);
     ^~~~~~~~~~~~~~
../../../../include/QtWaylandClient/5.15.0/QtWaylandClient/private/../../../../../src/client/qwaylandwindow_p.h:97:5: note:   candidate expects 2 arguments, 1 provided
make[5]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/qdoc'
I am not trying to debug here. I understand that Wayland support is not a requirement, and I'm adding -skip wayland to Qt5's configure options. Next round.

nss not found

Qt5 embeds Chrome's sources. Chrome's sources require libnss3-dev to be available for both host and target architectures. Although I now have it installed both on the build system and in the sysroot, the pkg-config wrapper that Qt5 hooks into Chrome's sources fails to find it:
Command: /usr/bin/python2 /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py -s /home/build/sysroot/ -a arm -p /usr/bin/arm-linux-gnueabihf-pkg-config --system_libdir lib nss -v -lssl3
Returned 1.
stderr:
Package nss was not found in the pkg-config search path.
Perhaps you should add the directory containing `nss.pc'
to the PKG_CONFIG_PATH environment variable
No package 'nss' found
Traceback (most recent call last):
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 248, in <module>
    sys.exit(main())
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 143, in main
    prefix = GetPkgConfigPrefixToStrip(options, args)
  File "/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py", line 82, in GetPkgConfigPrefixToStrip
    "--variable=prefix"] + args, env=os.environ).decode('utf-8')
  File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['/usr/bin/arm-linux-gnueabihf-pkg-config', '--variable=prefix', 'nss']' returned non-zero exit status 1
See //build/config/linux/nss/BUILD.gn:15:3: whence it was called.
  pkg_config("system_nss_no_ssl_config")  
  ^---------------------------------------
See //crypto/BUILD.gn:218:25: which caused the file to be included.
    public_configs += [ "//build/config/linux/nss:system_nss_no_ssl_config" ]
                        ^--------------------------------------------------
Project ERROR: GN run error!
It's trying to look into $SYSROOT/usr/lib/pkgconfig, while it should be $SYSROOT/usr/lib/arm-linux-gnueabihf/pkgconfig. I worked around this with this patch to qtwebengine/src/3rdparty/chromium/build/config/linux/pkg-config.py:
--- pkg-config.py.orig  2020-07-16 11:46:21.005373002 +0200
+++ pkg-config.py   2020-07-16 11:46:02.605296967 +0200
@@ -61,6 +61,7 @@
   libdir = sysroot + '/usr/' + options.system_libdir + '/pkgconfig'
   libdir += ':' + sysroot + '/usr/share/pkgconfig'
+  libdir += ':' + sysroot + '/usr/lib/arm-linux-gnueabihf/pkgconfig'
   os.environ['PKG_CONFIG_LIBDIR'] = libdir
   return libdir
Next round.

g++ 8.3.0 Internal Compiler Error

Qt5's sources embed Chrome's sources that embed the skia library sources. One of the skia library sources, when cross-compiled to ARM with -O1 or -O2 with g++ 8.3.0, produces an Internal Compiler Error:
/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/skia/skcms/skcms.o.d -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -I../../3rdparty/chromium/third_party/skia/include/third_party/skcms -Igen -I../../3rdparty/chromium -w -std=c11 -mfp16-format=ieee -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -c ../../3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc -o obj/skia/skcms/skcms.o
during RTL pass: expand
In file included from ../../3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc:2053:
../../3rdparty/chromium/third_party/skia/third_party/skcms/src/Transform_inl.h: In function 'void baseline::exec_ops(const Op*, const void**, const char*, char*, int)':
../../3rdparty/chromium/third_party/skia/third_party/skcms/src/Transform_inl.h:766:13: internal compiler error: in convert_move, at expr.c:218
 static void exec_ops(const Op* ops, const void** args,
             ^~~~~~~~
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-8/README.Bugs> for instructions.
I reported the bug at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96206. Since this source compiles with -O0, I attempted to fix this by editing qtwebkit/src/3rdparty/chromium/build/config/compiler/BUILD.gn and replacing instances of -O1 and -O2 with -O0. Spoiler: wrong attempt. We'll see it in the next round.

Impossible constraint in asm

Qt5's sources embed Chrome's sources that embed the ffmpeg library sources. Even if ffmpeg's development libraries are present both in the host and in the target system, the build system insists on compiling and using the bundled version. Unfortunately, using -O0 breaks the build of ffmpeg:
/usr/bin/arm-linux-gnueabihf-gcc -MMD -MF obj/third_party/ffmpeg/ffmpeg_internal/opus.o.d -DHAVE_AV_CONFIG_H -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 -DPIC -DFFMPEG_CONFIGURATION=NULL -DCHROMIUM_NO_LOGGING -D_ISOC99_SOURCE -D_LARGEFILE_SOURCE -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DOPUS_FIXED_POINT -I../../3rdparty/chromium/third_party/ffmpeg/chromium/config/Chromium/linux/arm -I../../3rdparty/chromium/third_party/ffmpeg -I../../3rdparty/chromium/third_party/ffmpeg/compat/atomics/gcc -Igen -I../../3rdparty/chromium -I../../3rdparty/chromium/third_party/opus/src/include -fPIC -Wno-deprecated-declarations -fomit-frame-pointer -w -std=c99 -pthread -fno-math-errno -fno-signed-zeros -fno-tree-vectorize -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -g0 -fvisibility=hidden -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O0 -fno-ident -fdata-sections -ffunction-sections -std=gnu11 --sysroot=../../../../../../sysroot/ -c ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c -o obj/third_party/ffmpeg/ffmpeg_internal/opus.o
In file included from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/intmath.h:30,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/common.h:106,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/avutil.h:296,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavutil/audio_fifo.h:30,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.h:28,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus_celt.h:29,
                 from ../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c:32:
../../3rdparty/chromium/third_party/ffmpeg/libavcodec/opus.c: In function 'ff_celt_quant_bands':
../../3rdparty/chromium/third_party/ffmpeg/libavutil/arm/intmath.h:77:5: error: impossible constraint in 'asm'
     __asm__ ("usat %0, %2, %1" : "=r"(x) : "r"(a), "i"(p));
     ^~~~~~~
The same source compiles when using -O2 instead of -O0. I worked around this by undoing the previous change, and limiting -O0 to just the source that causes the Internal Compiler Error. I edited qtwebengine/src/3rdparty/chromium/third_party/skia/third_party/skcms/skcms.cc to prepend:
#pragma GCC push_options
#pragma GCC optimize ("O0")
and append:
#pragma GCC pop_options
Next round.

Missing build-deps for i386 code

Qt5's sources embed Chrome's sources that embed the V8 library sources. For some reason, torque, which is part of V8, wants to build some of its sources into 32-bit code with -m32, and I did not have i386 cross-compilation libraries installed:
/usr/bin/g++ -MMD -MF v8_snapshot/obj/v8/torque_base/csa-generator.o.d -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DV8_TYPED_ARRAY_MAX_SIZE_IN_HEAP=64 -DENABLE_MINOR_MC -DV8_INTL_SUPPORT -DV8_CONCURRENT_MARKING -DV8_ENABLE_LAZY_SOURCE_POSITIONS -DV8_EMBEDDED_BUILTINS -DV8_SHARED_RO_HEAP -DV8_WIN64_UNWINDING_INFO -DV8_ENABLE_REGEXP_INTERPRETER_THREADED_DISPATCH -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -DV8_TARGET_ARCH_ARM -DCAN_USE_ARMV7_INSTRUCTIONS -DCAN_USE_VFP3_INSTRUCTIONS -DUSE_EABI_HARDFLOAT=1 -DV8_HAVE_TARGET_OS -DV8_TARGET_OS_LINUX -DDISABLE_UNTRUSTED_CODE_MITIGATIONS -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -Iv8_snapshot/gen -I../../3rdparty/chromium -I../../3rdparty/chromium/v8 -Iv8_snapshot/gen/v8 -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -m32 -msse2 -mfpmath=sse -mmmx -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-unused-local-typedefs -Wno-maybe-uninitialized -Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -fno-omit-frame-pointer -g0 -fvisibility=hidden -Wno-strict-overflow -Wno-return-type -O3 -fno-ident -fdata-sections -ffunction-sections -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fvisibility-inlines-hidden -fexceptions -frtti -c ../../3rdparty/chromium/v8/src/torque/csa-generator.cc -o v8_snapshot/obj/v8/torque_base/csa-generator.o
In file included from ../../3rdparty/chromium/v8/src/torque/csa-generator.h:8,
                 from ../../3rdparty/chromium/v8/src/torque/csa-generator.cc:5:
/usr/include/c++/8/iostream:38:10: fatal error: bits/c++config.h: No such file or directory
 #include <bits/c++config.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
New build dependencies needed:
apt install lib32stdc++-8-dev
apt install libc6-dev-i386
dpkg --add-architecture i386
apt install linux-libc-dev:i386
Next round.

OpenGL build issues

The next bump is a set of OpenGL-related compiler issues:
/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/QtWebEngineCore/gl_ozone_glx_qt.o.d -DCHROMIUM_VERSION=\"80.0.3987.163\" -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DQT_NO_LINKED_LIST -DQT_NO_KEYWORDS -DQT_USE_QSTRINGBUILDER -DQ_FORWARD_DECLARE_OBJC_CLASS=QT_FORWARD_DECLARE_CLASS -DQTWEBENGINECORE_VERSION_STR=\"5.15.0\" -DQTWEBENGINEPROCESS_NAME=\"QtWebEngineProcess\" -DBUILDING_CHROMIUM -DQTWEBENGINE_EMBEDDED_SWITCHES -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_QUICK_LIB -DQT_GUI_LIB -DQT_QMLMODELS_LIB -DQT_WEBCHANNEL_LIB -DQT_QML_LIB -DQT_NETWORK_LIB -DQT_POSITIONING_LIB -DQT_CORE_LIB -DQT_WEBENGINECOREHEADERS_LIB -DVK_NO_PROTOTYPES -DGL_GLEXT_PROTOTYPES -DUSE_GLX -DUSE_EGL -DGOOGLE_PROTOBUF_NO_RTTI -DGOOGLE_PROTOBUF_NO_STATIC_INITIALIZER -DHAVE_PTHREAD -DU_USING_ICU_NAMESPACE=0 -DU_ENABLE_DYLOAD=0 -DUSE_CHROMIUM_ICU=1 -DU_STATIC_IMPLEMENTATION -DICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE -DUCHAR_TYPE=uint16_t -DWEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0 -DWEBRTC_CHROMIUM_BUILD -DWEBRTC_POSIX -DWEBRTC_LINUX -DABSL_ALLOCATOR_NOTHROW=1 -DWEBRTC_USE_BUILTIN_ISAC_FIX=1 -DWEBRTC_USE_BUILTIN_ISAC_FLOAT=0 -DHAVE_SCTP -DNO_MAIN_THREAD_WRAPPING -DSK_HAS_PNG_LIBRARY -DSK_HAS_WEBP_LIBRARY -DSK_USER_CONFIG_HEADER=\"../../skia/config/SkUserConfig.h\" -DSK_GL -DSK_HAS_JPEG_LIBRARY -DSK_USE_LIBGIFCODEC -DSK_VULKAN_HEADER=\"../../skia/config/SkVulkanConfig.h\" -DSK_VULKAN=1 -DSK_SUPPORT_GPU=1 -DSK_GPU_WORKAROUNDS_HEADER=\"gpu/config/gpu_driver_bug_workaround_autogen.h\" -DVK_NO_PROTOTYPES -DLEVELDB_PLATFORM_CHROMIUM=1 -DLEVELDB_PLATFORM_CHROMIUM=1 -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -I../../3rdparty/chromium/skia/config -I../../3rdparty/chromium/third_party -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/skia/include/core -Igen -I../../3rdparty/chromium -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/api -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include 
-I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include/QtWebChannel -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtNetwork -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include/QtPositioning -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0/QtWebEngineCore -I.moc -I/home/build/sysroot/opt/vc/include -I/home/build/sysroot/opt/vc/include/interface/vcos/pthreads -I/home/build/sysroot/opt/vc/include/interface/vmcs_host/linux -Igen/.moc -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/mkspecs/devices/linux-rasp-pi2-g++ -Igen -Igen -I../../3rdparty/chromium/third_party/libyuv/include -Igen -I../../3rdparty/chromium/third_party/jsoncpp/source/include -I../../3rdparty/chromium/third_party/jsoncpp/generated -Igen -Igen -I../../3rdparty/chromium/third_party/khronos -I../../3rdparty/chromium/gpu -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/perfetto/include -Igen/third_party/perfetto/build_config -Igen -Igen -Igen/third_party/dawn/src/include -I../../3rdparty/chromium/third_party/dawn/src/include -Igen -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/protobuf/src -Igen/protoc_out -I../../3rdparty/chromium/third_party/protobuf/src -I../../3rdparty/chromium/third_party/ced/src -I../../3rdparty/chromium/third_party/icu/source/common -I../../3rdparty/chromium/third_party/icu/source/i18n -I../../3rdparty/chromium/third_party/webrtc_overrides -I../../3rdparty/chromium/third_party/webrtc -Igen/third_party/webrtc -I../../3rdparty/chromium/third_party/abseil-cpp -I../../3rdparty/chromium/third_party/skia -I../../3rdparty/chromium/third_party/libgifcodec -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/skia/third_party/vulkanmemoryallocator -I../../3rdparty/chromium/third_party/vulkan/include -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -I../../3rdparty/chromium/third_party/crashpad/crashpad -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_mac -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/linux -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_win -I../../3rdparty/chromium/third_party/libwebm/source -I../../3rdparty/chromium/third_party/leveldatabase -I../../3rdparty/chromium/third_party/leveldatabase/src -I../../3rdparty/chromium/third_party/leveldatabase/src/include -I../../3rdparty/chromium/v8/include -Igen/v8/include -I../../3rdparty/chromium/third_party/mesa_headers -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized 
-Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -fno-exceptions -Wall -Wextra -D_REENTRANT -I/home/build/sysroot/usr/include/nss -I/home/build/sysroot/usr/include/nspr -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -std=gnu++1y -fno-exceptions -Wall -Wextra -D_REENTRANT -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated-declarations -c /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/ozone/gl_ozone_glx_qt.cpp -o obj/QtWebEngineCore/gl_ozone_glx_qt.o
In file included from ../../3rdparty/chromium/ui/gl/gl_bindings.h:497,
                 from ../../3rdparty/chromium/ui/gl/gl_gl_api_implementation.h:12,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/ozone/gl_ozone_glx_qt.cpp:49:
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:227:5: error: 'EGLSetBlobFuncANDROID' has not been declared
     EGLSetBlobFuncANDROID set,
     ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:228:5: error: 'EGLGetBlobFuncANDROID' has not been declared
     EGLGetBlobFuncANDROID get);
     ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:571:46: error: 'EGLSetBlobFuncANDROID' has not been declared
                                              EGLSetBlobFuncANDROID set,
                                              ^~~~~~~~~~~~~~~~~~~~~
../../3rdparty/chromium/ui/gl/gl_bindings_autogen_egl.h:572:46: error: 'EGLGetBlobFuncANDROID' has not been declared
                                              EGLGetBlobFuncANDROID get) = 0;
                                              ^~~~~~~~~~~~~~~~~~~~~
cc1plus: warning: unrecognized command line option '-Wno-deprecated-copy'
/usr/bin/arm-linux-gnueabihf-g++ -MMD -MF obj/QtWebEngineCore/display_gl_output_surface.o.d -DCHROMIUM_VERSION=\"80.0.3987.163\" -DUSE_UDEV -DUSE_AURA=1 -DUSE_NSS_CERTS=1 -DUSE_OZONE=1 -DOFFICIAL_BUILD -DTOOLKIT_QT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -DNO_UNWIND_TABLES -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -DCR_SYSROOT_HASH=76e6068f9f6954e2ab1ff98ce5fa236d3d85bcbd -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -DQT_NO_LINKED_LIST -DQT_NO_KEYWORDS -DQT_USE_QSTRINGBUILDER -DQ_FORWARD_DECLARE_OBJC_CLASS=QT_FORWARD_DECLARE_CLASS -DQTWEBENGINECORE_VERSION_STR=\"5.15.0\" -DQTWEBENGINEPROCESS_NAME=\"QtWebEngineProcess\" -DBUILDING_CHROMIUM -DQTWEBENGINE_EMBEDDED_SWITCHES -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_QUICK_LIB -DQT_GUI_LIB -DQT_QMLMODELS_LIB -DQT_WEBCHANNEL_LIB -DQT_QML_LIB -DQT_NETWORK_LIB -DQT_POSITIONING_LIB -DQT_CORE_LIB -DQT_WEBENGINECOREHEADERS_LIB -DVK_NO_PROTOTYPES -DGL_GLEXT_PROTOTYPES -DUSE_GLX -DUSE_EGL -DGOOGLE_PROTOBUF_NO_RTTI -DGOOGLE_PROTOBUF_NO_STATIC_INITIALIZER -DHAVE_PTHREAD -DU_USING_ICU_NAMESPACE=0 -DU_ENABLE_DYLOAD=0 -DUSE_CHROMIUM_ICU=1 -DU_STATIC_IMPLEMENTATION -DICU_UTIL_DATA_IMPL=ICU_UTIL_DATA_FILE -DUCHAR_TYPE=uint16_t -DWEBRTC_NON_STATIC_TRACE_EVENT_HANDLERS=0 -DWEBRTC_CHROMIUM_BUILD -DWEBRTC_POSIX -DWEBRTC_LINUX -DABSL_ALLOCATOR_NOTHROW=1 -DWEBRTC_USE_BUILTIN_ISAC_FIX=1 -DWEBRTC_USE_BUILTIN_ISAC_FLOAT=0 -DHAVE_SCTP -DNO_MAIN_THREAD_WRAPPING -DSK_HAS_PNG_LIBRARY -DSK_HAS_WEBP_LIBRARY -DSK_USER_CONFIG_HEADER=\"../../skia/config/SkUserConfig.h\" -DSK_GL -DSK_HAS_JPEG_LIBRARY -DSK_USE_LIBGIFCODEC -DSK_VULKAN_HEADER=\"../../skia/config/SkVulkanConfig.h\" -DSK_VULKAN=1 -DSK_SUPPORT_GPU=1 -DSK_GPU_WORKAROUNDS_HEADER=\"gpu/config/gpu_driver_bug_workaround_autogen.h\" -DVK_NO_PROTOTYPES -DLEVELDB_PLATFORM_CHROMIUM=1 -DLEVELDB_PLATFORM_CHROMIUM=1 -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_DEPRECATION_WARNINGS -I../../3rdparty/chromium/skia/config -I../../3rdparty/chromium/third_party -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/skia/include/core -Igen -I../../3rdparty/chromium -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/api -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick/5.15.0/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui/5.15.0/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQuick -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtGui -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels/5.15.0/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml/5.15.0/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore/5.15.0/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQmlModels -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include 
-I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebchannel/include/QtWebChannel -I/home/build/armhf/qt-everywhere-src-5.15.0/qtdeclarative/include/QtQml -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtNetwork -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtlocation/include/QtPositioning -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/include/QtCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0 -I/home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/include/QtWebEngineCore/5.15.0/QtWebEngineCore -I.moc -I/home/build/sysroot/opt/vc/include -I/home/build/sysroot/opt/vc/include/interface/vcos/pthreads -I/home/build/sysroot/opt/vc/include/interface/vmcs_host/linux -Igen/.moc -I/home/build/armhf/qt-everywhere-src-5.15.0/qtbase/mkspecs/devices/linux-rasp-pi2-g++ -Igen -Igen -I../../3rdparty/chromium/third_party/libyuv/include -Igen -I../../3rdparty/chromium/third_party/jsoncpp/source/include -I../../3rdparty/chromium/third_party/jsoncpp/generated -Igen -Igen -I../../3rdparty/chromium/third_party/khronos -I../../3rdparty/chromium/gpu -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/perfetto/include -Igen/third_party/perfetto/build_config -Igen -Igen -Igen/third_party/dawn/src/include -I../../3rdparty/chromium/third_party/dawn/src/include -Igen -I../../3rdparty/chromium/third_party/boringssl/src/include -I../../3rdparty/chromium/third_party/protobuf/src -Igen/protoc_out -I../../3rdparty/chromium/third_party/protobuf/src -I../../3rdparty/chromium/third_party/ced/src -I../../3rdparty/chromium/third_party/icu/source/common -I../../3rdparty/chromium/third_party/icu/source/i18n -I../../3rdparty/chromium/third_party/webrtc_overrides -I../../3rdparty/chromium/third_party/webrtc -Igen/third_party/webrtc -I../../3rdparty/chromium/third_party/abseil-cpp -I../../3rdparty/chromium/third_party/skia -I../../3rdparty/chromium/third_party/libgifcodec -I../../3rdparty/chromium/third_party/vulkan/include -I../../3rdparty/chromium/third_party/skia/third_party/vulkanmemoryallocator -I../../3rdparty/chromium/third_party/vulkan/include -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -Igen/third_party/perfetto -I../../3rdparty/chromium/third_party/crashpad/crashpad -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_mac -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/linux -I../../3rdparty/chromium/third_party/crashpad/crashpad/compat/non_win -I../../3rdparty/chromium/third_party/libwebm/source -I../../3rdparty/chromium/third_party/leveldatabase -I../../3rdparty/chromium/third_party/leveldatabase/src -I../../3rdparty/chromium/third_party/leveldatabase/src/include -I../../3rdparty/chromium/v8/include -Igen/v8/include -I../../3rdparty/chromium/third_party/mesa_headers -fno-strict-aliasing --param=ssp-buffer-size=4 -fstack-protector -fno-unwind-tables -fno-asynchronous-unwind-tables -fPIC -pipe -pthread -march=armv7-a -mfloat-abi=hard -mtune=generic-armv7-a -mfpu=vfpv3-d16 -mthumb -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -Wno-psabi -Wno-unused-local-typedefs -Wno-maybe-uninitialized 
-Wno-deprecated-declarations -fno-delete-null-pointer-checks -Wno-comments -Wno-packed-not-aligned -Wno-dangling-else -Wno-missing-field-initializers -Wno-unused-parameter -O2 -fno-ident -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g0 -fvisibility=hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -fno-exceptions -Wall -Wextra -D_REENTRANT -I/home/build/sysroot/usr/include/nss -I/home/build/sysroot/usr/include/nspr -std=gnu++14 -Wno-narrowing -Wno-class-memaccess -Wno-attributes -Wno-class-memaccess -Wno-subobject-linkage -Wno-invalid-offsetof -Wno-return-type -Wno-deprecated-copy -fno-exceptions -fno-rtti --sysroot=../../../../../../sysroot/ -fvisibility-inlines-hidden -g -O2 -fdebug-prefix-map=/home/build/armhf/qt-everywhere-src-5.15.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -std=gnu++1y -fno-exceptions -Wall -Wextra -D_REENTRANT -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated-declarations -c /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp -o obj/QtWebEngineCore/display_gl_output_surface.o
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:78: warning: "GL_FALSE" redefined
 #define GL_FALSE                          (GLboolean)0
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:85: note: this is the location of the previous definition
 #define GL_FALSE                          0
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:79: warning: "GL_TRUE" redefined
 #define GL_TRUE                           (GLboolean)1
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:86: note: this is the location of the previous definition
 #define GL_TRUE                           1
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_interface.h:8,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/client_transfer_cache.h:15,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:28,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
/home/build/sysroot/opt/vc/include/GLES2/gl2.h:600:37: error: conflicting declaration of C function 'void glShaderSource(GLuint, GLsizei, const GLchar**, const GLint*)'
 GL_APICALL void         GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar** string, const GLint* length);
                                     ^~~~~~~~~~~~~~
In file included from ../../3rdparty/chromium/gpu/command_buffer/client/client_context_state.h:10,
                 from ../../3rdparty/chromium/gpu/command_buffer/client/gles2_implementation.h:27,
                 from /home/build/armhf/qt-everywhere-src-5.15.0/qtwebengine/src/core/compositor/display_gl_output_surface.cpp:47:
../../3rdparty/chromium/third_party/khronos/GLES3/gl3.h:624:29: note: previous declaration 'void glShaderSource(GLuint, GLsizei, const GLchar* const*, const GLint*)'
 GL_APICALL void GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar *const*string, const GLint *length);
                             ^~~~~~~~~~~~~~
cc1plus: warning: unrecognized command line option '-Wno-deprecated-copy'
I'm out of the allocated hour budget, and I'll stop here for now. Building Qt5 has provided some of the most nightmarish work time of my entire professional life. If my daily job became dealing with this kind of insanity, I would seriously invest in a change of career.

Update: Andreas Gruber wrote:
Long story short, a fast solution for the issue with EGLSetBlobFuncANDROID is to remove libraspberrypi-dev from your sysroot and do a full rebuild. There will be some changes to the configure results, so please review them - if they are relevant for you - before proceeding with your work.
And thanks to Andreas, the story can continue...

17 November 2017

Renata Scheibler: Hello, world!

[Photo: Renata's profile picture - a white woman in profile, touching her chin with her left hand's fingertips]

About me

Hello, world! For those who are meeting me for the first time, I am a 31 year old History teacher from Porto Alegre, Brazil. Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here. Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:
Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.
There were many good projects to choose from on this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django. I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn, it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning.

I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals. In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to try and learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network). I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers to be used by the public back then). It was about 2002 when I learned how to create HTML sites by studying the source code of pages I had saved to read offline, in one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home.

So, how come it is that only now, 14 years later, I am trying to get into tech? Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and had been working in the field for almost a decade.) I ended up going to study... teacher training in History as an undergrad course. A lot has happened since then. I took the exam to become a public school teacher, and more than two years passed without my being called to work. I spent 3 years in odd jobs that paid barely enough to pay rent (and, sometimes, not even that). Since IT is the new thing and all jobs are in IT, finally, finally it seemed okay for me to take that Vocational School training in a public school - and so I did. I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it easily happening with people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher. There is a lot more... actually, there is so much more to this story, but I think I have told enough for now.
Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

12 October 2017

Joachim Breitner: Isabelle functions: Always total, sometimes undefined

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just Isabelle1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians. Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then, by definition of what it means to be a function in HOL, whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total. In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like
spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))
Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: The former raises an exception ("incomplete pattern match"), the latter does not terminate. Confusingly, though, both expressions have type Bool. Because every function is total, this confusion cannot arise in Isabelle: If an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq. Did you notice the emphasis I put on the word "is" here, and how I deliberately did not write "evaluates to" or "returns"? This is because of another big source of confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word "function" from mathematics and repurposed it2. But the word "function", in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind. What is the difference?
  • A function a ⇒ b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.
For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:
definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"
This assigns a real number to every sequence, but it does not compute it in any useful sense. From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:
fun plus :: "nat ⇒ nat ⇒ nat" where
   "plus 0       m = m"
   "plus (Suc n) m = Suc (plus n m)"
To a functional programmer, this reads
plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.
which is clearly a description of a computation. But to Isabelle, the above reads
plus is a binary function on natural numbers, and it satisfies the following two equations:
And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus. One interesting consequence of this is that different specifications can lead to the same function. In fact, if we were to define plus' by recursing on the second argument, we would obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).

Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises before the function is defined: The fun command can only construct plus in a way that the equations hold if it passes a termination check, very much like Fixpoint in Coq. But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a termination proof of the function exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument! For example, a function specification that is tail-recursive can be turned into a function even without a termination proof: The following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):
partial_function (tailrec)
  fixpoint :: "('a   'a)   'a   'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"
We can work with this definition just fine. For example, if we instantiate f with (λx. x - 1), we can prove that it will always return 0:
lemma "fixpoint (  n . n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)
Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof; here, one that calculates the Collatz sequence:
partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns) })"
Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function returns a list only if the Collatz sequence eventually reaches 1. I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n+1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?3 We will come back to these questions after a little detour.

HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type. Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions. This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely
undefined :: 'a
The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as "arbitrary" or, even better, "unknown". Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not "an extra value besides the usual ones". It is simply some value of that type, as demonstrated in the following lemma:
lemma "undefined = True   undefined = False" by auto
In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:
lemma "undefined = ()" by auto
It is very handy to be able to produce an expression of any type, as we will see in the following.

Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle's equivalent of Haskell's partial fromJust function:
fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"
This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a, we just do not know which one it is, as the specification does not address that. So fromSome None behaves just like undefined above, i.e. we can prove
lemma "fromSome None = False   fromSome None = True" by auto
Here is a small exercise for you: Can you come up with an explanation for the following lemma?
fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"
lemma "constOrId = ( _.True)   constOrId = ( x. x)"
  by (metis (full_types) constOrId.simps)
Overall, this behavior makes sense if we remember that function definitions in Isabelle are not really definitions, but rather specifications. And a partial function definition is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified

Let us return to the puzzle posed by fixpoint above. Clearly, the function, seen as a functional program, is not total: When passed the argument (λn. n + 1) or (λb. ¬ b), it will loop forever trying to find a fixed point. But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined. Or more concretely: The specification of the fixpoint function states that fixpoint (λb. ¬ b) True = fixpoint (λb. ¬ b) False has to hold, but it does not specify which particular value (True or False) it should denote; any is fine.

Not all function specifications are ok

At this point you might wonder: Can I just specify any equations for a function f and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function bogus :: () ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution. We can actually prove that such a function cannot exist:
lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp
(Of course, not_bogus () = not_bogus () is just fine.)

You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider "partial". Given a function, can we prove that it is not partial in that sense? Unfortunately, but unavoidably, no: Since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that "a function result is not specified". Here is an example that demonstrates this: Two partial functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:
fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
  "total False = ()"
lemma "partial1 = total   partial2 = total" by auto
If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, ⊥ ≠ () and partial2 ≠ total. We have done that to verify some of HLint's equations.

You can still compute with Isabelle functions

I hope by this point I have not scared away anyone who wants to use Isabelle for functional programming; in fact, you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to compute that function quite like you would in Coq or Haskell. Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported to OCaml, Haskell, Scala or Standard ML. See Concon for a conference management system with confidentiality verified in Isabelle. While these usually are the equations you defined the function with, they don't have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions to performant ones. Like with code extraction from Coq to, say, Haskell, the adequacy of the translation rests on a moral reasoning foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq. There is currently ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.

Conclusion

We have seen how, in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq

This section is speculative, and an invitation for discussion. Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop). Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform4, and it would disallow Empty_set. If so, then it would be sound to add the following axiom
Axiom undefined : forall (a : Set), a.
wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality. With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case _ ⇒ undefined. Would it help with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tail-recursive function without a termination check?

  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.
  3. Let me know if you find such an n. Besides n = 0.
  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.

21 June 2017

Joey Hess: DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help. [Photo: house with 4 solar panels on roof] I had three goals for this install:
  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.
My main concerns were: would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself? I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed them. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops). My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted. Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today. Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up. [Photo: roof next to the ground with a couple of cinderblock steps] The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I MacGyvered a solution, attaching temporary clamps before bringing a panel up, which stopped it sliding down while I was attaching it. [Photo: clamp temporarily attached to side of panel] I also finished the outside wiring today, including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing. [Photo: some pipe with 5 wires running through it, attached to the side of the roof] While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it. Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...

Vincent Bernat: IPv4 route lookup on Linux

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).
During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address, through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, ...), this function should quickly provide a decision, such as delivering the datagram locally, forwarding it to a supplied next hop, or dropping it. Since 2.6.39, Linux stores routes in a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained, but it has been removed1 in Linux 3.6.

Route lookup in a trie

Looking up a route in a routing table means finding the most specific prefix matching the requested destination. Let's assume the following routing table:
$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7  dev out3 weight 1
        nexthop via 203.0.113.9  dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1
Here are some examples of lookups and the associated results:
Destination IP Next hop
192.0.2.49 203.0.113.3 via out1
192.0.2.50 203.0.113.3 via out1
192.0.2.51 203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200 203.0.113.5 via out2
A common structure for route lookup is the trie, a tree structure where each node has its parent as a prefix.

Lookup with a simple trie

The following trie encodes the previous routing table:

[Figure: simple routing trie]

For each node, the prefix is known by its path from the root node, and the prefix length is the current depth. A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However, for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry. Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (since the maximum depth is capped at 32). Quagga is an example of routing software still using this simple approach.
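To make the walk concrete, here is a minimal Python sketch of such a bit-per-level trie with backtracking. It is an illustration of the idea only, not the kernel's data structure, and all names are made up:

import ipaddress

class Node:
    def __init__(self):
        self.children = [None, None]  # subtrees for bit 0 and bit 1
        self.entry = None             # next-hop info if a route ends here

def insert(root, prefix, plen, entry):
    node = root
    for depth in range(plen):
        bit = (prefix >> (31 - depth)) & 1
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
    node.entry = entry

def lookup(root, addr):
    # Remember the last routing entry seen along the path; this is
    # equivalent to backtracking when we fall off the trie.
    node, best = root, root.entry
    for depth in range(32):
        node = node.children[(addr >> (31 - depth)) & 1]
        if node is None:
            break
        if node.entry is not None:
            best = node.entry
    return best

def ip(s):
    return int(ipaddress.IPv4Address(s))

root = Node()
insert(root, ip("0.0.0.0"), 0, "via 203.0.113.5 dev out2")
insert(root, ip("192.0.2.0"), 25, "ECMP via out3/out4")
insert(root, ip("192.0.2.50"), 32, "via 203.0.113.3 dev out1")
print(lookup(root, ip("192.0.2.50")))  # via 203.0.113.3 dev out1
print(lookup(root, ip("192.0.2.51")))  # falls back to 192.0.2.0/25

Keeping the "best match so far" plays the role of backtracking: by the time the walk ends, the most specific entry seen on the path is the answer.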

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons, and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

[Figure: Patricia trie]

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry match the input IP address. If not, we must act as if the entry wasn't found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:

[Figure: lookup in a Patricia trie]

The reduction of the average depth of the tree compensates for the necessity to handle those false positives. The insertion and deletion of routing entries is still easy enough. Many routing systems use Patricia trees.

Lookup with a level-compressed trie

In addition to path compression, level compression2 detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. This node will handle k input bits instead of just one. For example, here is a level-compressed version of our previous trie:

[Figure: level-compressed trie]

Such a trie is called an LC-trie or LPC-trie and offers higher lookup performance compared to a radix tree. A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. Those values are not tunable. Insertion and deletion become more complex, but lookup times are also improved.
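The payoff of a multi-bit node is that the per-bit descent collapses into a single vector index. A rough Python illustration; the helper names and the simplified threshold checks are mine, not the kernel's (the real logic in net/ipv4/fib_trie.c is more involved, since inflating a node rearranges its children):

def lc_child_index(addr, depth, k):
    # Take the k address bits starting at 'depth' (bit 0 = most
    # significant) and use them directly as a child-vector index.
    return (addr >> (32 - depth - k)) & ((1 << k) - 1)

# Simplified sketch of the inflate/halve thresholds described above.
def should_inflate(nonempty, k):
    # Would the fill ratio stay above 50% with one more bit handled,
    # i.e. with 2**(k+1) child slots?
    return nonempty / 2 ** (k + 1) > 0.50

def should_halve(nonempty, k):
    return nonempty / 2 ** k < 0.25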

Implementation in Linux

The implementation for IPv4 in Linux has existed since 2.6.13 (commit 19baf839ff4a) and has been enabled by default since 2.6.39 (commit 3630b7c050d9). Here is the representation of our example routing table in memory3:

[Figure: memory representation of a trie]

Several structures are involved. The trie can be retrieved through /proc/net/fib_trie:
$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]
For internal nodes, the numbers after the prefix are:
  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.
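The text format above is easy to tally from userspace. A small hedged Python sketch, assuming the +--/|-- markers shown above:

from collections import Counter

# Count internal nodes ("+--") and leaves ("|--") in the trie dump.
counts = Counter()
with open("/proc/net/fib_trie") as f:
    for line in f:
        token = line.strip().split()[0] if line.strip() else ""
        if token == "+--":
            counts["internal"] += 1
        elif token == "|--":
            counts["leaf"] += 1

print(dict(counts))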
Moreover, if the kernel was compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat4:
$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]
When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector. The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.

[Figure: internal nodes and null pointers]

Performance

So how performant is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark5 the fib_lookup() function:

[Figure: maximum depth and lookup time]

The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast. When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns (a minimal 64-byte packet is 512 bits, and 512 bits ÷ 10 Gbps ≈ 51 ns). Since this is also the time needed for the route lookup alone in some cases, we wouldn't be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores. The measurements are done with a Linux kernel 4.11 from Debian unstable. I have gathered performance metrics across kernel versions in "Performance progression of IPv4 route lookup on Linux". Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area, since you can insert 2 million routes in less than 10 seconds:

[Figure: insertion time]

Memory usage

The memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn't account for the fib_info structures, but you should only have a handful of them (one for each possible next hop). As you can see on the graph below, memory use is linear with the number of routes inserted, whatever the shape of the routes.

[Figure: memory usage]

The results are quite good: about 2 million routes can be stored in only 256 MiB!

Routing rules

Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:
$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
Linux will first look for a match in the local table. If it doesn't find one, it will look in the main table and, as a last resort, in the default table.
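Conceptually, the rule walk looks like this; a toy illustration reusing the lookup() sketch from earlier, not kernel code:

def rule_walk(addr, local, main, default):
    # Consult each table in rule priority order; fall through when a
    # table has no matching prefix (the toy lookup() returns None then).
    for table in (local, main, default):
        entry = lookup(table, addr)
        if entry is not None:
            return entry
    return None  # no route to host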

Builtin tables

The local table contains routes for local delivery:
$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55
This table is populated automatically by the kernel when addresses are configured. Let's look at the last three lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:
  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.
When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:
$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms
--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
The main table usually contains all the other routes:
$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100
The default route has been configured by some DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface. The default table is empty and has little use. It has been kept around since the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using classes in Linux 2.1.15.

Performance

Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if it is not empty). Moreover, since Linux 3.0 (commit f4530fa574df), without specific rules, there is no performance hit from enabling support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:

[Figure: routing rules impact on performance]

For some reason, the relation is linear when the number of rules is between 1 and 100, but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns). A common use of rules is to create virtual routers: interfaces are segregated into domains, and when a datagram enters through an interface from domain A, it should use routing table A:
# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole
The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we add a blackhole default with a high metric to not override a regular default route:
# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20
To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:
# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default
The special l3mdev-table rule was automatically added when configuring the first VRF interface. This rule selects the routing table associated with the VRF owning the input (or output) interface. VRF was introduced in Linux 4.3 (commit 193125dbd8eb), performance was greatly enhanced in Linux 4.8 (commit 7889681f4a6c), and the special routing rule was also introduced in Linux 4.8 (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.

Conclusion

The takeaways from this article are:
  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.

  1. The routing cache was subject to reasonably easy-to-launch denial of service attacks. It was also believed to be inefficient for high volume sites like Google, but I have first-hand experience that this was not the case for moderately high volume sites.
  2. "IP-address lookup using LC-tries", IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999.
  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable.
  4. One leaf can contain several routes (struct fib_alias is a list). The number of prefixes can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all the three internal nodes are handling 2 bits.
  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC.
  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today's kernel tree and explains the usage of the "default class".

11 April 2017

Matthew Garrett: Disabling SSL validation in binary apps

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth
type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by
default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing 43e820 and 43e82c to 24030000 instead of 24030001 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
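If you prefer to script the patch instead of hex-editing, something along these lines works. This is a hedged sketch: the offsets below are the virtual addresses from the disassembly, which for a real ELF you would first translate into file offsets via the program headers (pyelftools can do that), and the byte order depends on whether the target is big- or little-endian MIPS:

import struct

# was 0x24030001 (li v1,1); 0x24030000 encodes li v1,0
PATCHES = {0x43e820: 0x24030000, 0x43e82c: 0x24030000}

with open("target", "r+b") as f:
    for offset, word in PATCHES.items():
        f.seek(offset)  # assumes file offset == virtual address
        f.write(struct.pack(">I", word))  # ">I" big-endian; "<I" if LE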

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


20 October 2016

Kees Cook: CVE-2016-5195

My prior post showed my research from earlier in the year, at the 2016 Linux Security Summit, on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren't many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone's machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with the knowledge that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known). So, here are the graphs updated for the 668 CVEs known today:

[Graphs: updated CVE lifetime plots]

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

19 October 2016

Kees Cook: Security bug lifetime

In several of my recent presentations, I've discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team's CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn't have to re-do this for the 557 kernel CVEs since 2011. As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line "Patches_linux:", I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:
Patches_linux:
 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a "-" is just short-hand for the start of Linux git history. Then for each SHA, I queried git to find its corresponding release, made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is "Critical", orange is "High", blue is "Medium", and black is "Low":

[Graph: CVE lifetimes 2011-2016]

And here it is zoomed in to just Critical and High:

[Graph: Critical and High CVE lifetimes 2011-2016]

The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it's labeled with kernel releases (which are pretty regular). Numerically, this comes out to roughly a 5 year lifetime again, so not much has changed from Jon's 2010 analysis. While we're getting better at fixing bugs, we're also adding more bugs. And for many devices that have been built on a given kernel version, there haven't been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they're likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888. (Edit: see my updated graphs that include CVE-2016-5195.)
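The extraction step is easy to reproduce. A hedged Python sketch, assuming a checkout of the tracker (file layout per its README) and a linux git tree for the release mapping:

import re
import subprocess

def release_of(sha, repo="linux"):
    # First tag containing the commit, e.g. "v4.6-rc1~9^2~3" -> "v4.6-rc1".
    out = subprocess.check_output(
        ["git", "-C", repo, "describe", "--contains", sha], text=True)
    return out.strip().split("~")[0].split("^")[0]

def lifetimes(cve_file):
    for line in open(cve_file):
        m = re.match(r"\s*break-fix:\s+(\S+)\s+(\S+)", line)
        if m:
            intro, fix = m.groups()
            # "-" means the flaw predates git history
            yield (None if intro == "-" else release_of(intro),
                   release_of(fix))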

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

26 September 2016

Kees Cook: security things in Linux v4.3

When I gave my "State of the Kernel Self-Protection Project" presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn't a lot of time to talk about them all, I figured I'd make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn't everything security-related or generally of interest, but they're the things I thought needed to be pointed out. If there's something security-related you think I should cover from v4.3, please mention it in the comments. I'm sure I haven't caught everything. :) A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren't unique to the project, nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the kernel self-protection ideal anyway; some things are just really interesting userspace-facing features. So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow. This raises the bar for attackers, since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink. While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I'm delighted to see a form of this land in upstream finally. To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
Ambient Capabilities
Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex least-privilege execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example, you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier (a hedged capsh sketch appears at the end of this post). For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn't enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter
Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That's it for v4.3. If I missed stuff you found interesting, please let me know! I'm going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.
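As promised above, a rough sketch of what ambient capabilities make possible, using the capsh utility from libcap (this assumes a libcap recent enough that capsh has the --addamb option; the command and user are purely illustrative):

# run ping as an unprivileged user while carrying CAP_NET_RAW
# across the exec(), without any filesystem capability bits on ping
sudo capsh --keep=1 --user=nobody \
    --inh=cap_net_raw --addamb=cap_net_raw \
    -- -c "ping -c 1 127.0.0.1"

Without the ambient set, the capability would be dropped at the exec() of ping unless the binary itself carried matching file capabilities.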

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

8 June 2016

Francois Marier: Simple remote mail queue monitoring

In order to monitor some of the machines I maintain, I rely on a simple email setup using logcheck. Unfortunately that system completely breaks down if mail delivery stops. This is the simple setup I've come up with to ensure that mail doesn't pile up on the remote machine.

Server setup
The first thing I did on the server-side is to follow Sean Whitton's advice and configure postfix so that it keeps undelivered emails for 10 days (instead of 5 days, the default):
postconf -e maximal_queue_lifetime=10d
Then I created a new user:
adduser mailq-check
with a password straight out of pwgen -s 32. I gave ssh permission to that user:
adduser mailq-check sshuser
and then authorized my new ssh key (see next section):
sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys
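Since this key will only ever be used to run mailq, one optional hardening step (standard authorized_keys options; the key material is of course truncated here) is to pin it to a forced command when pasting it in:

command="/usr/bin/mailq",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... egilsstadir-mailq-check

With that in place, the monitoring key cannot be used to get a shell on the server even if the laptop is compromised.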

Laptop setup
On my laptop, the machine from which I monitor the server's mail queue, I first created a new password-less ssh key:
ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub
which I then installed on the server. Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:
0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"
and that's it. I get a (locally delivered) email whenever the mail queue on the server is non-empty. There is a race condition built into this setup since it's possible that the server will want to send an email at 2am. However, all that does is send a spurious warning email in that case and so it's a pretty small price to pay for a dirt simple setup that's unlikely to break.

15 December 2015

Antoine Beaupré: Using a Yubikey NEO for SSH and OpenPGP on Debian jessie

I recently ordered two Yubikey devices from Yubico, partly because of a special offer from Github. I ordered both a Yubikey NEO and a Yubikey 4, although I am not sure I remember why I ordered two - you can see their Yubikey product comparison if you want to figure that out, but basically, the main difference is that the NEO has support for NFC while the "4" has support for larger RSA key sizes (4096). This article details my experiment on the matter. It is partly based on first hand experience, but also links to various other tutorials that helped me along the way. Especially thanks to folks on various IRC channels that really helped me out in understanding this. My objective in getting a hardware security token like this was three-fold:
  1. use 2FA on important websites like Github, to improve the security of critical infrastructure (like my Borg backup software)
  2. login to remote SSH servers without exposing my password or my private key material on third party computers
  3. store OpenPGP key material on the key securely, so that the private key material can never be compromised
To make a long story short: this article documents step 2 and implicitly step 3 (because I use OpenPGP to login to SSH servers). However it is not possible to use the key on arbitrary third party computers, given how much setup was necessary to make the thing work at all. 2FA on the Github site completely failed, but could be used on other sites, although this is not covered by this article. I have also not experimented in detail with the other ways the Yubikey can be used (sorry for the acronym flood). Update: OATH works! It is easy to configure and I added a section below.

Not recommended
After experimenting with the device and doing a little more research, I am not sure it was the right decision to buy a Yubikey. I would not recommend buying Yubikey devices because they don't allow changing the firmware, making the device basically proprietary, even in the face of an embarrassing security vulnerability on the Yubikey NEO that came out in 2015. A security device, obviously, should be as open as the protocols it uses, otherwise it's basically impossible to trust that the crypto hasn't been backdoored or compromised, or, in this case, is vulnerable to the simplest drive-by attacks. Furthermore, it turns out that the primary use case that Github was promoting is actually not working as advertised: to use the Yubikey on Github, you actually first need to configure 2FA with another tool, either with your phone's text messages (SMS) or with something like Google Authenticator. After contacting Github support, they explained that the Yubikey is seen as a "backup device", which seems really odd to me, especially considering the promotion and the fact that I don't have a "smart" (aka "Google", it seems these days) phone or the desire to share my personal phone number with Github. Finally, as I mentioned before, the fact that those devices are fairly new and the configuration necessary to make them work at all is completely obtuse, non-standardized, or at least not available by default on arbitrary computers makes them basically impossible to use on computers other than your own specially crafted gems.

Plugging it in
The Yubikey, when inserted into a USB port, seems to be detected properly. It shows up both as a USB keyboard and a generic device.
déc 14 17:23:26 angela kernel: input: Yubico Yubikey NEO OTP+U2F as /devices/pci0000:00/0000:00:12.0/usb3/3-2/3-2:1.0/0003:1050:0114.0016/input/input127
déc 14 17:23:26 angela kernel: hid-generic 0003:1050:0114.0016: input,hidraw3: USB HID v1.10 Keyboard [Yubico Yubikey NEO OTP+U2F] on usb-0000:00:12.0-2/input0
déc 14 17:23:26 angela kernel: hid-generic 0003:1050:0114.0017: hiddev0,hidraw4: USB HID v1.10 Device [Yubico Yubikey NEO OTP+U2F] on usb-0000:00:12.0-2/input1
We'll be changing this now - we want it to support OTP, U2F and CCID. Don't worry about those acronyms now, but U2F is for the web, CCID is for GPG/SSH, and OTP is for the One Time Passwords stuff mentioned earlier. I am using the Yubikey Personalization tool from stretch because the jessie one is too old, according to Gorzen. Indeed, I found out that the jessie version doesn't ship with the proper udev rules. Also, note that we need to run it with sudo, otherwise we get a permission denied error:
$ sudo apt install yubikey-personalization/stretch
$ sudo ykpersonalize -m86
Firmware version 3.4.3 Touch level 1541 Program sequence 1
The USB mode will be set to: 0x86
Commit? (y/n) [n]: y
To understand better what the above does, see the NEO composite device documentation. The next step is to reconnect the key, for the udev rules to kick in. If you were like me, you enthusiastically plugged in the device before installing the yubikey-personalization package, and the udev rules were not present then.

Configuring a PIN
Various operations will require you to enter a PIN when talking to the key. The default PIN is 123456 and the default admin PIN is 12345678. You will want to change those, otherwise someone that gets a hold of your key could do any operation without your consent. For this, you need to use:
$ gpg --card-edit
> passwd
> admin
> passwd
Be sure to remember those passwords! Of course, the key material on the Yubikey can be revoked if you lose the key, but only if you still have control of the master key, or if you have an OpenPGP revocation certificate (which you should have).

Configuring GPG
To do OpenPGP operations (like decryption, signatures and so on), or SSH operations (like authentication on a remote server), you need to talk with GPG. Yes, OpenPGP keys are RSA keys that can be used to authenticate with SSH servers; that's not new, and I have already been doing this with Monkeysphere for a while. Now the challenge is how to make GPG talk with the Yubikey. So the next step is to see if gpg can see the key alright, as described in the Yubikey importing keys howto - you will first need to install scdaemon and pcscd (according to this howto) for gpg-agent to be able to talk with the key:
$ sudo apt install scdaemon gnupg-agent pcscd
$ gpg-connect-agent --hex "scd apdu 00 f1 00 00" /bye
ERR 100663404 Card error <SCD>
Well, that failed. At this point, touching the key types a bunch of seemingly random characters wherever my cursor is sitting - fun, but still totally useless. That was because I had failed to reconnect the key: make sure the udev rules are in place and reconnect the key, and then the above should work:
$ gpg-connect-agent --hex "scd apdu 00 f1 00 00" /bye
D[0000]  01 00 10 90 00                                     .....
OK
This shows it is running applet version 1.0.10 (the 01 00 10 in the output), which is not vulnerable to the infamous security issue. (Note: I also happened to install opensc and gpgsm because of this suggestion, but I am not sure they are required at all.)

Using SSH
To make GPG work with SSH, you need to somehow start gpg-agent with ssh support, for example with:
gpg-agent --daemon --enable-ssh-support bash
Of course, this will work better if it's started with your Xsession. Such an agent should already be started, so you just need to add the ssh emulation to its configuration file and restart your X session.
echo 'enable-ssh-support' >> ~/.gnupg/gpg-agent.conf
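To try this in the current shell without restarting the session, a minimal sketch (gpg-agent 2.0-era behaviour: when daemonizing, it prints the environment settings to export; it will complain if an agent is already running):

eval "$(gpg-agent --daemon --enable-ssh-support)"
echo "$SSH_AUTH_SOCK"   # should now point at the gpg-agent socket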
In Debian Jessie, the ssh-agent wrapper will not start if it detects that you already have one running (for example from gpg-agent), but if that fails, you can try commenting out use-ssh-agent in /etc/X11/Xsession.options to keep it from starting up in your session. (Thanks to stackexchange for that reference.)

Here I assume you have already created an authentication subkey on your PGP key. If you haven't, I suggest trying out simply monkeysphere gen-subkey, which will generate an authentication subkey for you. You can also do it by hand by following one of the OpenPGP/SSH tutorials from Yubikey, especially the more complete one. If you are going to generate a completely new OpenPGP key, you may want to follow this simpler tutorial here.

Then you need to move your authentication subkey to the Yubikey. For this, you need to edit the key and use the keytocard command:
$ gpg2 --edit-key anarcat@debian.org
> toggle
> key 2
> keytocard
> save
Here, we use toggle to show the OpenPGP private key material. You should see a key marked with A for Authentication. Mine was the second one, so I selected it with key 2, which put a star next to it. The keytocard command moved it to the key, and save ensures the key is removed from the local keyring. Obviously, backups are essential before doing this, because it's perfectly possible to lose that key in the process, for example if you destroy or lose the Yubikey or forget the password. It's probably better to create a completely different authentication subkey for just this purpose, but that may require reconfiguring all remote SSH hosts, and you may not want to do that.

Then SSH should magically talk with the GPG agent and ask you for the PIN! That's pretty much all there is to it - if it doesn't, it means that gpg-agent is not your SSH agent, and obviously things will fail... Also, you should be able to see the key being loaded in the agent when it is:
$ ssh-add -l
2048 23:f3:be:bf:1e:da:e8:ad:4b:c7:f6:60:5e:03:c2:a6 cardno:000603647189 (RSA)
... that's about it! I have yet to cover 2FA and OpenPGP support, but that got me going for a while and I'll stop geeking around with that thing for now. It was fun, for sure, but I am not sure it was worth it.

Using OATH
This is pretty neat: it allows you to add two-factor authentication to a lot of things. For example, PAM has such a module, which I will configure here to allow myself to login to my server from untrusted machines. While I will still expose my main password to keyloggers, the OTP will prevent that password from being reused. This is a simplified version of this OATH tutorial. We install the PAM module with:
sudo apt install libpam-oath
Then, we can hook it into any PAM consumer, for example with sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -1,5 +1,8 @@
 # PAM configuration for the Secure Shell service
+# for the yubikey
+auth required pam_oath.so usersfile=/etc/users.oath window=5 digits=8
+
 # Standard Un*x authentication.
 @include common-auth
We also needed to explicitly allow OTP passwords in sshd, with:
ChallengeResponseAuthentication yes
This will force the user to enter a valid oath token on the server. Unfortunately, this will affect all users, regardless of whether they are present in the users.oath file. I filed bug #807990 regarding this, with a patch. Also, it means the main password is still exposed on the client machine - you can use the sufficient keyword instead of required to work around that, but then it means anyone with your key can login to your machine, which is something to keep in mind. The /etc/users.oath file needs to be created with something like:
#type   username        pin     start seed
HOTP    anarcat -   00
00 is obviously a fake and insecure seed. Generate a proper one with:
dd if=/dev/random bs=1k count=1 | sha1sum # create a random secret
Then the shared secret needs to be added to the Yubikey:
ykpersonalize -1 -o oath-hotp -o oath-hotp8 -o append-cr -a
You simply paste the random secret you created above, when prompted, and that shared secret will be saved in a Yubikey slot for future use. Next time you login to the SSH server, you will be prompted for an OATH password; you just touch the button on the key and it will be pasted there:
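To sanity-check that the server and the key will agree, the oathtool utility from the oath-toolkit package can compute the expected HOTP codes from the same shared secret (the seed below is just the placeholder from the example above):

# compute the first 8-digit HOTP code for a given hex seed
oathtool --hotp --digits=8 --counter=0 00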
$ ssh -o PubkeyAuthentication=no anarc.at
One-time password (OATH) for `anarcat':
Password:
[... logged in!]
Final note: the centralized file approach makes it hard, if not impossible, for users to update their own secret token. It would be nice if there were a user-accessible token file, maybe ~/.oath? I filed feature request #807992 about this as well.

8 June 2015

Craig Small: Checking Cloudflare SSL

My website has for a while used CloudFlare as its front-end. It's a rather nice setup and means my real server gets less of a hammering, which is a good thing. A few months ago they enabled a feature called Universal SSL which I have also added to my site. Around the same time, my SSL check scripts started failing for the website: the certificate had expired, apparently many many days ago. Something wasn't right.

The Problem
At first I'd simply get emails saying "The SSL certificate for enc.com.au (CN: ) has expired!". I use a program called ssl-cert-check that checks all (web, smtp, imap) of my certificates. It's very easy to forget to renew, and this program runs daily and does a simple check. Running the program on the command line gave some more information, but nothing (for me) that really helped:
$ /usr/bin/ssl-cert-check -s enc.com.au -p 443
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
unable to load certificate
140364897941136:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
139905089558160:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140017829234320:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140567473276560:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
enc.com.au:443 Expired -2457182
So, apparently, there was something wrong with the certificate. The odd thing was that this was CloudFlare, who seem to have a good idea of how to handle certificates, and all my browsers were happy. ssl-cert-check is a shell script that uses openssl to make the connection, so the next stop was to see what openssl had to say.
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 CONNECTED(00000003)
140115756086928:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:769:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 345 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
No peer certificate available. That was the clue I was looking for.

Where's my Certificate?
CloudFlare Universal SSL uses certificates that have multiple domains in the one certificate. They do this by having one canonical name, which is something like sni(numbers).cloudflaressl.com, and then multiple Subject Alternative Names (a bit like ServerAlias in apache configurations). This way a single server with a single certificate can serve multiple domains. The way that the client tells the server which website it is looking for is Server Name Indication (SNI). As part of the TLS handshaking, the client tells the server "I want website www.enc.com.au". The thing is, by default, both openssl s_client and the check script do not use this feature. That is why the SSL certificate checks were failing: the server was waiting for the client to ask what website it wanted. Modern browsers do this automatically, so it just works for them.

The Fix
For openssl on the command line, there is a flag -servername which does the trick nicely:
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 -servername enc.com.au 
CONNECTED(00000003)
depth=2 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO ECC Certification Authority
verify error:num=20:unable to get local issuer certificate
---
(lots of good SSL type messages)
That's openssl happy now: we asked the server which website we were interested in with -servername and got the certificate. The fix for ssl-cert-check is even simpler. Like a lot of things, once you know the problem, the solution is not only easy to work out, but someone has done it for you already. There is a Debian bug report on this problem with a simple fix from Francois Marier. Just edit the check script and change the line that has:
 TLSSERVERNAME="FALSE"
and change it to TRUE, so the edited line reads as shown below.
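 TLSSERVERNAME="TRUE"
Then the script is happy too: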
$ ssl-cert-check -s enc.com.au -p https
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
enc.com.au:https Valid Sep 30 2015 114
All working and as expected! This isn't really a CloudFlare problem as such; it is just that this was the first place I had seen this sort of SNI certificate being used in something I administer (or, more correctly, in front of something I administer).

5 June 2015

Patrick Schoenfeld: Testing puppet modules: an overview

When it comes to testing puppet modules, there are a lot of options, but for someone entering the world of puppet module testing, the pure variety may seem overwhelming. This is an attempt to provide some overview. So you've written a puppet module and would like to add some tests. Now what? As of today, puppet tests can basically be done in two ways, complementing each other:

Catalog tests
In most cases you should at least write some catalog tests.
As of writing this (June 2015) the tool of choice is rspec-puppet. There used to be at least one other and you might have heard about it, but it's deprecated. For an introduction to this tool, you are best served by reading its brief but sufficient docs.

Function acceptance tests
If catalog testing is not enough for you (e.g. you want to test that your website profile is actually installing and serving your site on port 80 and port 443), the next logical step is to write beaker tests, for tests in a real system (as real as a virtual machine can be). This is also what you need if you are writing custom types. Today's tool of choice for this job is beaker with beaker-rspec. After you've written some rspec tests, this might feel similar. Since the documentation might not seem very newbie-friendly at first glance, the page Howto Beaker lists the relevant pages in the documentation to get started in a sensible order. Basically it's: update your module's build dependencies (Gemfile), decide on a hypervisor, create (or describe) your test environment, write spec tests and execute them :)
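As a rough sketch of what that loop looks like in practice (assuming a module whose Gemfile pulls in puppetlabs_spec_helper and beaker-rspec, with acceptance tests living in the conventional spec/acceptance directory):

# install the test dependencies declared in the module's Gemfile
bundle install

# run the rspec-puppet catalog tests
bundle exec rake spec

# run the beaker acceptance tests against the configured nodes
bundle exec rspec spec/acceptance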
Skeleton of a module
If testing puppet modules falls into your lap and you've already written your puppet code, it's too late to start with a module anatomy as generated by puppet module generate. But it's certainly a good bet to know which technologies are today's common best practice.

Further reading
A very good guide to setting things up and writing tests of both types is the three-part blog post by Mickaël Canévet written for camptocamp. It is a basic guide to test-driven development (writing tests before writing actual code) on a practical example.

18 July 2014

Osamu Aoki: Debian does not boot ...Crucial/Micron RealSSD m4/C400/P400

Today, my PC did not boot as usual into Debian. The BIOS could not find my /dev/sda and was looking for the netboot image. I restarted my PC and got into the BIOS boot setting menu. Hmmm.... my first SSD (/dev/sda) was missing. My second HDD (/dev/sdb) was there. But I did not put the Grub boot-loader there. No wonder it did not boot.

I have a 32GB USB3 stick with a full Debian system on it. It is not a live CD image USB stick but an HDD-formatted and encrypted system. Though it is not the fastest system, it is very light and usable. I plugged it in and powered it up. It booted OK but /dev/sda was still missing. While it booted, I saw "ata1: COMRESET failed (errno=-16)". So this ata1 SSD could not be accessed from the BIOS nor from Linux. Sigh ...

Looking around the web from the USB stick system, I saw some people saying that a loose serial ATA cable sometimes causes this message. Since my PC is a laptop, it has no flexible cable but has an on-board connector inside for the SSD.

Hoping my problem was just a bad connection, I cracked open the back panel of my PC. The SSD looked fine. I unplugged it from the connector and reinserted it. After repeating this several times to be sure, I closed the back panel and booted.

It boots as expected into Debian. Looks like everything is fine.
SMART Error Log Version: 1
No Errors Logged
Good.
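For the record, that SMART error log was presumably pulled with smartctl from the smartmontools package, along these lines:

# query the drive's SMART error log
sudo smartctl -l error /dev/sda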

If you have any boot problem like mine, please reseat your SSD in its connector like I did before you panic.

Good luck.

Osamu

PS: This Crucial/Micron RealSSD m4/C400/P400 M4-CT256M4SSD2 previously had a problem: a firmware bug made it read-only. The firmware update fixed my Debian system on this SSD. I could fix this without any Win*** OS since the firmware update came as a bootable disk image file.

27 August 2013

John Goerzen: Engaged!

Today I have the delightful chance to write about a deep and wonderful joy. Yes, Laura and I are engaged! There are no words adequate for this kind of occasion, but there is a picture that gets close: I never imagined a person could find a friend so wonderful, a person with whom I have so much in common, someone that understands me and that I understand so well. And yet, here I am, engaged to that friend. Amazing only begins to describe the feeling. One of the first things Laura and I talked about was a hymn I was tinkering with, typesetting with GNU LilyPond. That hymn ends like this: "Since Love is Lord of heav'n and earth, how can I keep from singing?" It is a wonderful thought, and very true. Even literally true; I often find myself singing, humming, or playing the penny whistle during my day. One of my good friends once told me, "I am completely sure that your happiest days lie ahead of you." And he was right. I have already experienced them. This is a wonderful time, and every day brings plentiful reasons to be thankful. And as Laura and I prepare for a life together, I know that it is true not only that my friend was right, but that he is right: my happiest days are yet to come, and our happiest days are yet to come, too. I have been blessed in many ways, and feel like the luckiest man alive. To be loved, and to love, a person so wonderful is truly a remarkable gift.

15 August 2013

Ingo Juergensmann: Exim4 and TLS with GMX/Web.de

Due to the unveiling of the NSA surveillance by Edward Snowden, some German mail providers decided last week to use TLS when sending mails out. For example GMX and Web.de. Usually there shouldn't be a problem with that, but it seems as if the Debian package of Exim4 (exim4-daemon-heavy) doesn't support any TLS ciphers that those providers will accept. The Debian package uses GnuTLS for TLS and there is Bug #446036 that asks for compilation against OpenSSL instead. Anyway, maybe it's something in my config as I don't use the Debian config but my own /etc/exim4/exim4.conf. Here are the TLS related parts:
tls_advertise_hosts = *
tls_certificate = /etc/exim4/ssl.crt/webmail-ssl.crt
tls_privatekey = /etc/exim4/ssl.key/webmail-server.key
That's my basic setup. After discovering that GMX and Web.de could not send mails to me anymore, I added some more settings, following the Exim docs (they are commented out now, because I don't use GnuTLS anymore):
#tls_dhparam = /etc/exim4/gnutls-params-2236
#tls_require_ciphers = ${if =={$received_port}{25}\
#    {NORMAL:%COMPAT}\
#    {SECURE128}}
But I still got this kind of error:
2013-08-14 22:49:27 TLS error on connection from mout.gmx.net [212.227.17.21] (gnutls_handshake): Could not negotiate a supported cipher suite.
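For anyone trying to reproduce such a failure, the handshake can be exercised from the outside with nothing but the openssl command line (host and port adjusted to taste):

# start a STARTTLS handshake against the MTA and print the
# negotiated protocol and cipher suite, if any
echo "QUIT" | openssl s_client -starttls smtp -connect localhost:25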
As this didn't help either, I recompiled exim4-daemon-heavy against OpenSSL and, et voila, it worked again. So the question is whether there's any way to get it working with GnuTLS? Does the default Debian config work, and if so, why? And if not, can a decision be made to use OpenSSL instead of GnuTLS? Reading the bug report, it seems as if there are exemptions for linking against OpenSSL, so the GPL wouldn't be violated.

UPDATE 16.08.2013:
I reinstalled the GnuTLS version of exim4-daemon-heavy to test the recommendation in the comments with explicit tls_require_ciphers settings, but with no luck:
#tls_require_ciphers = SECURE256
#tls_require_ciphers = SECURE128
#tls_require_ciphers = NORMAL
These all resulted in the usual "(gnutls_handshake): Could not negotiate a supported cipher suite." error when trying the cipher settings one by one.

UPDATE 2 16.08.2013:
There was a different report about recent GnuTLS problem on the debian-user-german mailing list. It's not the same cause, but might be related.