Search Results: "Simon Richter"

7 January 2017

Simon Richter: Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case powerpc (32 bit, big-endian), while I wanted ppc64 (64 bit, big-endian). So, crossgrade time. If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot. To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64).

Step 1: Be Prepared

To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:
pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring
Step 2: Gradually Heat the Water so the Frog Doesn't Notice

Then, I added the sources to sources.list, and added the architecture to dpkg:
deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main
dpkg --add-architecture ppc64
apt update
Step 3: Time to Go Wild
apt install dpkg:ppc64
Obviously, that didn't work; in my case, libattr1 and libacl1 weren't in sync between the archives, so there was no valid way to install the powerpc and ppc64 versions in parallel. I used pbuilder to compile the current sid version for whichever architecture wasn't up to date (IIRC, one build for powerpc and one for ppc64), manually installed the libraries, then tried again:
apt install dpkg:ppc64
Woo, it actually wants to do that. Now, that only half works: apt calls dpkg twice, once to remove the old version and once to install the new one, and after the removal there is no working dpkg left to do the installation. Your options at this point are:
apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb
or, if you didn't think far enough ahead, cursing, followed by:
cd /tmp
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
tar -xJf /tmp/data.tar.xz
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb
Step 4: Automate That

Now, I'd like to get this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad. With the aptitude resolver, it was then simple to upgrade a test package:
aptitude install coreutils:ppc64 coreutils:powerpc-
The resolver did its thing and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine. So I asked dpkg for a list of all installed powerpc packages (since it's a ppc64 dpkg now, it reports powerpc as a foreign architecture), massaged that into shape with grep and sed, and gave the result to aptitude as a command line. Some time later, aptitude finished, and I had a shiny 64 bit system, crossgraded through an ssh session that remained open the whole time, and without a reboot. After closing the ssh session, the last 32 bit binary was deleted, as it was no longer in use. There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with
dpkg --force-overwrite -i ...
and then resuming what aptitude was doing, using
aptitude install
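The post leaves the grep/sed "massaging" step as an exercise; here is a sketch of what it might have looked like (the awk filter and the package names in the demonstration are my own, not from the post):

```shell
# Turn "package architecture" lines, as printed by dpkg-query, into
# "install the ppc64 version, remove the powerpc version" requests.
to_requests() {
    awk '$2 == "powerpc" { printf "%s:ppc64 %s:powerpc- ", $1, $1 }'
}

# On the real system this would be fed to aptitude:
#   aptitude install $(dpkg-query -W -f='${Package} ${Architecture}\n' | to_requests)
# Demonstration on canned input:
printf 'coreutils powerpc\ndpkg ppc64\nsed powerpc\n' | to_requests
```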
So, in summary, this still works fairly well.

9 December 2016

Simon Richter: Busy

I'm fairly busy at the moment, so I don't really have time to work on free software, and when I do I really want to do something else than sit in front of a computer. I have declared email bankruptcy at 45,000 unread mails. I still have them, and plan to deal with them in small batches of a few hundred at a time, but in case you sent me something important, it is probably stuck in there. I now practice Inbox Zero, so resending it is a good way to reach me. For my Debian packages, not much changes. Any package with more than ten users is team maintained anyway. Sponsoring for the packages where I agreed to do so goes on. For KiCad, I won't get around to much of what I'd planned this year. Fortunately, at this point no one expects me to do anything soon. I still look into the CI system and unclog anything that doesn't clear on its own within a week. Plans for December: Plans for January: Plans for February: Other than that, reading lots of books and meeting other people.

1 November 2016

Simon Richter: Using the Arduino IDE with a tiling window manager

The Arduino IDE does not work properly with tiling window managers, because they do some interesting reparenting. To solve this, add
_JAVA_AWT_WM_NONREPARENTING=1
export _JAVA_AWT_WM_NONREPARENTING
to the start script or your environment. Credit: "Joost"

1 May 2016

Simon Richter: With great power comes great responsibility

On the other hand,
export EDITOR='sed -ie "/^\+/s/ override//"'
yes e | git add -p
is a good way to commit all your changes except the addition of C++11 override specifiers.
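To see what the sed expression actually does, here is a small demonstration (the C++ lines are made up for illustration): on hunk lines added by the patch (leading "+"), the first " override" is deleted, so when git add -p hands each hunk to the editor, those additions are dropped from what gets staged.

```shell
# Apply the EDITOR command from above to a fake diff hunk: only the
# added line (starting with "+") loses its " override" specifier;
# context lines (leading space) are left alone.
printf '+virtual void f() override;\n virtual void g() override;\n' \
    | sed -e '/^\+/s/ override//'
```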

14 April 2016

Simon Richter: It Begins.

Just starting a small side project... 3D printing GLaDOS's head

14 March 2016

Simon Richter: IPsec settings for FortiNet VPN

So, a customer uses a FortiNet VPN gateway. Because I already have perfectly fine IPsec software installed, the only thing missing was an appropriate set of settings. As they use IKEv1 in aggressive mode, there is not much of an error reply if you get any of the settings wrong. So, here's a StrongSwan setup that works for me:
conn fortinet
        left=%any
        leftauth=psk
        leftid=""
        leftauth2=xauth
        xauth_identity="your username"
        leftsourceip=%config
        right=gateway IP address
        rightsubnet=VPN subnet
        rightauth=psk
        keyexchange=ikev1
        aggressive=yes
        ike=aes128-sha1-modp1536!
        esp=aes128-sha1-modp1536!
        auto=add
Not sure if that can be optimized further by getting the VPN subnet through mode_config as well, but I'm basically happy with the current state. In addition to that, you need the PSK and XAUTH secrets, obviously.
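For completeness, the secrets would go into ipsec.secrets; a sketch with placeholder values (the selectors and exact file location depend on your setup):

```
# /etc/ipsec.secrets (placeholder values)
: PSK "the preshared key"
your username : XAUTH "your password"
```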

12 March 2016

Simon Richter: 8192!

So, finally I've also joined the club. 2048 board with 8192 tile

3 March 2016

Simon Richter: Why sysvinit?

As most distributions have switched to systemd as default "init", some people have asked why we actually keep sysvinit around, as it's old and crusty, and systemd can do so much more. My answer is: systemd and sysvinit solve entirely different problems, and neither can ever replace the other fully. The big argument in favour of systemd is integration: when you keep everything in one process, it is easy to make sure that components talk to each other. That is nice, but kind of misses the point: integration could be done via IPC as well, and systemd uses a lot of it to separate privileges anyway. The tighter integration comes from something else: the configuration format. Where sysvinit uses a lot of shell scripts ("imperative" format), systemd uses explicit keywords ("descriptive" format), and this is the important design decision here: descriptive formats cannot do anything unanticipated, neither good nor bad. This is important for UI programmers: if there is a button that says "if I close the lid switch, suspend the laptop", then this needs to be connected to functionality that behaves this way, and this connection needs to work in both directions, so the user will find the active setting there when opening the dialog. So, we need to limit ourselves to a closed set of choices, so the UI people can prepare translations. This is the design tradeoff systemd makes: better integration in exchange for less flexibility. Of course, it is simple to allow "custom" actions everywhere, but that starts to break the "well-integrated" behaviour. There is an old blog post from Joey Hess about how to implement a simple alarm clock, using systemd, and this shows both how powerful and how limited the system is. 
On one hand, the single line WakeSystem=true takes care of the entire dependency chain attached to making sure the system is actually on -- when the system is going into suspend mode, some program needs to determine that there are scheduled jobs that should wake up the system, determine which of these is the next one, program the wakeup time into the hardware and only then allow the suspend to continue. On the other hand, the descriptive framework already breaks down in this simple example, because the default behaviour of the system is to go to sleep if the lid switch is closed, and the easiest way to fix this is to tell systemd to ignore the lid switch while the program is running. The IPC permissions disallow scheduled user jobs from setting this, so a "custom" action is taken that sets up the "ignore the lid switch" policy, changes to an unprivileged user, and runs the actual job. So, it is possible to get back flexibility, but this is traded for integration: while the job is running, any functionality that the "suspend laptop when the lid is closed" button had, is broken. For something simple as a switch on a personal laptop that breaks if the user configures something that is non-standard, that is acceptable. However, the keywords in the service unit files, the timer files, etc. are also part of a descriptive framework, and the services using them expect that the things they promise are kept by the framework, so there is a limit to how many exceptions you can grant. SystemV init really comes from the other direction here. Nothing is promised to init scripts but a vague notion that they will be run with sufficient privileges. There is no dependency tracking that ensures that certain services will always start before others, just priorities. You can add a dependency system like insserv if you like and make sure that all the requirements for it to work (i.e. 
services declaring their dependencies) are given, and there is a safe fallback of not parallelizing so we can always play it safe. Because there is so little promised to scripts, writing them becomes a bit tedious -- all of them have a large case statement where they parse the command line, all of them need to reimplement dropping privileges and finding the PID of a running process. There are lots of helpers for the more common things, like the LSB standard functions for colorful and matching console output, and Debian's start-stop-daemon for privilege and PID file handling, but in the end it remains each script's responsibility. Systemd will never be able to provide the same flexibility to admins while at the same time keeping the promises made in the descriptive language, and sysvinit will never reach the same amount of easy integration. I think it is fairly safe to predict both of these things, and I'd even go a step further: if any one of them tried, I'd ask the project managers what they were thinking. The only way to keep systems as complex as these running is to limit the scope of the project. With systemd, the complexity is kept in the internals, and it is important for manageability that all possible interactions are well-defined. That is a huge issue when writing adventure games, where you need to define interaction rules for each player action in combination with each object, and each pair of objects that can be combined with the "Use" action. Where adventure games would default to "This doesn't seem to work." or "I cannot use this.", this would not be a helpful error message when we're trying to boot, so we really need to cut down on actions and object types here. 
Sysvinit, on the other hand, is both more extreme in limiting the scope to very few actions (restart, wait, once) and only one object type (process that can be started and may occasionally terminate) in the main program, and a lot more open in what scripts outside the main program are allowed to do. The prime example for this is that the majority of configuration takes place outside the inittab even -- a script is responsible for the interface that services are registered by dropping init scripts into specific directories, and scripts can also create additional interfaces, but none of this is intrinsic to the init system itself. Which system is the right choice for your system is just that: your choice. You would choose a starting point, and customize what is necessary for your computer to do what you want it to do. Here's the thing: most users will be entirely happy with fully uncustomized systemd. It will suspend your laptop if you close the lid, and even give your download manager veto power. I fully support Debian's decision to use systemd as the default init for new installs, because it makes sense to use the default that is good for the vast majority of users. However, my opening statement still stands: systemd can never support all use cases, because it is constrained by its design. Trying to change that would turn it into an unmaintainable mess, and the authors are right in rejecting this. There are use cases that define the project, and use cases that are out of scope. This is what we need sysvinit for: the bits that the systemd project managers rightfully reject, but that are someone's use case. Telling these users to get lost would be extremely bad style for the "universal operating system." 
For example, consider a service that asks a database server for a list of virtual machines to start, runs each in a private network namespace that is named after replacing part of the virtual machine name with the last two octets of the host machine's IP address, then binds a virtual Ethernet device into the network namespace and sets up a NAT rule that allows the devices to talk to the public Internet and a firewall rule to stop VMs from talking to each other. Such a beast lives outside of systemd's world view. You can easily start it, but systemd would not know how to monitor it (as long as there is some process still running, is that a good sign?), not know how to shut down one instance, not know how to map processes to instances, and so on. SystemV init has the advantage here that none of these things are defined by the system, and as long as my init script has some code to handle "status", I can easily check up on my processes with the same interface I'd use for everything else. This requires me to have more knowledge of how the init system works, that is true; however, I'd also venture that the learning curve for sysvinit is shallower. If you know shell scripting, you can understand what init scripts do, without memorizing special rules that govern how certain keywords interact, and keeping track of changes to these rules as the feature set is expanded. That said, I'm looking at becoming the new maintainer of the sysvinit package so we can continue to ship it and give our users a choice, but I'd definitely need help, as I only have a few systems left that won't run properly with systemd because of its nasty habit of keeping lots of code in memory where sysvinit would unload it after use (by letting the shell process end), and these don't make good test systems. 
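The "some code to handle status" part is the usual case statement; here is a minimal sketch of the shape such a script takes (the daemon name and echo output are placeholders; a real Debian init script would use start-stop-daemon and the LSB logging functions):

```shell
# Minimal shape of an init script: nothing is promised to it beyond
# being run with sufficient privileges, so it parses its own command line.
handle() {
    case "$1" in
        start)   echo "starting mydaemon" ;;   # would call start-stop-daemon --start
        stop)    echo "stopping mydaemon" ;;   # would call start-stop-daemon --stop
        status)  echo "mydaemon is running" ;; # would check the PID file
        restart) handle stop; handle start ;;
        *)       echo "Usage: $0 {start|stop|restart|status}" >&2; return 2 ;;
    esac
}

handle status   # the same interface works for every service
```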
The overall plan is to set up a CI system for this. Ideally, I'd like to reach a state where we can actually ensure that the default configuration is somewhat similar regardless of which init system we use, and that it is possible to install with either (this is a nasty regression in jessie: bootstrapping a foreign-architecture system broke). This will take time, and step one is probably looking at bugs first -- I'm grateful for any help I can get here; primarily, I'd like to hear about regressions via bug reports. tl;dr: as a distribution, we need both systemd and sysvinit. If you suggest that systemd can replace sysvinit, I will probably understand that you think the systemd maintainers don't know project management. I'm reluctantly looking at becoming the official maintainer because I think sysvinit is important to keep around.

Simon Richter: UI Design is Hard, Let's Go Shopping

As I have a few hours of downtime on the train, I'm working on one of my favourite KiCad features again: the pin table. This silently appeared in the last KiCad release -- when you have a schematic component open in the editor, there is a new icon, currently on the right of the toolbar, that brings up a table of all pins. Currently, I have the pin number, name and type in the table (as well as the coordinates on the grid, but that's probably not that useful). For the release, this table is read-only, but can at least be useful for proofreading: left mouse click on a column header sorts (as would be expected), and right mouse click on a column header creates a grouping (i.e. all pins that have the same value in this column are collapsed into a single row). The latter is entirely unintuitive, but I have no idea how to show the user that this option exists at all. The good old DirectoryOpus on Amiga had a convention that buttons that had right-click functionality had an earmark to indicate that a "backside" exists, but that is a fairly obscure UI pattern, and would be hard to reproduce across all platforms. I could make the right click drop down a menu instead of immediately selecting this column for grouping -- at least this way it would be explained what is happening, but advanced users may get annoyed at the extra step. I could add a status bar with a hint, but that will always cost me screen space, and this only papers over the issue. I could add a menu with sorting/grouping options, which would allow novice and intermediate users to find the function, but that still does not hint at the shortcut, and the functions to set the sorting would have to be disabled on Linux because wxGTK cannot set the sort column from a program. For editing, another problem turns up in the collapsed rows: what should happen when we are basically editing multiple pins at once? 
The most useful mapping would be: However, I'd like to later on introduce multiple-function pins, which would add multiple (partially) exclusive pin names per pin number, which would change the semantics: At the moment, I'm mostly inclined to have separate modes for "grouped by pin number" and "grouped by pin name" -- these modes could be switched by the current grouping column, but that is neither intuitive, nor does it provide a full solution (because grouping by pin type is also supported). Time to look at the use cases: At 32C3, I've implemented a quick completeness check, consisting of a summary line that collapses the pin number column into a comma-separated list of ranges (where a range is defined as consecutive numbers in the last group of digits). I wonder if there is a group of developers with a UI focus who are happy about solving problems like these...

1 December 2015

Simon Richter: Debian at the 32C3

In case you are going to 32C3, you may be interested in joining us at the Debian Assembly there. As in previous years, this is going to be fairly informal, so if you are loosely affiliated with Debian or want to become a member by the time 34C3 rolls around, you are more than welcome to show up and sit there.

28 October 2015

Simon Richter: CEC remote control with Onkyo Receivers

For future reference: Onkyo AV receivers are picky about the CEC logical address of the player. If you have a default installation of Kodi, the default device type is "recording device", and the logical address is 14 -- this is rejected. If you change the CEC device configuration, which can be found in .kodi/userdata/peripheral_data/, to use a device_type of 4 (which maps to a logical address of 4), then the receiver will start sending key events to Kodi. This setting is not available from the GUI, so you really have to change the XML file. You may also need to set your remote control mode to 32910, by pressing the Display button in combination with the input source selector your Pi is connected to for three seconds, and then entering the code. Next step: finding out why Kodi ignores these key presses. :(

20 October 2015

Simon Richter: Bluetooth and Dual Boot

I still have a dual-boot installation because sometimes I need Windows for something (usually consulting work), and I use a Bluetooth mouse. Obviously, when I boot into Windows, it does not have the pairing information for the mouse, so I have to redo the entire pairing process, and repeat that when I get back to Linux. So, dear lazyweb: is there a way to share Bluetooth pairing information between multiple operating system installations?

17 October 2015

Simon Richter: Key Transition

So, for several years now I've had a second gpg key, 4096R/6AABE354. Several of you have already signed it, and I've been using it in Debian for some time already, but I've not announced it more widely yet, and I occasionally still get mail encrypted to the old key (which remains valid and usable, but it's only 1024 bits). Of course, I've also made a formal transition statement:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
OpenPGP Key Transition Statement for Simon Richter
I have created a new OpenPGP key and will be transitioning away from
my old key.  The old key has not been compromised and will continue to
be valid for some time, but I prefer all future correspondence to be
encrypted to the new key, and will be making signatures with the new
key going forward.
I would like this new key to be re-integrated into the web of trust.
This message is signed by both keys to certify the transition.  My new
and old keys are signed by each other.  If you have signed my old key,
I would appreciate signatures on my new key as well, provided that
your signing policy permits that without re-authenticating me.
The old key, which I am transitioning away from, is:
pub   1024D/5706A4B4 2002-02-26
      Key fingerprint = 040E B5F7 84F1 4FBC CEAD  ADC6 18A0 CC8D 5706 A4B4
The new key, to which I am transitioning, is:
pub   4096R/6AABE354 2009-11-19
      Key fingerprint = 9C43 2534 95E4 DCA8 3794  5F5B EBF6 7A84 6AAB E354
The entire key may be downloaded from: http://www.simonrichter.eu/simon.asc
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 6AABE354
If you already know my old key, you can now verify that the new key is
signed by the old one:
  gpg --check-sigs 6AABE354
If you are satisfied that you've got the right key, and the User IDs
match what you expect, I would appreciate it if you would sign my key:
  gpg --sign-key 6AABE354
You can upload your signatures to a public keyserver directly:
  gpg --keyserver keys.gnupg.net --send-key 6AABE354
Or email sr@simonrichter.eu (possibly encrypted) the output from:
  gpg --armor --export 6AABE354
If you'd like any further verification or have any questions about the
transition please contact me directly.
To verify the integrity of this statement:
  wget -q -O- http://www.simonrichter.eu/key-transition-2015-03-09.txt | gpg --verify
   Simon
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBAgAGBQJU/bbEAAoJEH69OHuwmQgRWOIH/AogHxhhVO5Tp5FFGpBFwljf
NzKTPBExMhZ/8trAzYybOWFv3Bx4AGdWkYfDnxP6oxQJOXVq4KL6ZcPPuIZuZ6fZ
bu0XHdPMU89u0TymR/WJENRCOcydRBe/lZs+zdJbKQtEZ+on4uNXxHpUiZPi1xxM
ggSdVBKn2PlCBcYih40S9Oo/rM9uBmYcFavX7JMouBSzgX78cVoIcY6zPRmHoq4k
TkGKfvHeSu+wbzWRmDwu/PFHRA4TKNvR6oeO+Et1yk454zjrHMXineBILRvvMCIA
t54pV6n+XzOUmtXcKnkIGltK+ZaJSV6am0swtx84RaevVXrknIQE8NvlA4MNgguI
nAQBAQIABgUCVP23tAAKCRDSx966V9+/u3j4BACVHifAcO86jAc5dn+4OkFZFhV1
l3MKIolL+E7Q7Ox+vJunGJJuGnOnazUK+64yDGZ2JxNJ4QNWD1FOs/Ng2gm82Vin
ArBtyp1ZGWUa+349X+1qarUQF9qAaUXDZjFp5Hzh/o6KC4t3eECxcb41og3LUTQD
VuG2KWNXYBe5P5ak9Q==
=o61r
-----END PGP SIGNATURE-----

15 October 2015

Simon Richter: Restarting blogging

I haven't had a blog for a few years now (and I might at some point be tempted to import all the old content), but occasionally I just want to point at something cool, rant about technology, or talk about my projects in the hope that someone else gets inspired to point at, or rant about them. My current interests are

13 March 2011

Lars Wirzenius: DPL elections: candidate counts

Out of curiosity, and because it is Sunday morning and I have a cold and can't get my brain to do anything tricky, I counted the number of candidates in each year's DPL elections.
Year Count Names
1999 4 Joseph Carter, Ben Collins, Wichert Akkerman, Richard Braakman
2000 4 Ben Collins, Wichert Akkerman, Joel Klecker, Matthew Vernon
2001 4 Branden Robinson, Anand Kumria, Ben Collins, Bdale Garbee
2002 3 Branden Robinson, Raphaël Hertzog, Bdale Garbee
2003 4 Moshe Zadka, Bdale Garbee, Branden Robinson, Martin Michlmayr
2004 3 Martin Michlmayr, Gergely Nagy, Branden Robinson
2005 6 Matthew Garrett, Andreas Schuldei, Angus Lees, Anthony Towns, Jonathan Walther, Branden Robinson
2006 7 Jeroen van Wolffelaar, Ari Pollak, Steve McIntyre, Anthony Towns, Andreas Schuldei, Jonathan (Ted) Walther, Bill Allombert
2007 8 Wouter Verhelst, Aigars Mahinovs, Gustavo Franco, Sam Hocevar, Steve McIntyre, Raphaël Hertzog, Anthony Towns, Simon Richter
2008 3 Marc Brockschmidt, Raphaël Hertzog, Steve McIntyre
2009 2 Stefano Zacchiroli, Steve McIntyre
2010 4 Stefano Zacchiroli, Wouter Verhelst, Charles Plessy, Margarita Manterola
2011 1 Stefano Zacchiroli (no vote yet)
Winners are indicated by boldface. I expect Zack to win over "None Of The Above", so I went ahead and boldfaced him already, even though there has not been a vote yet this year. The median number of candidates is 4.
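The median is easy to double-check from the Count column (13 elections, so the median is the 7th value after sorting); the pipeline below is my own quick check, not from the post:

```shell
# Sort the 13 candidate counts and pick the middle (7th) one.
printf '4 4 4 3 4 3 6 7 8 3 2 4 1' | tr ' ' '\n' | sort -n | sed -n '7p'
# prints: 4
```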

14 November 2008

Simon Richter: --as-needed

Raphael cheers for the addition of an --as-needed option to a linker command line. While I think the general idea of not linking libraries you don't use directly is a good one, --as-needed is not a proper solution since it tells the linker to second-guess the compiler. A proper solution would be to fix the build systems so they don't pass the names of unneeded libraries in the first place. There are valid reasons for linking against objects that provide no symbols to the object being built, for example if the plugin ABI of a program has these symbols exported to plugins (in order to have all plugins agree on a common set of allocation functions).

3 November 2008

Simon Richter: The future

I don't know what Web 3.0 will look like, but Web 4.0 will be built with structural markup and <blink> tags.

10 October 2008

Simon Richter: Berlin

I'll be there.

2 October 2008

Simon Richter: A design question

As some of you may or may not be aware, I'm hacking on a new build tool that has a purely descriptive language for project descriptions, with lots of sensible defaults. Right now, the only thing I really require users to do is to declare the name and type of the project, for example

Program: foo

This will just take all files in this directory that it understands, compile them, and link them to an executable called "foo" (or "foo.exe"). So far, so good. However, there are more complex examples, specifically those where multiple outputs are installed to different directories, for example in the case of libraries:

Library: foo
Public: foo.h, foo.c

I've omitted the library versioning stuff from the example, because it isn't necessary to understand the problem. This means: take all compilable files, link them into "libfoo.so" (resp. "foo.dll"), and only mark the symbols defined in foo.c as exported (not listing code generating files will fall back to exporting everything); then install foo.h into /usr/include (or make it available as <foo.h> to dependent projects). Now, I'd like to add an optional parameter to place a "published" resource into a namespace. How that is defined is dependent on the type of the resource (exported symbols could be tagged with a symbol version, while include files would be installed into a subdirectory of /usr/include). And this is where I would like to get some input. What should such a declaration look like in the project description file? The file is in RFC822 style format, with one section per project (multiple projects in a single directory are permitted if you list the inputs explicitly), and I'd rather keep it that way. Ideas so far:

1. Public[foo]: foo.h
   Public[FOO_1_0]: foo.c

   This means multiple lines, and dividing .c and .h files since their "namespace" attributes mean something entirely different.

2. Public: foo.h [foo], foo.c [FOO_1_0]

   This ends up becoming pretty repetitive if I have a lot of header files.

Neither of these looks as "clean" as I'd like a new file format to look. I'm pretty much sure "built-in" tagging of certain things will be handled by separate top-level tags (similar to automake):

User-Public: userdoc.sgml
Developer-Public: devdoc.sgml

I'm open to criticism on this one as well though.

21 September 2008

Simon Richter: I need to change my morning routine

Reading other people's blogs right after getting up leads to public embarrassment. :-P
1. Take a picture of yourself right now.
2. Don't change your clothes, don't fix your hair, just take a picture.
3. Post that picture with NO editing.
4. Post these instructions with your picture.

Next.