Search Results: "ghe"

15 July 2022

Steve Kemp: So we come to Lisp

Recently I've been working with simple/trivial scripting languages, and I guess I finally reached a point where I thought "Lisp? Why not". One of the reasons for recent experimentation was thinking about the kind of minimalism that makes implementing a language less work - being able to actually use the language to write itself. FORTH is my recurring example, because implementing it mostly means writing a virtual machine which consists of memory ("cells") along with a pair of stacks, and some primitives for operating upon them. Once you have that groundwork in place you can layer the higher-level constructs (such as "for", "if", etc). Lisp allows a similar approach, albeit with slightly fewer low-level details required, and far less tortuous thinking. Lisp always feels higher-level to me anyway, given the explicit data-types ("list", "string", "number", etc). Here's something that works in my toy lisp:
;; Define a function,  fact , to calculate factorials (recursively).
(define fact (lambda (n)
  (if (<= n 1)
    1
      (* n (fact (- n 1))))))
;; Invoke the factorial function, using apply
(apply (list 1 2 3 4 5 6 7 8 9 10)
  (lambda (x)
    (print "%s! => %s" x (fact x))))
The core language doesn't have helpful functions to filter lists, or build up lists by applying a specified function to each member of a list, but adding them is trivial using the standard car, cdr, and simple recursion. That means you end up writing lots of small functions like this:
(define zero? (lambda (n) (if (= n 0) #t #f)))
(define even? (lambda (n) (if (zero? (% n 2)) #t #f)))
(define odd?  (lambda (n) (! (even? n))))
(define sq    (lambda (x) (* x x)))
Once you have them you can use them in a way that feels simple and natural:
(print "Even numbers from 0-10: %s"
  (filter (nat 11) (lambda (x) (even? x))))
(print "Squared numbers from 0-10: %s"
  (map (nat 11) (lambda (x) (sq x))))
This all feels very sexy and simple, because the implementations of map, apply, and filter are all written in the lisp itself - and they're easy to write. Lisp takes things further than some other "basic" languages because of the (infamous) support for macros. But even without them, writing new useful functions is pretty simple. Where do things struggle? I guess I don't actually have a history of using lisp to solve real problems - although it's great for configuring my editor. Anyway, I guess the journey continues. Having looked at the obvious "minimal core" languages I need to go further afield: I'll make an attempt to look at some of the esoteric programming languages, and see if any of those are fun to experiment with.
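For the curious, here is roughly how map and filter can be built from car, cdr, and recursion, in the same style as the examples above. This is a sketch only: the empty-list test (nil?) and the quoted empty list are assumptions about the toy lisp's primitives, so the exact spelling may differ.
;; Build a new list by applying fn to each member of lst.
(define map (lambda (lst fn)
  (if (nil? lst)
    '()
    (cons (fn (car lst)) (map (cdr lst) fn)))))
;; Keep only the members of lst for which pred is true.
(define filter (lambda (lst pred)
  (if (nil? lst)
    '()
    (if (pred (car lst))
      (cons (car lst) (filter (cdr lst) pred))
      (filter (cdr lst) pred)))))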

1 July 2022

Steve Kemp: An update on my simple golang TCL interpreter

So my previous post introduced a trivial interpreter for a TCL-like language. In the past week or two I've cleaned it up, fixed a bunch of bugs, and added 100% test-coverage. I'm actually pretty happy with it now. One of the reasons for starting this toy project was to experiment with how easy it is to extend the language using itself. Some things are simple; for example, replacing this:
puts "3 x 4 = [expr 3 * 4]"
With this:
puts "3 x 4 = [* 3 4]"
That just means defining a function (proc) named *, which we can do like so:
proc * {a b} {
    expr $a * $b
}
(Of course we don't have lists, or variadic arguments, so this is still a bit of a toy example.) Doing more than that is hard, though, without support for more primitives written in the parent language than I've implemented. The obvious thing I'm missing is a native implementation of upvalue (what real TCL spells upvar), a primitive allowing you to affect/update variables in higher scopes. Without that you can't write things as nicely as you would like, and have to fall back to horrid hacks or be unable to do things at all.
# define a procedure to run a body N times
proc repeat {n body} {
    set res ""
    while {> $n 0} {
        decr n
        set res [$body]
    }
    $res
}
# test it out
set foo 12
repeat 5 { incr foo }
# foo is now 17 (i.e. 12 + 5)
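For contrast, standard TCL avoids this problem with its upvar and uplevel primitives, which bind variables or evaluate scripts in the caller's scope. A minimal sketch that runs in a stock tclsh (not in the toy interpreter):
proc repeat {n body} {
    set res ""
    while {$n > 0} {
        incr n -1
        # uplevel 1 evaluates the body in the caller's scope,
        # so "incr foo" below really updates the caller's foo
        set res [uplevel 1 $body]
    }
    return $res
}
set foo 12
repeat 5 { incr foo }
# foo is now 17, no eval hacks required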
It's a similar story when implementing the loop word, which should allow you to set the contents of a variable and run a body a number of times:
proc loop {var min max bdy} {
    // result
    set res ""
    // set the variable.  Horrid.
    // We miss upvalue here.
    eval "set $var [set min]"
    // Run the test
    while {<= [set "$$var"] $max} {
        set res [$bdy]
        // This is a bit horrid
        // We miss upvalue here, and not for the first time.
        eval {incr "$var"}
    }
    // return the last result
    $res
}
loop cur 0 10 { puts "current iteration $cur ($min->$max)" }
# output is:
# => current iteration 0 (0-10)
# => current iteration 1 (0-10)
# ...
That said I did have fun writing some simple test-cases, and implementing assert, assert_equal, etc. In conclusion I think the number of required primitives needed to implement your own control-flow, and run-time behaviour, is a bit higher than I'd like. Writing switch, repeat, while, and similar primitives inside TCL is harder than creating those same things in FORTH, for example.
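As a footnote to the testing mention above, a helper like assert_equal needs only a few lines of standard TCL; the toy interpreter's version differs only in detail:
proc assert_equal {got expected} {
    # "ne" compares as strings, so this works for numbers and text alike
    if {$got ne $expected} {
        puts "FAIL: got $got, expected $expected"
    } else {
        puts "OK: $got == $expected"
    }
}
assert_equal [expr {3 * 4}] 12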

29 June 2022

Aigars Mahinovs: Long travel in an electric car

Since the first week of April 2022 I have (finally!) changed my company car from a plug-in hybrid to a fully electric car. My new ride, for the next two years, is a BMW i4 M50 in Aventurine Red metallic. An elegant car with a very deep and memorable color, insanely powerful (544 hp/795 Nm), sub-4 second 0-100 km/h, a large 84 kWh battery (80 kWh usable), charging up to 210 kW, a top speed of 225 km/h, and also very efficient (which came out best in this trip), with a WLTP range of 510 km and an EVDB real range of 435 km. The car also has performance tyres (Hankook Ventus S1 evo3 245/45R18 100Y XL in front and 255/45R18 103Y XL in the rear, all at the recommended 2.5 bar) that reduce efficiency.

So I wanted to document and describe how it was for me to travel ~2000 km (one way) with this electric car from the south of Germany to the north of Latvia. I have done this trip many times before, since I live in Germany now and travel back to my relatives in Latvia 1-2 times per year, but this was the first time I made the trip in an electric car. And as this trip includes both travelling in Germany (where BEV infrastructure is the best in the world) and across Eastern/Northern Europe, I believe it can be interesting to a few people out there.

Normally when I did this trip with a gasoline/diesel car I would drive for two days with an intermediate stop somewhere around Warsaw, with about 12 hours of travel time each day. This would normally include a couple of bathroom stops each day, at least one longer lunch stop, and 3-4 refueling stops on top of that. It would use at least 6 liters of fuel per 100 km on average, with a total usage of about 270 liters for the whole trip (or about €540 just in fuel costs, nowadays). My (personal) quirk is that both fuel and recharging of my (business) car inside Germany is actually paid by my employer, so it is useful for me to charge up (or fill up) at the last station in Germany before driving on.

The plan for this trip was made in a similar way as when travelling with a gasoline car: travel as fast as possible on the German Autobahn network to the last charging stop on the A4 near Görlitz, charge up there as much as reasonable, then travel to a hotel in Warsaw, charge there overnight, and travel north towards the Ionity chargers in Lithuania, from where reaching the final target in the north of Latvia should be possible. How did this plan meet reality?

Travelling inside Germany with an electric car was basically perfect. The most efficient way involves driving fast and hard, with a top speed of even 180 km/h (where possible due to speed limits and traffic). The BMW i4 is very efficient at high speeds, with consumption maxing out at 28 kWh/100km when you actually drive at this speed all the time. In practice on this trip we saw consumption of 20.8-22.2 kWh/100km in the first legs. The more traffic there is, and the more speed limits and roadworks, the lower the average speed and the lower the consumption. With this kind of consumption we could comfortably drive 2 hours as fast as we could, then pick any fast charger along the route, and in 26 minutes at a charger (50 kWh charged in total) we'd be ready to drive for another 2 hours. This lines up very well with the recommended rest stops for biological reasons (bathroom, water or coffee, a bit of movement to get the blood circulating) and is very close to what I had to do anyway with a gasoline car. With a gasoline car I had to refuel first, then park, then go to the bathroom and so on.
With an electric car I can do all of that while the car is charging, so in the end the total time for a stop is very similar. Also note that there was a crazy heat wave going on, with the outside temperature at about 34°C minimum the whole day and hitting 40°C at one point of the trip, so a lot of power was used for cooling. The car has a heat pump as standard, but it still had to work hard to keep us cool in the sun.

The car was able to plan a charging route with all the required charging stops, and had all the good options (like multiple intermediate stops) that many other cars (hi Tesla) and mobile apps (hi Google and Apple) do not have yet. There are a couple of bugs with the charging route and the display of current route guidance; those are already fixed and will be delivered over the air with the July 2022 update. Another good alternative is ABRP (A Better Route Planner), which was specifically designed for routing electric cars along the best route for charging. Most phone apps (like Google Maps) have no idea about your specific electric car: they know nothing about the battery capacity or the charging curve, and they are missing key live data as well, such as the current consumption and the remaining energy in the battery. ABRP is different: it has data and profiles for almost all electric cars and can also be linked to live vehicle data, either via an OBD dongle or via the new Tronity cloud service. Tronity reads data from a vehicle-specific cloud service, such as the MyBMW service, saves it, tracks history, and also re-transmits it to ABRP for live navigation planning. ABRP allows for options and settings that no car or app offers, for example saying that you want to stop at a particular place for an hour or until the battery is charged to 90%, or saying that you have specific charging cards and only want to stop at chargers that support those. Both the car and ABRP also support alternate routes, even with multiple intermediate stops. In comparison, route planning by Google Maps, Apple Maps, Waze, or even Tesla does not really come close.

After charging up at the last German fast charger, a more interesting part of the trip started. In Poland the density of high performance chargers (HPC) is much lower than in Germany. There are many chargers (west of Warsaw), but the vast majority of them are (relatively) slow 50 kW chargers. And that is the difference between putting 50 kWh into the car in 23-26 minutes or in 60 minutes. It does not seem like much, but the key bit is that for the first 20 minutes it is easy to find stuff that should be done anyway; after that you are done, you are just waiting for the car, and whether that takes 4 more minutes or 40 more minutes is a big, perceptual, difference. So using HPCs is much, much preferable. We therefore put the Ionity charger near Lodz in as our intermediate target, and the car suggested an intermediate stop at a Greenway charger by Katy Wroclawskie. The location is a bit weird: it has 4 charging stations with 150 kW each. The weird bits are that each station has two CCS connectors but only one parking place (and the connectors share power, so if two cars were to connect, each would get half power). Also, from the front of the location one can only see two stations; the other two are semi-hidden around a corner. We actually missed them on the way to Latvia, and one person waited behind us for the charger for about 10 minutes. We only discovered the other two stations on the way back.
With slower speeds in Poland the consumption goes down to 18 kWh/100km, which now translates to up to 3 hours of driving between stops. At the end of the first day we had driven, starting from Ulm at 9:30 in the morning until about 23:00 in the evening, a total distance of about 1100 km with 5 charging stops: starting with 92% battery, charging for 26 min (50 kWh), 33 min (57 kWh + lunch), 17 min (23 kWh), 12 min (17 kWh) and 13 min (37 kWh). In the last two chargers you can see the difference between a good and fast 150 kW charger at a high battery charge level and a really fast Ionity charger at a low battery charge level, which makes charging faster still. We arrived at the hotel with 23% of battery. Overnight the car charged from a Porsche Destination Charger to 87% (57 kWh). That was a bit less than I would expect from a full-power 11 kW charger, but good enough. Hotels should really install 11 kW Type2 chargers for their guests; it is a really significant bonus that drives more clients to you.

The road between Warsaw and Kaunas is the most difficult part of the trip, both for the driving itself and for charging. For driving, the problem is that there will be a new highway going from Warsaw to the Lithuanian border, but it is not fully ready yet. So parts of the way one drives on the new, great and wide highway, and parts of the way on temporary roads or on old single-lane undivided roads. And the most annoying part is navigating between the parts, as signs are not always clear and the maps are either too old or too new: some maps do not have the new roads, and others show roads that have not actually been built or opened to traffic yet. It is really easy to lose one's way and take a significant detour.

As far as charging goes, there are basically only slow 50 kW chargers between Warsaw and Kaunas (for now). We chose to charge at the last charger in Poland, by the Suwalki Kaufland. That was not a good idea: there is only one 50 kW CCS connector, many people decide the same, and so there can be a wait. We had to wait 17 minutes before we could charge for 30 more minutes, just to get 18 kWh into the battery. Not the best use of time. On the way back we chose a different charger, in Lomza, where we could have a relaxed dinner while the car was charging. That was far more relaxing and a better use of time.

We also tried charging at an Orlen charger that was not recommended by our car, and we found out why. Unlike all the other chargers during our entire trip, this charger did not accept our universal BMW Charging RFID card. Instead it demanded that we download their own Orlen app and register there. The app is only available in some countries (and not in others), and on iPhone it is only available in Polish. That is a bad exception to the rule and a bad example. This is also how most charging works in the USA; here in Europe it is not normal. The norm is to use a charging card, either provided by the car maker or by another supplier (like PlugSurfing or Maingau Energy). The providers then make roaming arrangements with all the charging networks, so the cards just work everywhere. In the end the user gets the prices and the bills from their card provider as a single monthly bill. This also saves any credit card charges for the user. Having a clear, separate RFID card also means that one can easily choose how to pay for each charging session.
For example, I have a corporate RFID card that my company pays for (for charging in Germany) and a private BMW Charging card that I pay for myself (for charging abroad). Having the car itself authenticate directly with the charger (like Tesla does) removes the option to choose how to pay. Having each charging network require its own app or token brings too much chaos and takes too much setup. The optimum is having one card that works everywhere, with the option of additional cards for specific purposes.

Reaching the Ionity chargers in Lithuania was again a breath of fresh air: 20-24 minutes to charge 50 kWh is as expected. One can charge at the first Ionity just enough to reach the next one, and at the second charger one can charge up enough to reach either the Ionity charger in Adazi or the final target in Latvia. There is a huge number of chargers managed by CSDD (Road Traffic and Safety Directorate) all over Latvia, but they are 50 kW chargers: good enough for local travel, but not great for long-distance trips. The BMW i4 charges at over 50 kW on an HPC even at over 90% battery state of charge (SoC), which means it is always faster to charge at an HPC than at a 50 kW charger, if that is at all possible. We also tested the CSDD chargers and they worked without any issues. One could pay with the BMW Charging RFID card, one could use the CSDD e-mobi app or token, and one could also use Mobilly, an app that you can use in Latvia for everything from parking to public transport tickets, museums, or car washes.

We managed to reach our final destination near Aluksne with 17% range remaining after just 3 charging stops: 17+30 min (18 kWh), 24 min (48 kWh), 28 min (36 kWh). At the last stop we charged to 90%, which took a few more minutes than would have been optimal.

For travel around Latvia we were charging at our target farmhouse from a normal 3 kW Schuko EU socket. That is very slow: we charged for 33 hours and went from 17% to 94%, so not really full. That was perfectly fine for our purposes; we easily reached Riga, drove to the sea and then back to Aluksne with 8% still in reserve, and started charging again for the next trip. If we had needed to drive around more and charge faster, we could have used the normal 3-phase 440V connection in the farmhouse to have a red CEE 16A plug installed (the same as people use for welders). The BMW i4 comes standard with the new BMW Flexible Fast Charger, which has changeable socket adapters. It comes by default with a Schuko connector in Europe, but for €90 one can buy an adapter for a blue CEE plug (3.7 kW) or for red CEE 16A or 32A plugs (11 kW). Some public charging stations in France actually use the blue CEE plugs instead of the more common Type2 connectors, and the CEE plugs are also common in camping parking places.

On the way back, long-distance BEV travel was already well understood and did not cause us any problems. From our destination we could easily reach the first Ionity in Lithuania, on the Panevezhis bypass road, where in just 8 minutes we got 19 kWh and were ready to drive on to Kaunas; there a longer 32-minute stop before the charging desert of the Suwalki Gap gave us 52 kWh, to 90%. That brought us to a shopping mall in Lomzha where we had some food and charged up 39 kWh in a lazy 50 minutes.
That was enough to bring us to our return hotel for the night, Hotel 500W in Strykow by Lodz, which has a 50 kW charger on site. While we were having a late dinner and preparing for sleep, the car easily recharged to full (71 kWh in 95 minutes), so I just moved it from the charger to a parking spot before going to sleep. A really easy and well-flowing day. The second day back went even better, as we just needed an 18-minute stop at the same Katy Wroclawskie charger as before to get 22 kWh, and that was enough to get back to Germany. After that we were again flying on the Autobahn and charging as needed: 15 min (31 kWh), 23 min (48 kWh) and 31 min (54 kWh + food). We started the day at about 9:40 and were home at 21:40, after driving just over 1000 km on that day. So less than 12 hours for 1000 km travelled, including all charging, bio stops, food, and some traffic jams as well. Not bad.

Now let's take a look at all the apps and data connections that a technically minded customer can have for their car. Architecturally the car is a network of computers by itself, but it is very much secured and normally people do not have any direct access. However, once you log in to the car with your BMW account, the car gets your profile info and preferences (seat settings, navigation favorites, ...) and can also start sending information about its status to the BMW backend. This information is then available to the user over multiple different channels. There is no separate channel for each of those data flows: the data only goes once to the backend, and all other communication happens between the apps and the backend.

First of all, the MyBMW app. This is the go-to for everything about the car: seeing its current status and location (when not driving), sending commands to the car (lock, unlock, flash lights, pre-condition, ...), and also monitoring and controlling charging processes. You can also plan a route or destination in the app in advance and then just send it over to the car, so it already knows where to drive when you get in. This can also integrate with calendar entries, if you have locations for your appointments, for example. The app also shows the full charging history and allows a very easy export of that data; here I exported all charging sessions from June, then trimmed it back to only the sessions relevant to the trip and cut off some design elements to make the data more visible. So one can very easily see when and where we were charging, how much power we got at each spot, and (if you set prices for locations) it can even show costs.

I've already mentioned the Tronity service and its ABRP integration, but it also saves the information that it gets from the car and gathers that data over time. It has nice aspects, like showing the driven routes on a map, having ways to do business trip accounting, and having a good calendar view. Sadly it does not correctly capture the data for charging sessions (the amounts are incorrect). Update: after talking to Tronity support, it looks like the bug was an incorrect value for the usable battery capacity of my car. They will look into getting the right values there by default, but as a workaround one can edit their car in the system (after at least one charging session) and directly set the expected usable battery capacity in the car properties on the Tronity web portal settings.

One other fun way to see data from your BMW is the BMW integration in Home Assistant, which brings the car in as a device in your own smart home.
You can read all the variables from the car's current status (and Home Assistant makes cute historical charts), and you can even see interesting trends: for example, the remaining range shows a much higher value in Latvia, as its prediction is adapted to Latvian road speeds; during the trip it adapts to Polish and then to German road speeds, and thus to higher consumption and a lower maximum predicted remaining range. Having the car attached to Home Assistant also allows you to attach it to automations, both as a data and event source (like detecting when the car enters the "Home" zone) and as a target, so you could flash the car's lights or even lock or unlock it when certain conditions are met.

So, what in the end was the most important thing: the cost of the trip? In total we charged up 863 kWh, which would normally cost about €290 - close to half of what this trip would have cost with a gasoline car. Out of that, 279 kWh was charged in Germany (paid by my employer) and 154 kWh at the farmhouse (paid by our wonderful relatives :D), so the charging that I actually needed to pay for adds up to 430 kWh, or about €150. Typically it took about €400 in fuel that I had to pay to get to Latvia and back. The difference is really nice!

In the end I believe that there are three different ways of charging:
  • incidental charging - this is the vast majority of charging in normal day-to-day life. The car gets charged when and where it is convenient to do so along the way. If we go to a movie or a shop and there is a chance to leave the car at a charger, then it can charge up. This works really well and takes no extra charging time from us.
  • fast charging - charging up at an HPC during optimal charging conditions - from a relatively low level to no more than 70-80% - while you are still doing all the normal things one would do in a quick stop on a long trip: bio things, cleaning the windscreen, getting a coffee or a snack.
  • necessary charging - charging from whatever charger is available, just enough to be able to reach the next destination or the next fast charger.
The last category is the only one that is really annoying, and it should be avoided at all costs - even by shifting your plans so that you find something else useful to do while the necessary charging is happening, thus shifting it, at least partially, over to the incidental charging category. Then you are no longer just waiting for the car; you are doing something else, and the car is magically charged up again. And when one does that, travelling with an electric car becomes no more annoying than travelling with a gasoline car. Having more breaks in a trip is a good thing and makes the trips actually easier and less stressful - I was more relaxed during and after this trip than during previous ones. Having the car's air conditioning always on, even when stopped, was a godsend in the insane heat wave of 30°C-38°C that we were driving through.

Final stats: 4425 km driven on the trip. Average consumption: 18.7 kWh/100km. Time driving: 2 days and 3 hours. The car regenerated 152 kWh; charging stations provided 863 kWh. Questions? You can use this i4talk forum thread or this Twitter thread to ask them.

22 June 2022

John Goerzen: I Finally Found a Solid Debian Tablet: The Surface Go 2

I have been looking for a good tablet for Debian for, well, years. I want thin, light, portable, excellent battery life, and a serviceable keyboard. For a while, I tried a Lenovo Chromebook Duet. It meets the hardware requirements - well, sort of. The problem is with performance and the OS. I can run Debian inside the ChromeOS Linux environment. That works, actually pretty well. But it is slow. Terribly, terribly, terribly slow. Emacs takes minutes to launch. apt-gets also do. It has barely enough RAM to keep its Chrome foundation happy, let alone a Linux environment as well. Basically it is too slow to be serviceable. Not just that, but I ran into assorted issues with having it tied to a Google account, particularly being unable to log in unless I had Internet access after an update. That, and my growing concern over Google's privacy practices, led me to sort of write it off.

I have a wonderful System76 Lemur Pro that I'm very happy with. Plenty of RAM, a good compromise between portability and screen size at 14.1", and so forth. But a 10" goes-anywhere it's not. I spent quite a lot of time looking at thin-and-light convertible laptops of various configurations. Many of them were quite expensive, not as small as I wanted, or had dubious Linux support.

To my surprise, I wound up buying a Surface Go 2 from the Microsoft store, along with the Type Cover. They had a pretty good deal on it since the Surface Go 3 is out; the highest-processor model of the Go 2 is roughly similar to the Go 3 in terms of performance. There is an excellent linux-surface project out there that provides very good support for most Surface devices, including the Go 2 and 3. I put Debian on it. I had a fair bit of hassle with EFI, and wound up putting rEFInd on it, which mostly solved those problems. (I did keep a Windows partition, and if it comes up for some reason, the easiest way to get back to Debian is to use the Windows settings tool to reboot into advanced mode, and then select the appropriate EFI entry to boot from there.)

Researching on-screen keyboards, it seemed like Gnome had the most mature one, so I wound up with Gnome (my other systems are using KDE with tiling, but I figured I'd try Gnome on it). Almost everything worked without additional tweaking, the one exception being the cameras. The cameras on the Surfaces are a known point of trouble and I didn't bother to go to all the effort to get them working. With 8GB of RAM, I didn't put ZFS on it like I do on other systems. Performance is quite satisfactory, including for Rust development. Battery life runs about 10 hours with light use; less when running a lot of cargo builds, of course. The 1920x1280 screen is nice at 10.5". Gnome with Wayland does a decent job of adjusting to this hi-res configuration.

I took this as my only computer for a trip from the USA to Germany. It was a little small at times, though that was to be expected. It let me take a nicely small bag as a carry-on, and being light, it was pleasant to carry around in airports. It served its purpose quite well. One downside is that it can't be powered by a phone charger like my Chromebook Duet can. However, I found a nice slim 65W Anker charger that could charge it and phones simultaneously, which did the job well enough (I left the Microsoft charger with the proprietary connector at home). The Surface Go 2 maxes out at a 128GB SSD. That feels a bit constraining, especially since I kept Windows around.
However, it also has a micro SD slot, so you can put LUKS and ext4 on a card and use it as another filesystem. I popped in a micro SD I had lying around, and that felt a lot better storage-wise. I could also completely zap Windows, but that would leave no way to get firmware updates and I didn't really want to do that. Still, I don't use Windows, so that could be an option too. All in all, I'm pretty pleased with it. Around $600 for a fully-functional Debian tablet, with a keyboard, is pretty nice. I had been hoping for months that the Pinetab would come back into stock, because I'd much rather support a Linux hardware vendor, but for now I think the Surface Go series is the most solid option for a Linux tablet.
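For anyone wanting to replicate the encrypted micro SD setup, the usual LUKS-plus-ext4 recipe looks roughly like this, run as root. The device name /dev/mmcblk0 is an assumption and will vary, so check lsblk first - these commands destroy the card's contents:
# identify the micro SD card before doing anything destructive
lsblk
# encrypt the card, open the mapping, and create a filesystem
cryptsetup luksFormat /dev/mmcblk0
cryptsetup open /dev/mmcblk0 sdcard
mkfs.ext4 /dev/mapper/sdcard
# mount it and start using the extra space
mkdir -p /media/sdcard
mount /dev/mapper/sdcard /media/sdcard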

12 June 2022

Iustin Pop: Somewhat committing to a new sport

Quite a few years ago - 4, to be precise, so in 2018 - I did a couple of SUP trainings, organised by a colleague. That was enjoyable, but not really a match for me (asymmetric paddling, ugh!), so I also learned some kayaking, which I really love, but that's way higher overhead - no sea around in Switzerland, and lakes are generally too small. So I basically postponed any more water sports until sometime in the future when I'll finally decide what I want to do (and in what setup). I did a couple of one-off SUP rides in various places (2019, 2021), but I was really out of practice, so it wasn't really enjoyable. But with family, SUP offers a much easier way to carry a passenger (than a kayak), so slowly I started thinking about doing it more seriously.

So last week, after much deliberation, I bought an inflatable board, paddle, and various other accessories, and on Saturday went to try it out, in excellent weather (completely flat water) and hot but not overly so. The board choosing in itself was something I like to do (researching options), so for a bit I was concerned whether I'm more interested in the gear or in the actual paddling itself. To my surprise, it went way better than I feared: last time I tried, I paddled 30 minutes on my knees (knee-paddling?!) since I didn't dare to stand up. But this time, I launched and then did stand up, and while very shaky, I didn't fall in - neither by myself, nor with an extra passenger. An hour later, my initial shakiness had gone away, with the trainings slowly coming back to mind. Another half hour, and - for completely flat water - I felt quite confident. The view was awesome, the weather nice, the water cold enough to be refreshing, and the only question on my mind was: why didn't I do this 2, 3 years ago? Well, Corona aside. I forgot how much I love just being on the water. It definitely pays off the cost of going somewhere, unpacking the stuff, pumping up the board (that's a bit of a sport in itself), because the blue-green-light-blue colour palette is just how things should be:
Small lake, but beautiful view
Well, approximately blue. This being a small lake, it's more blue-green than proper blue. That's next level, since bigger lakes mean waves, and more traffic. Of course, this could also turn out like many other things I tried (a device in a corner that's not used anymore), but at least for yesterday, I was a happy paddler!

Russ Allbery: Review: The Shattered Sphere

Review: The Shattered Sphere, by Roger MacBride Allen
Series: Hunted Earth #2
Publisher: Tor
Copyright: July 1994
Printing: September 1995
ISBN: 0-8125-3016-0
Format: Mass market
Pages: 491
The Shattered Sphere is a direct sequel to The Ring of Charon and spoils everything about the plot of the first book. You don't want to start here. Also be aware that essentially everything you can read about this book will spoil the major plot driver of The Ring of Charon in the first sentence. I'm going to review the book without doing that, but it's unlikely anyone else will try. The end of the previous book stabilized matters, but in no way resolved the plot. The Shattered Sphere opens five years later. Most of the characters from the first novel are joined by some new additions, and all of them are trying to make sense of a drastically changed and far more dangerous understanding of the universe. Humanity has a new enemy, one that's largely unaware of humanity's existence and able to operate on a scale that dwarfs human endeavors. The good news is that humans aren't being actively attacked. The bad news is that they may be little more than raw resources, stashed in a safe spot for future use. That is reason enough to worry. Worse are the hints of a far greater danger, one that may be capable of destruction on a scale nearly beyond human comprehension. Humanity may be trapped between a sophisticated enemy to whom human activity is barely more noticeable than ants, and a mysterious power that sends that enemy into an anxious panic. This series is an easily-recognized example of an in-between style of science fiction. It shares the conceptual bones of an earlier era of short engineer-with-a-wrench stories that are full of set pieces and giant constructs, but Allen attempts to add the characterization that those books lacked. But the technique isn't there; he's trying, and the basics of characterization are present, but with none of the emotional and descriptive sophistication of more recent SF. The result isn't bad, exactly, but it's bloated and belabored. Most of the characterization comes through repetition and ham-handed attempts at inner dialogue. Slow plotting doesn't help. Allen spends half of a nearly 500 page novel on setup in two primary threads. One is mostly people explaining detailed scientific theories to each other, mixed with an attempt at creating reader empathy that's more forceful than effective. The other is a sort of big dumb object exploration that failed to hold my attention and that turned out to be mostly irrelevant. Key revelations from that thread are revealed less by the actions of the characters than by dumping them on the reader in an extended monologue. The reading goes quickly, but only because the writing is predictable and light on interesting information, not because the plot is pulling the reader through the book. I found myself wishing for an earlier era that would have cut about 300 pages out of this book without losing any of the major events. Once things finally start happening, the book improves considerably. I grew up reading large-scale scientific puzzle stories, and I still have a soft spot for a last-minute scientific fix and dramatic set piece even if the descriptive detail leaves something to be desired. The last fifty pages are fast-moving and satisfying, only marred by their failure to convince me that the humans were required for the plot. The process of understanding alien technology well enough to use it the right way kept me entertained, but I don't understand why the aliens didn't use it themselves. I think this book falls between two stools. 
The scientific mysteries and set pieces would have filled a tight, fast-moving 200 page book with a minimum of characterization. It would have been a throwback to an earlier era of science fiction, but not a bad one. Allen instead wanted to provide a large cast of sympathetic and complex characters, and while I appreciate the continued lack of villains, the writing quality is not sufficient to the task. This isn't an awful book, but the quality bar in the genre is so much higher now. There are better investments of your reading time available today. Like The Ring of Charon, The Shattered Sphere reaches a satisfying conclusion but does not resolve the series plot. No sequel has been published, and at this point one seems unlikely to materialize. Rating: 5 out of 10

11 June 2022

Louis-Philippe Véronneau: Updating a rooted Pixel 3a

A short while after getting a Pixel 3a, I decided to root it, mostly to have more control over the charging procedure. In order to preserve battery life, I like my phone to stop charging at around 75% of full battery capacity and to shut down automatically at around 12%. Some Android ROMs have extra settings to manage this, but LineageOS unfortunately does not. Android already comes with a fairly complex mechanism to handle the charge cycle, but it is mostly controlled by the kernel and cannot be easily configured by end-users. acc is a higher-level "systemless" interface for the Android kernel battery management, but one needs root to do anything interesting with it. Once rooted, you can use the AccA app instead of playing on the command line to fine-tune your battery settings.

Sadly, having a rooted phone also means I need to re-root it each time there is an OS update (typically each week). Somehow, I keep forgetting the exact procedure to do this! Hopefully, I will be able to use this post as a reference in the future :) Note that these instructions might not apply to your exact phone model; proceed with caution!

Extract the boot.img file

This procedure mostly comes from the LineageOS documentation on extracting proprietary blobs from the payload.
  1. Download the latest LineageOS image for your phone.
  2. Unzip the image to get the payload.bin file inside it.
  3. Clone the LineageOS scripts git repository: $ git clone https://github.com/LineageOS/scripts
  4. Extract the boot image (requires python3-protobuf):
     $ mkdir extracted-payload
     $ python3 scripts/update-payload-extractor/extract.py payload.bin --output_dir extracted-payload
You should now have a boot.img file.

Patch the boot image file using Magisk
  1. Upload the boot.img file you previously extracted to your device.
  2. Open Magisk and patch the boot.img file.
  3. Download the patched file back on your computer.
Flash the patched boot image
  1. Enable ADB debug mode on your phone.
  2. Reboot into fastboot mode. $ adb reboot fastboot
  3. Flash the patched boot image file: $ fastboot flash boot magisk_patched-foo.img
  4. Disable ADB debug mode on your phone.
Troubleshooting

In an ideal world, you would do this entire process each time you upgrade to a new LineageOS version. Sadly, this creates friction and makes updating much more troublesome. To simplify things, you can try to flash an old patched boot.img file after upgrading, instead of generating it each time. In my experience, it usually works. When it does not, the device behaves weirdly after a reboot and things that require proprietary blobs (like WiFi) will stop working. If that happens:
  1. Download the latest LineageOS version for your phone.
  2. Reboot into recovery (Power + Volume Down).
  3. Click on "Apply Updates".
  4. Sideload the ROM: $ adb sideload lineageos-foo.zip

22 May 2022

Ulrike Uhlig: How do kids conceive the internet? - part 3

I received some feedback on the first part of interviews about the internet with children that I'd like to share publicly here. Thank you! Your thoughts and experiences are important to me! In the first interview round there was this French girl.
Asked what she would change if she could, the 9-year-old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children.
To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it. Another person reacted to the same bit of text:
That's just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them. Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it's frightening.
Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:
It reminds me of one time about a dozen years ago, when i was still working closely with one of the city high schools, where i'd just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school's ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again. That afternoon, i'd been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they'd just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!) The kids all knew me from around the school, but the teacher introduced my appearance in class as one of the most Internet-connected people, and they wanted to ask me about whether i really thought the internet would do this kind of thing to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher's lesson plans, i think, but we had a good discussion. At one point, one of the kids asked me: if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone? all of my labor, even that grueling week, was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn't occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn't occur to him that this shit all works to the extent that it does because people make it go. I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else - that our contributions to it, and our labor to perpetuate it (or not), might actually help shape the future that the network helps us slide into. but he didn't even see that human decisions or labor played a role in it at all, let alone a potentially directive role. We were really starting at square zero, which wasn't his fault. Or the fault of his classmates, for that matter - but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more. but even the teacher clearly thought of the internet as a thing being done to us, not as something we might actually drive one way or another. And she's not even wrong: most people don't have much control, just like most people can't control the weather, even as our weather changes based on aggregate human activity.
I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:
that whole story happened before we started talking about "the cloud", but the cloud really reinforces this idea, i think. not that anyone actually thinks that the cloud is a literal cloud, but language shapes minds in subtle ways.
(Bold emphasis in the texts is mine.) Thanks :) I'm happy and touched that these interviews prompted your wonderful reactions, and I hope that there'll be more to come on this topic. I'm working on it!

5 May 2022

Bits from Debian: Google Platinum Sponsor of DebConf22

We are very pleased to announce that Google has committed to support DebConf22 as a Platinum sponsor. This is the third year in a row that Google is sponsoring The Debian Conference at the highest tier! Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. With this additional commitment as Platinum Sponsor for DebConf22, Google helps make our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year. Thank you very much, Google, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th. DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

26 April 2022

Reproducible Builds: Supporter spotlight: Google Open Source Security Team (GOSST)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as a recent one about ARDC. Today, however, we'll be talking with Meder Kydyraliev of the Google Open Source Security Team (GOSST).
Chris Lamb: Hi Meder, thanks for taking the time to talk to us today. So, for someone who has not heard of the Google Open Source Security Team (GOSST) before, could you tell us what your team is about? Meder: Of course. The Google Open Source Security Team (or GOSST) was created in 2020 to work with the open source community at large, with the goal of making the open source software that everyone relies on more secure.
Chris: What kinds of activities is the GOSST involved in? Meder: The range of initiatives that the team is involved in recognizes the diversity of the ecosystem and the unique challenges that projects face on their security journey. For example, our sponsorship of sos.dev ensures that developers are rewarded for security improvements to open source projects, whilst the long-term work on improving Linux kernel security tackles specific kernel-related vulnerability classes. Many of the projects GOSST is involved with aim to make it easier for developers to improve security through automated assessment (Scorecards and Allstar) and vulnerability discovery tools (OSS-Fuzz, ClusterFuzzLite, FuzzIntrospector), in addition to contributing to infrastructure that makes adopting certain best practices easier. Two great examples of best practice efforts are Sigstore for artifact signing and OSV for automated vulnerability management.
Chris: The usage of open source software has exploded in the last decade, but supply-chain hygiene and best practices have seemingly not kept up. How does GOSST see this issue and what approaches is it taking to ensure that past and future systems are resilient? Meder: Every open source ecosystem is a complex environment and that awareness informs our approaches in this space. There are, of course, no silver bullets, and long-lasting, material supply-chain improvements require infrastructure and tools that will make the lives of open source developers easier, all whilst improving the state of the wider software supply chain. As part of a broader effort, we created the Supply-chain Levels for Software Artifacts (SLSA) framework that has been used internally at Google to protect production workloads. This framework describes the best practices for source code and binary artifact integrity, and we are engaging with the community on its refinement and adoption. Here, package managers (such as PyPI, Maven Central, Debian, etc.) are an essential link in the software supply chain due to their near-universal adoption; users do not download and compile their own software anymore. GOSST is starting to work with package managers to explore ways to collaborate on improving the state of the supply chain and helping package maintainers and application developers do better, all with the understanding that many open source projects are developed in spare time as a hobby! Solutions like this, which are the result of collaboration between GOSST and GitHub, are very encouraging as they demonstrate a way to materially strengthen software supply chain security with readily available tools, while also improving development workflows. For GOSST, the problem of supply chain security also covers vulnerability management and solutions that make it easier for everyone to discover known vulnerabilities in open source packages in a scalable and automated way. This has been difficult in the past due to a lack of consistently high-quality data in an easily-consumable format. To address this, we're working on infrastructure (OSV.dev) to make vulnerability data more easily accessible, as well as a widely adopted and automation-friendly data format.
Chris: How does the Reproducible Builds effort help GOSST achieve its goals? Meder: Build reproducibility has a lot of attributes that are attractive as part of generally good build hygiene. As an example, hermeticity, one of the requirements to meet SLSA level 4, makes it much easier to reason about the dependencies of a piece of software. This is an enormous benefit during vulnerability or supply chain incident response. On a higher level, however, we always think about reproducibility from the viewpoint of a user and the threats that reproducibility protects them from. Here a lot of progress has been made, of course, but a lot of work remains to make reproducibility part of everyone's software consumption practices.
Chris: So if someone wanted to know more about GOSST or follow the team's work, where might they go to look? Meder: We post regular updates on Google's Security Blog and on the Linux hardening mailing list. We also welcome community participation in the projects we work on! See any of the projects linked above or OpenSSF's GitHub projects page for a list of efforts you can contribute to directly if you want to get involved in strengthening the open source ecosystem.
Chris: Thanks for taking the time to talk to us today. Meder: No problem. :)


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

22 April 2022

Andrej Shadura: To England by train (part 1)

This post was written in August 2021. Just as I was going to publish it, I received an email from ÖBB stating that due to a railway strike in Germany my night train would be cancelled. Since the rest of the trip had already been booked well in advance, I had to take a plane to Charleroi and a bus to Brussels to catch my Eurostar. Ultimately, I ended up publishing it in April 2022, just as I'm about to leave for a fully train-powered trip to the UK once again.

Before the pandemic started, I planned to use the last months of my then-expiring UK visa and go to England by train. I'd completed two long train journeys by that time already, to Brussels and to Belarus and Ukraine, but this would be something quite different, as I wanted to have multiple stops on my way, use night trains where it made sense, and go through the Channel Tunnel. The Channel Tunnel has fascinated me since my childhood; I first read about it in the Soviet Science and Life magazine (Nauka i zhizn) when I was seven. I've never had the chance to use it though, since to board any train going through it I'd first need to get to France, Belgium or the Netherlands, making it significantly more expensive than the cheap €30 Ryanair flights to Stansted.

As the coronavirus spread across the world, all of my travel plans, along with plans for a sabbatical, had to be cancelled. During 2020, I only managed to go on two weekend trips to Prague and Budapest, followed by a two-week holiday on Crete (we returned just a couple of weeks before the infection numbers rose and lockdowns started). I do realise that a lot of people couldn't even have this much because the situation in their countries was much worse; we were lucky to have had at least some travel.

Fast forward to August 2021: I'm fully vaccinated, I once again have a UK visa for five years, and the UK finally recognises the EU vaccination passports. Yay! I can finally go to Devon to see my mother and sister again. By train, of course. Compared to my original plan, this journey will be different: about the same or even more expensive than I originally planned, but shorter and with fewer stops on the way. My original plan was to take multiple trains from Bratislava to France or Belgium and complete this segment of the trip in about three days, enjoying my stay in a couple of cities on the way. Instead, I'm taking a direct NightJet from Vienna to Brussels, not stopping anywhere on the way.
Train route from Bratislava to Brussels
Since I was booking my trip just two weeks ahead, the price of the ticket was not what I hoped for, but much higher: €109 for the ticket itself and €60 for the berth (advance bookings could be about twice as cheap). Next, to London! Eurostar is still on a very much reduced schedule, running only one train a day from Amsterdam through Brussels and Lille to London. This means, of course, higher ticket prices (I paid about €100 for the ticket) and a longer waiting time in Brussels: my sleeper arrives at about 10 am, but the Eurostar train is scheduled to depart at 3 pm.
Train route from Brussels to London
The train makes a stop in Lille, which I initially suspected to be risky, as at the time I booked my tickets France was on the amber plus list for the UK, requiring a quarantine upon arrival. However, Eurostar announced that they would assign travellers from Lille to a different carriage to avoid other passengers having to go to quarantine, and recently France was taken off the amber plus list anyway. The train fare system in the UK is something I don't quite understand: sometimes split tickets are cheaper, sometimes they're more expensive; prices for the same service at different times can be vastly different; and off-peak tickets don't say what exactly off-peak means (very few people in the UK I asked were able to tell me when exactly off-peak hours are). Curiously, transfers between train stations using London Underground services can be included in railway tickets, but some last-mile connections like Exeter to Honiton cannot (though this used to be possible). Both GWR.com and TrainLine refused to sell me a single ticket from London to Honiton through Exeter, insisting I split the ticket at Exeter St Davids or take the slower South Western train to Honiton via Salisbury and Yeovil. I ended up buying a £57 ticket from Paddington to Exeter St Davids, with the first segment being the London Underground from St Pancras, and a separate £7.70 ticket to Honiton.
Map: change of trains in London (arrival at St Pancras, the Underground, departure from Paddington)
Map: change of trains in Exeter (arrival at Exeter St Davids, change to a South Western train, arrival at Honiton)
Date  Station                Arrival  Departure  Train
12.8  Bratislava-Petržalka            18:15      REX 7756
      Wien Hbf               19:15    19:53      NJ 50490
13.8  Bruxelles-Midi          9:55    15:06      EST 9145
      London St Pancras      16:03    16:34      TfL
      London Paddington      16:49    17:04      GWR 59231
      Exeter St Davids       19:19    19:25      SWR 52706
      Honiton                19:54
Unfortunately, due to the price of the tickets, I'm taking a €15 Ryanair flight back.

Update after the journey

Since I flew to Charleroi instead of comfortably sleeping in a night train, I had to put up with the inconveniences of airports, including cumbersome connections to the nearby cities. The only reasonable way of getting from Charleroi to Brussels is an overcrowded bus which takes almost an hour. I used to take this bus when I tried to save money on my way to FOSDEM, and I must admit it's not something I missed. Boarding the Eurostar train went fine; my vaccination passport and Covid test weren't really checked, just glanced at. The waiting room was a bit of a disappointment, with bars closed and vending machines broken. Since it was underground, I couldn't even see the trains until the very last moment, when we were finally allowed on the platform. The train itself, while comfortable, disappointed me with the bistro carriage: standing room only, instant coffee, and a poor selection of food and drinks. I'm glad I bought some food at the Carrefour at the Midi station! When I arrived in Exeter, I soon found out why the system refused to sell me a through ticket: 6 minutes is not enough to change trains at Exeter St Davids! Or it might have been, had I taken the right footbridge, but I took the one which led into a very talkative (and slow!) lift. I ended up running to the train just as it closed its doors and departed, leaving me in Exeter for an hour until the next one. I used this chance to walk to Exeter Central, and had a pint in a conveniently located pub around the corner.

P.S. The maps in this and other posts were created using uMap; the map data come from OpenStreetMap. The train route visualisation was generated with the help of the Raildar.fr OSRM instance.

21 April 2022

Andy Simpkins: Firmware and Debian

There has been a flurry of activity on the Debian mailing lists ever since Steve McIntyre raised the issue of including non-free firmware as part of official Debian installation images. Firstly, I should point out that I am in complete agreement with Steve's proposal to include non-free firmware as part of an installation image. Likewise, I think that we should have a separate archive section for firmware, because without doing so it will soon become almost impossible to install onto any new hardware. However, as always, the issue is more nuanced than a first glance would suggest.

Let's start by defining: what is firmware? Firmware is any software that runs outside the orchestration of the operating system. Typically firmware will be executed on a processor (or processors) separate from the processor(s) running the OS, but this does not need to be the case.

As Debian, we are content that our systems can operate using fully free and open source software and firmware; we can install our OS without needing any non-free firmware. This is an illusion! Each and every PC platform contains non-free firmware. It may be possible to run free firmware on some graphics controllers, Wi-Fi chipsets, or Ethernet cards, and we can (and perhaps should) choose to spend our money on systems where this is the case. When installing a new system we might still be forced to hold our nose and install with non-free firmware on the peripheral, before upgrading it to FLOSS firmware later, if such firmware exists and this is even possible to do. After the installation, however, we are running a full FLOSS system in terms of software and firmware.

We all (almost without exception) are running proprietary firmware whether we like it or not. Even after carefully selecting graphics and network hardware with FLOSS firmware options, we still haven't escaped from non-free firmware. Other peripherals contain firmware too: every keyboard, every disk (SSDs and spinning rust). Even the USB memory stick that you use to hold the Debian installation image contains a microcontroller, and hence also contains firmware that runs on it.
  1. Much of this firmware can not even be updated.
  2. Some can be updated, but is stored in FLASH ROM and the hardware vendor has defeated all programming methods (possibly circumvented with a hardware mod).
  3. Some of it can be updated but requires external device programmers (and often the programming connections are a series of test points dotted around the board and not on a header in order to make programming as difficult as possible).
  4. Sometimes the firmware can be updated from within the host operating system (i.e. Debian)
  5. Sometimes, as Steve pointed out in his post, the hardware vendor has put enough firmware on a peripheral to perform basic functions (perhaps enough to install the OS), but additional firmware is required to enable specific features (e.g. higher screen resolutions, hardware-accelerated functions, etc.)
  6. Finally, some vendors don't even bother with any non-volatile storage beyond a basic boot loader, and firmware must be loaded before the device can be used in any mode.
What about the motherboard? If we are lucky, we might be able to run a FLOSS implementation of the UEFI subsystem (edk2/tianocore, for example); indeed, the non-AMD64/i386 platforms based around ARM and MIPS architectures are often the most free when it comes to firmware. What about the microcode on the processor? Personally, I wasn't aware that this was updatable firmware until the Spectre and Meltdown classes of vulnerabilities arose a few years back.

So, back to Debian images including non-free firmware. This is specifically to address the last two use cases mentioned above, i.e. where firmware needs to be loaded to achieve a minimum functioning of a device, although it could also cover motherboard support and microcode as well. As far as I can tell, the proposal exists for several reasons: #1 Because some freely distributable firmware is required for more and more devices in order to install Debian at all, or because, whilst Debian can be installed, a desktop environment cannot be started or cannot fully function. #2 Because, frankly, it is less work to produce, test and maintain fewer installation images (as someone who performs tests on our images, this clearly gets my vote :-). And perhaps most important of all: #3 Because our least experienced users, and new users, will download an official image and give up if things don't "just work"™.

Steve's proposal option 5 would address these issues and I fully support it. I would love to see separate repositories for firmware and firmware-non-free. Additionally, to accompany firmware-non-free, I would like to have information on what the firmware actually does. Can I run my hardware without it? Which function(s) are limited without the firmware? Better yet, is there a FLOSS equivalent that I can load instead? Is this something that we can present in the Debian installer? I would love not to require non-free firmware, but if I can't avoid it, I would love it if the installer would enable a user to make an informed choice as to what firmware, if any, is installed. Should we be requesting (requiring?) this information for any non-free firmware image that we carry in the archive?

Finally, let's consider firmware in the wider, general case, not just the case where we need to load firmware from within Debian on each and every boot. Personally, I am annoyed whenever a hardware manufacturer has gone out of their way to prevent firmware updates. Let's face it: software contains bugs, and we can assume that the software making up a firmware image will as well. Critical (security) vulnerabilities found in firmware, especially firmware that runs on the same processor(s) as the OS, can impact the wider system, not just the device itself. This means that, without updatable firmware, the hardware itself may have to be withdrawn from use whilst it would otherwise still function. By preventing firmware updates, vendors are forcing early obsolescence on the hardware they sell; perhaps good for their bottom line, but certainly no good for users or the environment.

Here I can practice what I preach. As an Electronic Engineer / Systems Architect I have been beating the drum for in-system-updatable firmware for ALL programmable devices in a system, be it a simple peripheral or a deeply embedded system, and I can honestly say that over the last 20 years (yes, I have been banging this particular drum for that long) I have had 100% success in arguing this case commercially. Having device programmers in R&D departments is one thing, but that is an additional cost for production and field service.
Needing custom programming headers, or even a bed-of-nails fixture, to connect your target device to a programmer is more trouble than it is worth. Finally, the ability to update firmware in the field means that you can launch your product on schedule, make a sale and ship to a customer, even if the first thing that you need to do is download an update. Offering that to any project manager will make you very popular indeed.

So what if this firmware is non-free? As long as the firmware resides in non-volatile media without needing the OS to interact with it, we as a project don't need to carry it in our archives, and we as principled individuals can vote with our feet and wallets by choosing to purchase devices that have free firmware. But where that isn't an option, I'll take updatable but non-free firmware over non-free firmware that can not be updated, any day of the week. Sure, the manufacturer can choose to no longer support the firmware, and it is shocking how soon this happens; often in the consumer market, the manufacturer has withdrawn support for a product before it even reaches the end user (in which case we should boycott that manufacturer in future until they either change their ways or go bust). But again, if firmware can be updated in-system, that would at least allow the possibility of open firmware to arise. Indeed, the only commercial cases I have seen argued against updatable firmware have been either for DRM (in which case, good, let's get rid of both) or for RF licence compliance, and even then it is tenuous, because in this case the manufacturer wants in-system programming for its own use right up until a device is shipped out the door, typically achieved by blowing one-time-programmable fuse links.

19 April 2022

Steve McIntyre: Firmware - what are we going to do about it?

TL;DR: firmware support in Debian sucks, and we need to change this. See the "My preference, and rationale" section below. In my opinion, the way we deal with (non-free) firmware in Debian is a mess, and this is hurting many of our users daily. For a long time we've been pretending that supporting and including (non-free) firmware on Debian systems is not necessary. We don't want to have to provide (non-free) firmware to our users, and in an ideal world we wouldn't need to. However, it's very clearly no longer a sensible path when trying to support lots of common current hardware.

Background - why has (non-free) firmware become an issue?

Firmware is the low-level software that's designed to make hardware devices work. Firmware is tightly coupled to the hardware, exposing its features, providing higher-level functionality and interfaces for other software to use. For a variety of reasons, it's typically not Free Software. For Debian's purposes, we typically separate firmware from software by considering where the code executes (does it run on a separate processor? Is it visible to the host OS?) but it can be difficult to define a single reliable dividing line here. Consider the Intel/AMD CPU microcode packages, or the U-Boot firmware packages, as examples.

In times past, all necessary firmware would normally be included directly in devices / expansion cards by their vendors. Over time, however, it has become more and more attractive (and therefore more common) for device manufacturers to not include complete firmware on all devices. Instead, some devices just embed a very simple set of firmware that allows for upload of a more complete firmware "blob" into memory. Device drivers are then expected to provide that blob during device initialisation. Because of these drivers of change, more and more devices in a typical computer now need firmware to be uploaded at runtime for them to function correctly, and this need has grown over time. At the beginning, a typical Debian user would be able to use almost all of their computer's hardware without needing any firmware blobs. It might have been inconvenient to not be able to use the WiFi, but most laptops had wired ethernet anyway. The WiFi could always be enabled and configured after installation. Today, a user with a new laptop from most vendors will struggle to use it at all with our firmware-free Debian installation media. Modern laptops normally don't come with wired ethernet now. There won't be any usable graphics on the laptop's screen. A visually-impaired user won't get any audio prompts. These experiences are not acceptable, by any measure. There are new computers still available for purchase today which don't need firmware to be uploaded, but they are growing less and less common.

Current state of firmware in Debian

For clarity: obviously not all devices need extra firmware uploading like this. There are many devices that depend on firmware for operation, but we never have to think about them in normal circumstances. The code is not likely to be Free Software, but it's not something that we in Debian must spend our time on, as we're not distributing that code ourselves. Our problems come when our user needs extra firmware to make their computer work, and they need/expect us to provide it. We have a small set of Free firmware binaries included in Debian main, and these are included on our installation and live media. This is great - we all love Free Software and this works.
However, there are many more firmware binaries that are not Free. If we are legally able to redistribute those binaries, we package them up and include them in the non-free section of the archive. As Free Software developers, we don't like providing or supporting non-free software for our users, but we acknowledge that it's sometimes a necessary thing for them. This tension is acknowledged in the Debian Free Software Guidelines. This tension extends to our installation and live media. As non-free is officially not considered part of Debian, our official media cannot include anything from non-free. This has been a deliberate policy for many years. Instead, we have for some time been building a limited parallel set of "unofficial non-free" images which include non-free firmware. These non-free images are produced by the same software that we use for the official images, and by the same team. There are a number of issues here that make developers and users unhappy:
  1. Building, testing and publishing two sets of images takes more effort.
  2. We don't really want to be providing non-free images at all, from a philosophy point of view. So we mainly promote and advertise the preferred official free images. That can be a cause of confusion for users. We do link to the non-free images in various places, but they're not so easy to find.
  3. Using non-free installation media will cause more installations to use non-free software by default. That's not a great story for us, and we may end up with more of our users using non-free software and believing that it's all part of Debian.
  4. A number of users and developers complain that we're wasting their time by publishing official images that are just not useful for a lot (a majority?) of users.
We should do better than this.

Options

The status quo is a mess, and I believe we can and should do things differently. I see several possible options that the images team can choose from here. However, several of these options could undermine the principles of Debian. We don't want to make fundamental changes like that without the clear backing of the wider project. That's why I'm writing this...
  1. Keep the existing setup. It's horrible, but maybe it's the best we can do? (I hope not!)
  2. We could just stop providing the non-free unofficial images altogether. That's not really a promising route to follow - we'd be making it even harder for users to install our software. While ideologically pure, it's not going to advance the cause of Free Software.
  3. We could stop pretending that the non-free images are unofficial, and maybe move them alongside the normal free images so they're published together. This would make them easier to find for people that need them, but is likely to cause users to question why we still make any images without firmware if they're otherwise identical.
  4. The images team technically could simply include non-free into the official images, and add firmware packages to the input lists for those images. However, that would still leave us with problem 3 from above (non-free generally enabled on most installations).
  5. We could split out the non-free firmware packages into a new non-free-firmware component in the archive, and allow a specific exception only to allow inclusion of those packages on our official media. We would then generate only one set of official media, including those non-free firmware packages. (We've already seen various suggestions in recent years to split up the non-free component of the archive like this, for example into non-free-firmware, non-free-doc, non-free-drivers, etc. Disagreement (bike-shedding?) about the split caused us to not make any progress on this. I believe this project should be picked up and completed. We don't have to make a perfect solution here immediately, just something that works well enough for our needs today. We can always tweak and improve the setup incrementally if that's needed.)
These are the most likely possible options, in my opinion. If you have a better suggestion, please let us know! I'd like to take this set of options to a GR, and do it soon. I want to get a clear decision from the wider Debian project as to how to organise firmware and installation images. If we do end up changing how we do things, I want a clear mandate from the project to do that.

My preference, and rationale

Mainly, I want to see how the project as a whole feels here - this is a big issue that we're overdue solving. What would I choose to do? My personal preference would be to go with option 5: split the non-free firmware into a special new component and include that on official media. Does that make me a sellout? I don't think so. I've been passionately supporting and developing Free Software for more than half my life. My philosophy here has not changed. However, this is a complex and nuanced situation. I firmly believe that sharing software freedom with our users comes with a responsibility to also make our software useful. If users can't easily install and use Debian, that helps nobody. By splitting things out here, we would enable users to install and use Debian on their hardware, without promoting/pushing higher-level non-free software in general. I think that's a reasonable compromise. This is simply a change to recognise that hardware requirements have moved on over the years.

Further work

If we do go with the changes in option 5, there are other things we could do here for better control of and information about non-free firmware:
  1. Along with adding non-free firmware onto media, when the installer (or live image) runs, we should make it clear exactly which firmware packages have been used/installed to support detected hardware. We could link to docs about each, and maybe also to projects working on Free re-implementations.
  2. Add an option at boot to explicitly disable the use of the non-free firmware packages, so that users can choose to avoid them.
Acknowledgements

Thanks to the people who reviewed earlier versions of this document and/or made suggestions for improvement.

13 April 2022

Antoine Beaupr : Tuning my wifi radios

After listening to an episode of the 2.5 Admins podcast, I realized there was some low-hanging fruit I could pick to better tune my WiFi at home. You see, I'm kind of a fraud in WiFi: I only started a WiFi mesh in Montreal (now defunct), and I don't really know how any of that stuff works. So I was surprised to hear one of the podcast hosts say "it's all about airtime" and "you want to reduce the power on your access points" (APs). It seemed like sound advice: better bandwidth means less time on air, which means fewer collisions and lower latency, and less power also means fewer collisions. Worth a try, right?

Frequency

So the first thing I looked at was WifiAnalyzer to see if there was any optimisation I could do there. Normally, I try to avoid having nearby APs on the same frequency to avoid collisions, but who knows, maybe I had messed that up. And it turns out I did! Both APs were on "auto" for 5GHz, which typically means "do nothing or worse". 5GHz is really interesting because, in theory, there are LOTS of channels to pick from; it goes up to 196!! And both my APs were on 36, what gives? So the first thing I did was to set it to channel 100, as there was a long gap in WifiAnalyzer where no other AP was. But that just broke 5GHz on the AP. The OpenWRT GUI (luci) would just say "wireless not associated" and the ESSID wouldn't show up in a scan anymore. At first, I thought this was a problem with OpenWRT or my hardware, but I could reproduce the problem with both my APs: a TP-Link Archer A7 v5 and a Turris Omnia (see also my review). As it turns out, that's because that range of the WiFi band interferes with trivial things like satellites and radar, which makes the actually very useful radar maps look like useless christmas trees. So those channels require DFS to operate. DFS works by first listening on the frequency for a certain amount of time (1-2 minutes, but it could be as long as 10) to see if there's anything else transmitting at all. So typically, that means APs just don't operate at all in those bands, especially if you're near any major city, which generally means you are near a weather radar that will transmit on that band. In the system logs, if you have such a problem, you might see this:
Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
... and/or this:
Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
Here, it clearly says RADAR (in all caps too, which means it's really important). NO-IR is also important; I'm not sure what it means, but it could be that you're not allowed to transmit in that band because of other local regulations. There might be a way to work around those by changing the "region" in the luci GUI, but I didn't mess with that, because I figured that other devices will already have that configured, so using a forbidden channel might make it more difficult for clients to connect (although it's possible this is enforced only on the AP side). In any case, 5GHz is promising, but in reality you only get from channel 36 (5.170GHz) to 48 (5.250GHz), inclusively. Fast counters will notice that is exactly 80MHz, which means that if an AP is configured for that hungry, all-powerful 80MHz, it will effectively take up all 5GHz channels at once. This, in other words, is as bad as 2.4GHz, where you also have only two 40MHz channels. (Really, what did you expect: this is an unregulated frequency controlled by commercial interests...) So the first thing I did was to switch to 40MHz. This gives me two distinct channels in 5GHz at no noticeable bandwidth cost. (In fact, I couldn't find hard data on what the bandwidth ends up being on those frequencies, but I could still get 400Mbps, which is fine for my use case.)
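For reference, here is a minimal sketch of how the same channel/width change can be made from an OpenWRT shell instead of luci; the radio name (radio0) and the VHT40 mode string are assumptions that vary per device and driver, so check /etc/config/wireless first:
# Pin the 5GHz radio to a fixed channel with a 40MHz-wide mode
uci set wireless.radio0.channel='36'
uci set wireless.radio0.htmode='VHT40'
uci commit wireless
wifi   # apply the new configuration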

Power

The next thing I did was to fiddle with power. By default, both radios were configured to transmit as much power as they needed to reach clients, which means that if a client gets farther away, the AP would boost its transmit power, which, in turn, would mean the client would still connect to it instead of failing and properly roaming to the other AP. The higher power also means more interference with neighbors and other APs, although that matters less if they are on different channels. On 5GHz, power was about 20dBm (100mW) -- and more on the Turris! -- when I first looked, so I tried to lower it drastically to 5dBm (3mW) just for kicks. That didn't work so well, so I bumped it back up to 14dBm (25mW) and that seems to work well: clients hit about -80dBm when they get far enough from the AP, which gets close to the noise floor (and where the neighbor APs are), which is exactly what I want. On 2.4GHz, I lowered it even further, to 10dBm (10mW): since it's better at going through walls, I figured it would need less power. And anyway, I'd rather people use the 5GHz APs, so maybe that will act as an encouragement to switch. I was still able to connect correctly to the APs at that power as well.
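The same tweak as a shell sketch, mirroring the dBm values above (note mW = 10^(dBm/10), so 14dBm is about 25mW); which radio is 5GHz versus 2.4GHz, and the interface name, are assumptions for your device:
# Lower transmit power: 14 dBm on the 5GHz radio, 10 dBm on 2.4GHz
uci set wireless.radio0.txpower='14'
uci set wireless.radio1.txpower='10'
uci commit wireless
wifi
# Verify: iwinfo reports the current Tx-Power for an interface
iwinfo wlan0 info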

Other tweaks

I disabled the "Allow legacy 802.11b rates" setting in the 5GHz configuration. According to this discussion:
Checking the "Allow b rates" affects what the AP will transmit. In particular it will send most overhead packets including beacons, probe responses, and authentication / authorization as the slow, noisy, 1 Mb DSSS signal. That is bad for you and your neighbors. Do not check that box. The default really should be unchecked.
This, in particular, "will make the AP unusable to distant clients, which again is a good thing for public wifi in general". So I just unchecked that box and I feel happier now. I didn't run tests to see the effect separately, however, so this is mostly just a guess.
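If you prefer to do this outside the GUI: on current OpenWRT releases this checkbox corresponds, if I read the docs right, to the legacy_rates option, so a sketch would be:
# Disable legacy 802.11b rates on a radio (0 = unchecked in luci)
uci set wireless.radio1.legacy_rates='0'
uci commit wireless
wifi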

6 April 2022

Bits from Debian: Infomaniak Platinum Sponsor of DebConf22

We are very pleased to announce that Infomaniak has committed to supporting DebConf22 as a Platinum sponsor. This is the fourth year in a row that Infomaniak is sponsoring the Debian Conference at the highest tier! Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video-on-demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products it provides (both software and hardware). With this commitment as Platinum Sponsor, Infomaniak helps make our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year. Thank you very much, Infomaniak, for your support of DebConf22! Become a sponsor too! DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th. DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

5 April 2022

Dirk Eddelbuettel: RcppArmadillo 0.11.0.0.0 on CRAN: Upstream Updates

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 972 other packages on CRAN, downloaded over 24 million times (per the partial logs from the cloud mirrors of CRAN); the CSDA paper (preprint / vignette) by Conrad and myself has been cited 465 times according to Google Scholar. This release brings a new upstream release 11.0.0. We tested this very rigorously via three different release candidates, each of which got a full reverse-dependencies run (for which results are always logged here). The full set of changes (since the last CRAN release 0.10.8.1.0) follows.

Changes in RcppArmadillo version 0.11.0.0 (2022-04-04)
  • Upgraded to Armadillo release 11.0.0 (Creme Brulee)
    • added variants of inv() and inv_sympd() that provide rcond (reciprocal condition number)
    • expanded inv() and inv_sympd() with options inv_opts::tiny and inv_opts::allow_approx
    • stricter handling of singular matrices by inv() and inv_sympd()
    • stricter handling of non-sympd matrices by inv() and inv_sympd()
    • stricter handling of non-finite matrices by pinv()
    • more robust handling of rank deficient matrices by solve()
    • faster handling of diagonal matrices by rcond()
    • changed eigs_sym() and eigs_gen() to use higher quality RNG
    • quantile() and median() will now throw an exception if given matrices/vectors have NaN elements
    • workaround for yet another bug in Intel MKL
  • Until May 2022, protect correction to Field behavior via define of RCPP_ARMADILLO_FIX_Field
  • If a LAPACK installation with missing complex routines is found (as e.g. Ubuntu using 3.9.0) then the LAPACK unit test is skipped.

Changes in RcppArmadillo version 0.10.8.2.0 (2022-02-01)
  • Upgraded to Armadillo release 10.8.2 (Realm Raider)
    • fix potential speed regression in pinv() and rank()

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 March 2022

Bits from Debian: Lenovo Platinum Sponsor of DebConf22

We are very pleased to announce that Lenovo has committed to supporting DebConf22 as a Platinum sponsor. This is the fourth year in a row that Lenovo is sponsoring the Debian Conference at the highest tier! As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. With this commitment as Platinum Sponsor, Lenovo helps make our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year. Thank you very much, Lenovo, for your support of DebConf22! Become a sponsor too! DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th. DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

23 March 2022

Matthew Garrett: AMD's Pluton implementation seems to be controllable

I've been digging through the firmware for an AMD laptop with a Ryzen 6000 that incorporates Pluton for the past couple of weeks, and I've got some rough conclusions. Note that these are extremely preliminary and may not be accurate, but I'm going to try to encourage others to look into this in more detail. For those of you at home, I'm using an image from here, specifically version 309. The installer is happy to run under Wine, and if you tell it to "Extract" rather than "Install" it'll leave a file sitting in C:\DRIVERS\ASUS_GA402RK_309_BIOS_Update_20220322235241 which seems to have an additional 2K of header on it. Strip that and you should have something approximating a flash image.
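As a sketch, assuming "2K" means exactly 2048 bytes and reusing the extracted file name above (adjust paths to taste):
# Drop the first 2048 bytes to recover an approximate raw flash image
dd if=ASUS_GA402RK_309_BIOS_Update_20220322235241 of=flash.img bs=2048 skip=1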

Looking for UTF16 strings in this reveals something interesting:

Pluton (HSP) X86 Firmware Support
Enable/Disable X86 firmware HSP related code path, including AGESA HSP module, SBIOS HSP related drivers.
Auto - Depends on PcdAmdHspCoreEnable build value
NOTE: PSP directory entry 0xB BIT36 have the highest priority.
NOTE: This option will NOT put HSP hardware in disable state, to disable HSP hardware, you need setup PSP directory entry 0xB, BIT36 to 1.
// EntryValue[36] = 0: Enable, HSP core is enabled.
// EntryValue[36] = 1: Disable, HSP core is disabled then PSP will gate the HSP clock, no further PSP to HSP commands. System will boot without HSP.

"HSP" here means "Hardware Security Processor" - a generic term that refers to Pluton in this case. This is a configuration setting that determines whether Pluton is "enabled" or not - my interpretation of this is that it doesn't directly influence Pluton, but disables all mechanisms that would allow the OS to communicate with it. In this scenario, Pluton has its firmware loaded and could conceivably be functional if the OS knew how to speak to it directly, but the firmware will never speak to it itself. I took a quick look at the Windows drivers for Pluton and it looks like they won't do anything unless the firmware wants to expose Pluton, so this should mean that Windows will do nothing.

So what about the reference to "PSP directory entry 0xB BIT36 have the highest priority"? The PSP is the AMD Platform Security Processor - it's an ARM core on the CPU package that boots before the x86. The PSP firmware lives in the same flash image as the x86 firmware, so the PSP looks for a header that points it towards the firmware it should execute. This gives a pointer to a "directory" - a list of different object types and where they're located in flash (there's a description of this for slightly older AMDs here). Type 0xb is treated slightly specially. Where most types contain the address of where the actual object is, type 0xb contains a 64-bit value that's interpreted as enabling or disabling various features - something AMD calls "soft fusing" (Intel have something similar that involves setting bits in the Firmware Interface Table). The PSP looks at the bits that are set here and alters its behaviour. If bit 36 is set, the PSP tells Pluton to turn itself off and will no longer send any commands to it.
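To make the bit-36 test concrete, here is a quick shell sketch; the fuse value below is a made-up example, not something read from a real image:
# 64-bit "soft fuse" value from PSP directory entry 0xb (hypothetical)
fuses=0x0000001000000000
if (( (fuses >> 36) & 1 )); then
    echo "bit 36 set: the PSP will tell Pluton to turn itself off"
else
    echo "bit 36 clear: the Pluton (HSP) core stays enabled"
fi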

So, we have two mechanisms to disable Pluton - the PSP can tell it to turn itself off, or the x86 firmware can simply never speak to it or admit that it exists. Both of these imply that Pluton has started executing before it's shut down, so it's reasonable to wonder whether it can still do stuff. In the image I'm looking at, there's a blob starting at 0x0069b610 that appears to be firmware for Pluton - it contains chunks that appear to be the reference TPM2 implementation, and it broadly decompiles as valid ARM code. It should be viable to figure out whether it can do anything in the face of being "disabled" via either of the above mechanisms.

Unfortunately for me, the system I'm looking at does set bit 36 in the 0xb entry - as a result, Pluton is disabled before x86 code starts running and I can't investigate further in any straightforward way. The implication that the user-controllable mechanism for disabling Pluton merely disables x86 communication with it rather than turning it off entirely is a little concerning, although (assuming Pluton is behaving as a TPM rather than having an enhanced set of capabilities) skipping any firmware communication means the OS has no way to know what happened before it started running even if it has a mechanism to communicate with Pluton without firmware assistance. In that scenario it'd be viable to write a bootloader shim that just faked up the firmware measurements before handing control to the OS.

The bit 36 disabling mechanism seems more solid? Again, it should be possible to analyse the Pluton firmware to determine whether it actually pays attention to a disable command being sent. But even if it chooses to ignore that, if the PSP is in a position to just cut the clock to Pluton, it's not going to be able to do a lot. At that point we're trusting AMD rather than trusting Microsoft, but given that you're also trusting AMD to execute the code you're giving them to execute, it's hard to avoid placing trust in them.

Overall: I'm reasonably confident that systems that ship with Pluton disabled via setting bit 36 in the soft fuses are going to disable it sufficiently hard that the OS can't do anything about it. Systems that give the user an option to enable or disable it are a little less clear in that respect, and it's possible (but not yet demonstrated) that an OS could communicate with Pluton anyway. However, if that's true, and if the firmware never communicates with Pluton itself, the user could install a stub loader in UEFI that mimics the firmware behaviour and leaves the OS thinking everything was good when it absolutely is not.

So, assuming that Pluton in its current form on AMD has no capabilities outside those we know about, the disabling mechanisms are probably good enough. It's tough to make a firm statement on this before I have access to a system that doesn't just disable it immediately, so stay tuned for updates.


21 March 2022

Gunnar Wolf: Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there?), I read Anarcat's 20+ years of Emacs. And... well, should I brag (I mean, contribute) to the discussion? Of course, why not? Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.
From the Space Cadet keyboard that (obviously) influenced Emacs' early design
The Emacs editor was born, according to Wikipedia, in 1976, the same year as myself. I am clearly not among its first users; it was already a well-established citizen when I first learnt it. I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how I was attracted to computers; we would usually spend some hours there, between 7 and 11PM, on Friday nights. His institute had a computer room where they had very sweet gear: some 10 Heathkit terminals quite similar to this one. The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vectorial graphics output terminal, and some other stuff.

The main use my father had for it was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: it was not possible to produce a higher resolution copy for several years. But it is fun and interesting to see what we were able to produce with in-house tools back in 1985!

So, what could he teach me so I could use the computers while he worked? TeX, of course. No, not LaTeX (that was published in 1984). LaTeX is a set of macros developed initially by Leslie Lamport to make TeX easier to use; TeX was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX on the Foonly computer during a visit to UNAM.

Now, after 39 years hammering at Emacs buffers... have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles and lots of presentations and minor documents with it. But I don't read my mail or handle my git from Emacs. I could say I'm a relative newbie after almost four decades. Four decades... When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period when I barely used any emacs, but I always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands. And yes, Emacs and TeX are still the main tools I use day to day.

15 March 2022

Russell Coker: Librem 5 First Impression

I just received the Purism Librem 5 that I paid for years ago (I think it was 2018). The process of getting the basic setup done was typical (choosing keyboard language, connecting to wifi, etc). Then I tried doing things. One thing I did was update to the latest PureOS release, which gave me a list of the latest Debian packages installed, which is nice.

The first problem I found was the lack of notification when the phone is trying to do something. I'd select to launch an app, nothing would happen, then a few seconds later it would appear. When I go to the PureOS app store and get a list of apps in a category, nothing happens for ages (it shows a blank list), then it might show actual apps, or it might not. I don't know what it's doing; maybe downloading a list of apps, and if so it should display how many apps have had their summary data downloaded or how many KB of data have been downloaded, so I know whether it's doing something and how long it might take.

Running any of the productivity applications requires a GNOME keyring. I selected a keyring password of a few characters and it gave a warning about having no password (does this mean it took 3 characters to be the same as 0 characters?). Then I couldn't unlock it later. I tried deleting the key file and creating a new one with a single character password and got the same result. I think that such keyring apps have little benefit: all processes in the session have the same UID and presumably can use ptrace to get data from each other's memory space (whether same-UID ptrace is actually allowed depends on the kernel's settings; see the quick check below). If the keyring program was SETGID and the directory used to store the keyring files was a system directory with execute access only for that group, then it might provide some benefit (SETGID means that ptrace is denied). Ptrace is denied for the keyring, but relying on a user-space prompt for the passphrase to a file that the user can read seems of minimal benefit, as a hostile process could read the file and prompt for the passphrase. This is probably more of a Debian issue, and I reproduced the keyring issue with my Debian workstation.

The Librem 5 is a large phone (unusually thick by modern phone standards) and is rumoured to be energy hungry. When I tried charging it from the USB port on my PC (HP ML110 Gen9), the charge level went down. I used the same USB port and USB cable that I use to charge my Huawei Mate 10 Pro every day, so other phones can draw power from that USB port and cable faster than they use it.

The on-screen keyboard for the Librem 5 is annoying: it doesn't have a TAB key, and the cursor control keys are unreasonably small. The keyboard used by ConnectBot (the most popular SSH client for Android) is much better; it has its own keys for CTRL, ESC, TAB, arrows, HOME, and END in addition to the regular on-screen keyboard. The Librem 5 comes with a terminal app by default, which is much more difficult to use than it should be due to the lack of TAB filename completion etc.

The phone has a specified temperature range of 0C to 35C; that's not great for Australia, where even the cooler cities have summer temperatures higher than that. When charging on a fast charger (one that can provide energy faster than the phone uses it), the phone gets quite warm. It feels like more than 10C hotter than the ambient temperature, so I guess I can't charge it on Monday afternoon when the forecast is 31C! Maybe I should put a USB charger by my fridge with a long enough cable that I can charge a phone that's inside the fridge, seriously.
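As a quick sanity check on the ptrace point above: on Linux, the Yama security module can restrict ptrace even between processes of the same UID, and this one-liner shows the current policy (0 = classic same-UID ptrace allowed, as assumed above; 1 = only parent-to-child; 2 and 3 are stricter):
cat /proc/sys/kernel/yama/ptrace_scope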
Having switches to disable networking is a good security feature, and designing the phone with separate components that can't interfere with each other is good too. There are reports that software fixes will reduce the electricity use, which will alleviate the problems with charging and temperature. Most of my problems are clearly software related and therefore things that I can fix (in theory at least; I don't have unlimited coding time). Overall this wasn't the experience I had hoped for after spending something like $700 and waiting about 4 years (so long that I can't easily find the records of how long and how much money).

Getting It Working

It seems that the PureOS app store app doesn't work properly. I can visit the app site and then select an app to install, which then launches the app store app to do the install, which failed for every app I tried. Then I tried going to the terminal and running the following:
sudo bash
apt update
apt install openssh-server netcat
So I should be able to use APT to install everything I want and use the PureOS web site as a guide to what is expected to work on the phone. As an aside, the PureOS apt repository appears to be a mirror or rebuild of the Debian/Bullseye arm64 archive without non-free components, which they call Byzantium. Then I could ssh to my phone via ssh purism@purism (after adding an entry to /etc/hosts with the name purism and a static entry in my DHCP configuration to match) and run sudo bash to get root. To be able to login to root directly I had to install a ssh key (root is set up to login without password) and run usermod --expiredate= root (empty string for expire date) to allow direct root logins. I put the following in /etc/ssh/sshd_config.d/local.conf to restrict who can login (I added the users I want to the sshusers group). It also uses the ClientAlive checks, because having sessions break due to IP address changes is really common with mobile devices, and we don't want disconnected sessions taking up memory forever.
AllowGroups sshusers
PasswordAuthentication no
UseDNS no
ClientAliveInterval 60
ClientAliveCountMax 6
Notifications

The GNOME notification system is used for notifications in the phone UI. So if you install the package libnotify-bin you get a utility notify-send that allows sending notifications from shell scripts (a minimal example follows at the end of this post).

Final Result

Now it basically works as a Debian workstation with a single-button mouse. So I just need to configure it as I would a Debian system and fix bugs along the way. Presumably any bug fixes I get into Debian will appear in PureOS shortly after the next Debian release.
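For completeness, the notify-send usage mentioned under Notifications above is as simple as it sounds; a sketch (the message text is made up):
# Pop up a notification in the phone UI from a shell script
notify-send "Backup finished" "Nightly sync completed without errors"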
