Search Results: "chip"

11 September 2022

Shirish Agarwal: Politics, accessibility, books

Politics I have been reading books, both fiction and non-fiction, for a long, long time. My first book was most probably a comic, read when I was down with malaria as a kid; I must have been around 4-5 years old. Over the years, books have given me great joy and I continue to find nuggets of useful information in fiction as well as non-fiction. So here's to sharing something, and how that can lead you down a rabbit hole. This entry would be a bit NSFW as far as language is concerned. NYPD Red 5 by James Patterson First of all, I have no clue why James Patterson's popularity has been falling. He used to be right there with Lee Child and others, but not so much now. While I try to be mysterious about books, I would give a bit of a heads-up so people know what to expect. This one is probably more for the adult crowd, as there is a bit of sex as well as quite a few grey characters. The NYPD Red is a sort of elite police task force that basically exists for celebrities. In the book series, they do a lot of ass-kissing (figuratively more than literally). Now, the reason I have always liked fiction is that however wild the assumption or presumption, it has somewhere a grain of truth. And each and every time I read a book or two, that gets cemented. One of the statements in the book said something about how 9/11 took a lot of police personnel out of the game. First, there were a number of policemen patrolling the Twin Towers, so they literally perished in the explosion. Then there were policemen who were assigned to close the victims' cases (bring them to conclusion). When you are investigating your own brethren, or even civilians who perished on 9/11, you must experience emotional trauma with no outlet. Mental-health care, even for cops, is given about the same attention as for you and me (i.e. next to none). But both of these were my assumptions. The only statement that was in the book was that they lost a lot of bench strength. The same goes for the FDNY (the New York City Fire Department). This led me to With Crime At Record Lows, Should NYC Have Fewer Cops? This is more right-wing sentiment, and in fact there have been calls to defund the police. This led me to https://cbcny.org/ and one specific graph. Unfortunately, this tells the story from 2010-2022 but not before. I was looking for data from around 1999 to 2005, because that would tell whether or not it happened. Then I remembered reading in newspapers a year or two later how 9/11 had pushed NYC into recession. I looked it up online, and sure enough NY was booming before 9/11. One can argue that NYC could have come down anyway; everything that goes up comes down, it's a law of nature, but that would have been steady rather than abrupt. And once you are in recession, the first thing to go is personnel. So people from both the NYPD and the FDNY were let go, even though they were needed most then. As you can see, a single statement in a book can take you to other places and times, literally. Edit, 11th September: Quite a few people from the New York Port Authority also died; they lost a number of people directly and indirectly, and their police did a lot of patrolling of the waters near NYC. Later on, their department too saw a lot of early retirements.

Kosovo A couple of days back I had a look at the DebConf 2023 BOF that was held in Kosovo. One of the interesting things that happened during the BOF was when a woman participant chimed in and asked India to recognize Kosovo. It immediately triggered me, and I opened the Kosovo Wikipedia page to get some understanding of the topic. Reading up on it, I came to know that Russia didn't agree and doesn't recognize Kosovo. Mr. Modi likes Putin, and India imports a lot of its oil from Russia. Unrelatedly, but still usefully, we declined to join the IPEF. Earlier, we had rejected China's BRI. India has never been as vulnerable as she is now. Our foreign-exchange reserves have reached record lows. Now India has been importing quite a bit of Russian crude and has been buying arms and ammunition from them. We are also scheduled to buy a couple of warships and submarines, etc. We even took arms and ammunition from them on lease. So we can't afford to have them displeased with India, even though Russia has more than friendly relations with both China and Pakistan. At the same time, the U.S. is back to aiding Pakistan, which the mainstream media in India refuses to even cover. And to top all of this, we have the Chip 4 Alliance, which truth be told needs its own article, but we will make do with a paragraph. Edit, 11th September: Seems Kosovo isn't unique in that situation; there are 3-4 states like that. A brief look at worldpopulationreview tells you there are many more.

Chip 4 Alliance For almost a decade I have been screaming on my blog, as well as everywhere else, that chip fabrication is a national security thing. And for years, most people denied it. And now we have the Chip 4 Alliance. To understand this, you have to understand that China, somewhere around 2014 or so, came up with something called the "Big Fund". Now one can argue one way or the other about how successful the fund has been, but it has, without doubt, created ripples so strong that the U.S., Taiwan, Japan, and probably South Korea will join together and try to stem the tide. Interestingly, within this grouping, South Korea has been the weakest in its statements. Within the group itself there is a lot of tension, and there are a number of unresolved issues among these countries that both China and Russia would exploit: for example, the comfort women issue between South Korea and Japan, or the 1985 Plaza Accord between Japan and the U.S. Now people need to understand this: it is not just about China but also about us. If China has 5-6 times India's GDP, and their research budget is at the very least 100 times what India spends, how do you think we will be self-reliant? Whom are we fooling? Are we not tired of fooling ourselves? In diplomacy, countries use leverage. Sadly, we let go of some of our most experienced negotiators in 2014 and since then have been whistling in the wind.

Accessibility, Jitsi, IRC, Element-Desktop The Wikipedia page on Accessibility says the following: "Accessibility is the design of products, devices, services, vehicles, or environments so as to be usable by people with disabilities. The concept of accessible design and practice of accessible development ensures both direct access (i.e. unassisted) and indirect access, meaning compatibility with a person's assistive technology." Now IRC, or Internet Relay Chat, has been accessible for a long time. I know of even blind people who have been able to navigate IRC quite effortlessly, as there has been a lot of work done to make sure all the pieces speak to each other, so people with one or more disabilities can still use it and contribute without an issue. It helps that IRC and many of its clients have been around since the late 1980s, so most of them have had more than enough time to get all the bugs fixed, and both text-to-speech and speech-to-text work brilliantly on IRC. Newer software like Jitsi, or for that matter Telegram, is lacking those features. A few days ago on Telegram, somebody shared that Samsung Voice Input is also able to do the same. Samsung Voice Input works wonders as it translates voice to text; I have not yet tried the text-to-speech side, but perhaps somebody can try it and share the results, one way or the other. I have tried Element both on the desktop as well as on the mobile phone, and it has been disappointing, to say the least. On the desktop it is unruly, freezes once in a while, and is buggy. The mobile version is a little better, but that's not saying a lot. I prefer the desktop version as I can use the full-size keyboard. The bug I reported has been there since its Riot days; I had put up a bug report even then. All in all, yesterday was disappointing.

31 August 2022

Russell Coker: Links Aug 2022

Armor is an interesting technology from Manchester University for stopping rowhammer attacks on DRAM [1]. Unfortunately "armor" is also a term used for DRAM that looks fancy, aimed at ricers, so finding out whether it's used in production is difficult. The Reckless Limitless Scope of Web Browsers is an insightful analysis of the size of web specs and why it's impossible to implement them properly [2]. Framework is a company that makes laptop kits you can assemble and upgrade, an interesting concept [3]. I'll keep buying second-hand laptops for less than $400, but if I wanted to spend $1000 then I'd consider one of these. FS has an insightful article about why unstructured job interviews (i.e. the vast majority of job interviews) give a bad result [4]. How a child killer inspired Ayn Rand and indirectly conservatives all around the world [5]. Ayn Rand's love of a notoriously sadistic child killer is well known, but this article has a better discussion of it than most. 60 Minutes had an interesting segment on Foreign Accent Syndrome, where people suddenly sound like they are from another country [6]. An 18-minute video, but worth watching. Most Autistic people have experience of people claiming that they must be from another country because of the way they speak; that differences in brain function lead to differences in perceived accent is nothing new. The IEEE has an interesting article about the creation of the i860, the first million-transistor chip [7]. The Game of Trust is an interactive web site demonstrating the game theory behind trusting other people [8]. Here's a choose-your-own-adventure game on Twitter (Nitter is a non-tracking proxy for Twitter) [9]; can you get your pawn elected Emperor of the Holy Roman Empire?

27 August 2022

James Valleroy: FreedomBox Packages in Debian

FreedomBox is a Debian pure blend that reduces the effort needed to run and maintain a small personal server. Being a pure blend means that all of the software packages which are used in FreedomBox are included in Debian. Most of these packages are not specific to FreedomBox: they are common things such as the Apache web server, firewalld, slapd (an LDAP server), etc. But there are a few packages which are specific to FreedomBox: they are named freedombox, freedombox-doc-en, freedombox-doc-es, freedom-maker, fbx-all and fbx-tasks. freedombox is the core package. You could say that if freedombox is installed, then your system is a FreedomBox (or a derivative). It has dependencies on all of the packages that are needed to get a FreedomBox up and running, such as the previously mentioned Apache, firewalld, and slapd. It also provides a web interface for the initial setup, configuration, and installing apps. (The web interface service is called Plinth and is written in Python using the Django framework.) The source package of freedombox also builds freedombox-doc-en and freedombox-doc-es. These packages install the FreedomBox manuals for English and Spanish, respectively. freedom-maker is a tool that is used to build FreedomBox disk images. An image can be copied to a storage device such as a Solid State Drive (SSD), eMMC (an internal flash memory chip), or a microSD card. Each image is meant for a particular hardware device (or target device), or a set of devices. In some cases, one image can be used across a wide range of devices. For example, the amd64 image is for all 64-bit x86 architecture machines (including virtual machines), and the arm64 image is for all 64-bit ARM machines that support booting a generic image using UEFI. fbx-all and fbx-tasks are special metapackages, both built from a single source package named debian-fbx. They are related to tasksel, a program that displays a curated list of packages that can be installed, organized by interest area. Debian blends typically provide task files to list their relevant applications in tasksel. fbx-tasks only installs the tasks for FreedomBox (without actually installing FreedomBox); fbx-all goes one step further and also installs freedombox itself. In general, FreedomBox users won't need to interact with these two packages.
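As a rough illustration of how these packages relate (a sketch; the package names are the ones listed above, and plain apt is all that is needed):

    # Turn an existing Debian system into a FreedomBox: the core package
    # pulls in Apache, firewalld, slapd and friends as dependencies.
    sudo apt install freedombox

    # Or install only the tasksel task metadata, without FreedomBox itself:
    sudo apt install fbx-tasks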

12 August 2022

Wouter Verhelst: Upgrading a Windows 10 VM to Windows 11

I run Debian on my laptop (obviously); but occasionally, for $DAYJOB, I have some work to do on Windows. In order to do so, I have had a Windows 10 VM in my libvirt configuration that I can use. A while ago, Microsoft released Windows 11. I recently found out that all the components for running Windows 11 inside a libvirt VM are available, and so I set out to upgrade my VM from Windows 10 to Windows 11. This wasn't as easy as I thought, so here's a bit of a writeup of all the things I ran up against, and how I fixed them. Windows 11 has a number of hardware requirements that aren't necessary for Windows 10. There are a number of them, but the most important three are a modern enough processor, a TPM 2.0 module, and Secure Boot. So let's see about all three.

A modern enough processor If your processor isn't modern enough to run Windows 11, then you can probably forget about it (unless you want to use qemu JIT compilation -- I dunno, probably not going to work, and also not worth it if it were). If it is, all you need is the "host-passthrough" setting in libvirt, which I've been using for a long time now. Since my laptop is less than two months old, that's not a problem for me.
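In domain XML terms, that setting is a one-liner (a sketch of just the relevant element):

    <!-- Expose the host CPU model directly to the guest -->
    <cpu mode='host-passthrough'/>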

A TPM 2.0 module My Windows 10 VM did not have a TPM configured, because it wasn't needed. Luckily, a quick web search told me that enabling that is not hard. All you need to do is:
  • Install the swtpm and swtpm-tools packages
  • Add the TPM module, by adding the following XML snippet to your VM configuration:
    <devices>
      <tpm model='tpm-tis'>
        <backend type='emulator' version='2.0'/>
      </tpm>
    </devices>
    
    Alternatively, if you prefer the graphical interface, click on the "Add hardware" button in the VM properties, choose the TPM, set it to Emulated, model TIS, and set its version to 2.0.
You're done! Well, with this part, anyway. Read on.

Secure boot Here is where it gets interesting. My Windows 10 VM was old enough that it was configured for the older i440fx chipset. This one is limited to PCI and IDE, unlike the more modern q35 chipset (which supports PCIe and SATA, and does not support IDE, nor SATA in IDE mode). There is a UEFI/Secure Boot-capable BIOS for qemu, but it apparently requires the q35 chipset. Fun fact (which I found out the hard way): Windows stores where its boot partition is, somewhere. If you change the hard drive controller from an IDE one to a SATA one, you will get a BSOD at startup. In order to fix that, you need a recovery drive, which I put on a virtual USB disk. To create the virtual USB disk, go to the VM properties, click "Add hardware", choose "Storage", choose the USB bus, and then under "Advanced options", select the "Removable" option, so it shows up as a USB stick in the VM. Note: creating the recovery drive takes a while (took about an hour on my system), and your virtual USB drive needs to be 16G or larger (I used the libvirt default of 20G). There is no way, using the buttons in the virt-manager GUI, to convert the machine from i440fx to q35. However, that doesn't mean it's not possible to do so. I found that the easiest way is to use the direct XML editing capabilities in the virt-manager interface: if you edit the XML, it will produce error messages when something doesn't look right and tell you to go and fix it, whereas the virt-manager GUI will actually fix things itself in some cases (and will produce helpful error messages if not). What I did was:
  • Take backups of everything. No, really. If you fuck up, you'll have to start from scratch. I'm not responsible if you do.
  • Go to the Edit->Preferences option in the VM manager, then on the "General" tab, choose "Enable XML editing"
  • Open the Windows VM properties, and in the "Overview" section, go to the "XML" tab.
  • Change the value of the machine attribute of the domain.os.type element, so that it says pc-q35-7.0.
  • Search for the domain.devices.controller element that has pci in its type attribute and pci-root in its model one, and set the model attribute to pcie-root instead.
  • Find all domain.devices.disk.target elements, changing their dev=hdX to dev=sdX, and their bus="ide" to bus="sata" (see the sketch after this list).
  • Find the USB controller (domain.devices.controller with type="usb"), and set its model to qemu-xhci. You may also want to add ports="15" if you didn't have that yet.
  • Perhaps also add a few PCIe root ports:
    <controller type="pci" index="1" model="pcie-root-port"/>
    <controller type="pci" index="2" model="pcie-root-port"/>
    <controller type="pci" index="3" model="pcie-root-port"/>
    
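For illustration, after the edits above a disk's target element and the USB controller would look roughly like this (a sketch; device names and indexes will differ per VM):

    <!-- disk target, previously dev='hda' bus='ide' -->
    <target dev="sda" bus="sata"/>
    <!-- USB controller switched to qemu-xhci -->
    <controller type="usb" index="0" model="qemu-xhci" ports="15"/>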
I figured out most of this by starting the process for creating a new VM, selecting the "Modify configuration before installation" option on the last page of the wizard that pops up, going to the "XML" tab on the "Overview" section of the new window that shows up, and then comparing that against what my current VM had. Also, it took me a while to get this right, so I might have forgotten something. If virt-manager gives you an error when you hit the Apply button, compare notes against the VM that you're in the process of creating, and copy/paste things from there to the old VM to make the errors go away. As long as you don't remove configuration that is critical for things to start, this shouldn't break matters permanently (but hey, use your backups if you do break something -- you have backups, right?) OK, cool, so now we have a Windows VM that is... unable to boot. Remember what I said about Windows storing where its boot partition is? Yeah, there you go. Boot from the virtual USB disk that you created above, and select the "Fix the boot" option in the menu. That will fix it. Ha ha, only kidding. Of course it doesn't. I honestly can't tell you everything that I fiddled with, but I think the bit that eventually fixed it was where I chose "safe mode", which caused the system to do a hiccup, a regular reboot, and then suddenly everything was working again. Meh. Don't throw the virtual USB disk away yet, you'll still need it. Anyway, once you have it booting again, you will now have a machine that theoretically supports Secure Boot, but you're still running off an MBR partition. I found a procedure for converting things from MBR to GPT that was written almost 10 years ago, but surprisingly it still works, except for the bit where the procedure suggests you use diskmgmt.msc (for one thing, that was renamed; and for another, it can't touch the partition table of the system disk either). The last step in that procedure says to "restart your computer!", which is fine, except at this point you obviously need to switch over to the TianoCore firmware, otherwise you're trying to read a UEFI boot configuration on a system that only supports MBR booting, which obviously won't work. In order to do that, you need to add a loader element to the domain.os element of your libvirt configuration:
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
When you do this, you'll note that virt-manager automatically adds an nvram element. That's fine, let it. I figured this out by looking at the documentation for enabling Secure Boot in a VM on the Debian wiki, and using the same trick for switching chipsets that I explained above. Okay, yay, so now secure boot is enabled, and we can install Windows 11! All good? Well, almost. I found that once I enabled secure boot, my display reverted to a 1024x768 screen. This turned out to be because I was using older unsigned drivers, and since we're using Secure Boot, that's no longer allowed, which means Windows reverts to the default VGA driver, and that only supports the 1024x768 resolution. Yeah, I know. The solution is to download the virtio-win ISO from one of the links in the virtio-win github project, connect it to the VM, go to Device manager, select the display controller, click on the "Update driver" button, tell the system that you have the driver on your computer, browse to the CD-ROM drive, tick the "include subdirectories" option, and then let Windows do its thing. While there, it might be good to do the same thing for unrecognized devices in the device manager, if any. So, all I have to do next is to get used to the completely different user interface of Windows 11. Sigh. Oh, and to rename the "w10" VM to "w11", or some such. Maybe.

18 July 2022

Bits from Debian: DebConf22 welcomes its sponsors!

DebConf22 is taking place in Prizren, Kosovo, from 17th to 24th July, 2022. It is the 23rd edition of the Debian conference and organizers are working hard to create another interesting and fruitful event for attendees. We would like to warmly welcome the sponsors of DebConf22, and introduce you to them. We have four Platinum sponsors. Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. Infomaniak is our second Platinum sponsor. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware). The ITP Prizren is our third Platinum sponsor. ITP Prizren intends to be a changing and boosting element in the area of ICT, agro-food and creative industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovations that can help Kosovo improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole. Google is our fourth Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. Our Gold sponsors are: Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare. Microsoft, which enables digital transformation for the era of an intelligent cloud and an intelligent edge; its mission is to empower every person and every organization on the planet to achieve more. Ipko Telecommunications, which provides telecommunication services and is the first and most dominant mobile operator offering fast mobile internet over 3G and 4G networks in Kosovo. Ubuntu, the Operating System delivered by Canonical. U.S. Agency for International Development, which leads international development and humanitarian efforts to save lives, reduce poverty, strengthen democratic governance and help people progress beyond assistance. Our Silver sponsors are: Pexip, the video communications platform that solves the needs of large organizations. Deepin, a Chinese commercial company focusing on the development and service of Linux-based operating systems. Hudson River Trading, a company researching and developing automated trading algorithms using advanced mathematical techniques. Amazon Web Services (AWS), one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally. The Bern University of Applied Sciences, with nearly 7,800 students enrolled, located in the Swiss capital. credativ, a service-oriented company focusing on open-source software and also a Debian development partner.
Collabora, a global consultancy delivering Open Source software solutions to the commercial world. Arm: with the world's best SoC design portfolio, Arm-powered solutions have been supporting innovation for more than 30 years and are deployed in over 225 billion chips to date. GitLab, an open source end-to-end software development platform with built-in version control, issue tracking, code review, CI/CD, and more. Two Sigma, which applies rigorous inquiry, data analysis, and invention to help solve the toughest challenges across financial services. Starlabs, which builds software experiences and focuses on building teams that deliver creative tech solutions for its clients. Solaborate, which has the world's most integrated and powerful virtual care delivery platform. Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source base layer of industrial grade software. Matanel Foundation, which operates in Israel; its first concern is to preserve the cohesion of a society and a nation plagued by divisions. Bronze sponsors: bevuta IT, Kutia, Univention, Freexian. And finally, our Supporter level sponsors: Altus Metrum, Linux Professional Institute, Olimex, Trembelat, Makerspace IC Prizren, Cloud68.co, Gandi.net, ISG.EE, IPKO Foundation, The Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH. Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf22.

21 June 2022

John Goerzen: Lessons of Social Media from BBSs

In the recent article The Internet Origin Story You Know Is Wrong, I was somewhat surprised to see the argument that BBSs are a part of the Internet origin story that is often omitted. Surprised because I was there for BBSs, and even ran one, and didn't really consider them part of the Internet story myself. I even recently enjoyed a great BBS documentary and still didn't think of the connection in this way. But I think the argument is a compelling one.
In truth, the histories of Arpanet and BBS networks were interwoven socially and materially as ideas, technologies, and people flowed between them. The history of the internet could be a thrilling tale inclusive of many thousands of networks, big and small, urban and rural, commercial and voluntary. Instead, it is repeatedly reduced to the story of the singular Arpanet.
Kevin Driscoll goes on to highlight the social aspects of the "modem world": how BBSs and online services like AOL and CompuServe were ways for people to connect. And yet, AOL members couldn't easily converse with CompuServe members, and vice-versa. Sound familiar?
Today's social media ecosystem functions more like the modem world of the late 1980s and early 1990s than like the open social web of the early 21st century. It is an archipelago of proprietary platforms, imperfectly connected at their borders. Any gateways that do exist are subject to change at a moment's notice. Worse, users have little recourse, the platforms shirk accountability, and states are hesitant to intervene.
Yes, it does. As he adds, "People aren't the problem. The problem is the platforms." A thought-provoking article, and I think I'll need to buy the book it's excerpted from!

21 May 2022

Dirk Eddelbuettel: #37: Introducing r2u with 2 x 19k CRAN binaries for Ubuntu 22.04 and 20.04

One month ago I started work on a new side project which is now up and running, and deserving of an introductory blog post: r2u. It was announced in two earlier tweets (first, second) which contained the two (wicked) demos below, also found at the documentation site. So what is this about? It brings full and complete CRAN installability to Ubuntu LTS, both the focal release 20.04 and the recent jammy release 22.04. It is unique in resolving all R and CRAN packages with the system package manager. So whenever you install something, it is guaranteed to run, as its dependencies are resolved and co-installed as needed. Equally important, no shared library will be updated or removed by the system, as the possible dependency of the R package is known and declared. No other package management system for R does that, as only apt on Debian or Ubuntu can, and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips; sorry, Raspberry Pi and M1 laptops). It covers all of CRAN (nearly 19k packages), all the BioConductor packages depended upon (currently over 200), and excludes fewer than a handful of CRAN packages that cannot be built.

Usage Setup instructions are described concisely in the repo README.md and at the documentation site. The setup consists of just five (or fewer) simple steps, and scripts are provided too for focal (20.04) and jammy (22.04).

Demos Check out these two demos (also at the r2u site):

Installing the full tidyverse in one command and 18 seconds

Installing brms and its depends in one command and 13 seconds (and showing gitpod.io)

Integration via bspm The r2u setup can be used directly with apt (or dpkg or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-cran-rcpp and r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works, but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R's own install.packages() and update.packages() to apt. So we can just say (as the demos above show) install.packages("tidyverse") or install.packages("brms"), and binaries are installed via apt, which is fantastic as it connects R to the system package manager. The setup is really only two lines and is described at the r2u site as part of the setup.
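For a sense of what those two lines look like (a sketch based on my reading of the r2u docs; check the r2u site for the authoritative version), something like the following goes into Rprofile.site so that every R session routes package installation through apt:

    ## Enable bspm so install.packages() resolves binaries via apt
    suppressMessages(bspm::enable())
    ## Optional: skip per-package version checks against the repos
    options(bspm.version.check = FALSE)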

History and Motivation Turning CRAN packages into .deb binaries is not a new idea. Albrecht Gebhardt was the first to realize this about twenty years ago (!!) and implemented it with a single Perl script. Next, Albrecht, Stefan Moeller, David Vernazobres and I built on top of this, which is described in this useR! 2007 paper. A most excellent generalization and rewrite was provided by Charles Blundell in a superb Google Summer of Code contribution in 2008, which I mentored. Charles and I described it in this talk at useR! 2009. I ran that setup for a while afterwards, but it died via an internal database corruption in 2010, right when I tried to demo it at CRAN headquarters in Vienna. This peaked at, if memory serves, about 5k packages: all of CRAN at the time. Don Armstrong took it one step further in a full reimplementation which, if I recall correctly, covered all of CRAN and BioConductor, for what may have been 8k or 9k packages. Don had a stronger system (with full RAID-5) but it also died in a crash and was never rebuilt, even though he and I could have relied on Debian resources (as all these approaches focused on Debian). During that time, Michael Rutter created a variant that cleverly used an Ubuntu-only setup utilizing Launchpad. This repo is still going strong, used and relied upon by many, and about 5k packages (per distribution) strong. At one point, a group consisting of Don, Michael, Gábor Csárdi and myself (as lead/PI) had financial support from the RConsortium ISC for a more general re-implementation, but that support was withdrawn when we did not have time to deliver. We should also note other long-standing approaches. Detlef Steuer has been using the openSUSE Build Service to provide nearly all of CRAN for openSUSE for many years. Iñaki Úcar built a similar system for Fedora, described in this blog post. Iñaki and I also have an arXiv paper describing all this.

Details Please see the r2u site for all details on using r2u.

Acknowledgements The help of everybody who has worked on this is greatly appreciated. So a huge Thank you! to Albrecht, David, Stefan, Charles, Don, Michael, Detlef, Gábor, Iñaki, and whoever I may have omitted. Similarly, thanks to everybody working on R, CRAN, Debian, or Ubuntu; it all makes for a superb system. And another big Thank you! goes to my GitHub sponsors whose continued support is greatly appreciated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 April 2022

Andy Simpkins: Firmware and Debian

There has been a flurry of activity on the Debian mailing lists ever since Steve McIntyre raised the issue of including non-free firmware as part of official Debian installation images. Firstly I should point out that I am in complete agreement with Steve's proposal to include non-free firmware as part of an installation image. Likewise I think that we should have a separate archive section for firmware, because without doing so it will soon become almost impossible to install onto any new hardware. However, as always, the issue is more nuanced than a first glance would suggest. Let's start by defining: what is firmware? Firmware is any software that runs outside the orchestration of the operating system. Typically firmware will be executed on a processor (or processors) separate from the processor(s) running the OS, but this does not need to be the case. As Debian, we are content that our systems can operate using fully free and open source software and firmware; we can install our OS without needing any non-free firmware. This is an illusion! Each and every PC platform contains non-free firmware. It may be possible to run free firmware on some graphics controllers, Wi-Fi chipsets, or Ethernet cards, and we can (and perhaps should) choose to spend our money on systems where this is the case. When installing a new system we might still be forced to hold our nose and install with non-free firmware on the peripheral, before upgrading it to FLOSS firmware later, if such firmware exists and the upgrade is even possible. However, after the installation we are running a fully FLOSS system in terms of software and firmware. We all (almost without exception) are running proprietary firmware whether we like it or not. Even after carefully selecting graphics and network hardware with FLOSS firmware options, we still haven't escaped from non-free firmware. Other peripherals contain firmware too: each keyboard, each disk (SSDs and spinning rust alike). Even the USB memory stick that holds the Debian installation image contains a microcontroller, and hence firmware that runs on it.
  1. Much of this firmware cannot even be updated.
  2. Some can be updated, but it is stored in flash ROM and the hardware vendor has defeated all programming methods (possibly circumventable with a hardware mod).
  3. Some of it can be updated but requires external device programmers (and often the programming connections are a series of test points dotted around the board and not on a header in order to make programming as difficult as possible).
  4. Sometimes the firmware can be updated from within the host operating system (i.e. Debian).
  5. Sometimes, as Steve pointed out in his post, the hardware vendor has enough firmware on a peripheral to perform basic functions, perhaps enough to install the OS, but requires additional firmware to enable specific features (e.g. higher screen resolutions, hardware-accelerated functions, etc.).
  6. Finally, some vendors don't even bother with any non-volatile storage beyond a basic boot loader, and firmware must be loaded before the device can be used in any mode.
What about the motherboard? If we are lucky we might be able to run a FLOSS implementation of the UEFI subsystem (edk2/TianoCore, for example); indeed the non-AMD64/i386 platforms, based around ARM and MIPS architectures, are often the most free when it comes to firmware. What about the microcode on the processor? Personally I wasn't aware that this was updatable firmware until the Spectre and Meltdown classes of vulnerabilities arose a few years back. So back to Debian images including non-free firmware. This is specifically to address the last two use cases mentioned above, i.e. where firmware needs to be loaded to achieve a minimum functioning of a device, although it could also include motherboard support and microcode as well. As far as I can tell the proposal exists for several reasons: #1 Because some freely distributable firmware is required for more and more devices in order to install Debian at all, or because whilst Debian can be installed, a desktop environment cannot be started or does not fully function. #2 Because frankly it is less work to produce, test and maintain fewer installation images. As someone who performs tests on our images, this clearly gets my vote :-) And perhaps most important of all: #3 Because our least experienced users, and new users, will download an official image and give up if things don't "just work"(TM). Steve's proposal option 5 would address these issues and I fully support it. I would love to see separate repositories for firmware and firmware-non-free. Additionally, to accompany firmware-non-free, I would like to have information on what the firmware actually does. Can I run my hardware without it? What function(s) are limited without the firmware? Better yet, is there a FLOSS equivalent that I can load instead? Is this something that we can present in the Debian installer? I would love not to require non-free firmware, but if I can't avoid it, I would love it if d-i would enable a user to make an informed choice as to what, if any, firmware is installed. Should we be requesting (requiring?) this information for any non-free firmware image that we carry in the archive? Finally, let's consider firmware in the wider general case, not just the case where we need to load firmware from within Debian on each and every boot. Personally I am annoyed whenever a hardware manufacturer has gone out of their way to prevent firmware updates. Let's face it, software contains bugs, and we can assume that the software making up a firmware image will as well. Critical (security) vulnerabilities found in firmware, especially firmware that runs on the same processor(s) as the OS, can impact the wider system, not just the device itself. This means that, without updatable firmware, the hardware itself should be withdrawn from use whilst it would otherwise still function. By preventing firmware updates, vendors are forcing early obsolescence in the hardware they sell; perhaps good for their bottom line, but certainly no good for users or the environment. Here I can practice what I preach. As an Electronic Engineer / Systems architect I have been beating the drum for in-system updatable firmware for ALL programmable devices in a system, be it a simple peripheral or a deeply embedded system. I can honestly say that over the last 20 years (yes, I have been banging this particular drum for that long) I have had 100% success in arguing this case commercially. Having device programmers in R&D departments is one thing, but that is additional cost for production, and field service.
Needing custom programming headers or even a bed-of-nails fixture to connect your target device to a programmer is more trouble than it is worth. Finally, the ability to update firmware in the field means that you can launch your product on schedule, make a sale and ship to a customer even if the first thing that you need to do is download an update. Offering that to any project manager will make you very popular indeed. So what if this firmware is non-free? As long as the firmware resides in non-volatile media without needing the OS to interact with it, we as a project don't need to carry it in our archives. And we as principled individuals can vote with our feet and wallets by choosing to purchase devices that have free firmware. But where that isn't an option, I'll take updatable but non-free firmware over non-free firmware that cannot be updated, any day of the week. Sure, the manufacturer can choose to no longer support the firmware, and it is shocking how soon this happens; often in the consumer market, the manufacturer has withdrawn support for a product before it even reaches the end user (in which case we should boycott that manufacturer in future until they either change their ways or go bust). But again, if firmware can be updated in-system, that would at least allow the possibility of open firmware to arise. Indeed, the only commercial cases I have seen argued against updatable firmware have been either for DRM (in which case, good, let's get rid of both) or for RF licence compliance, and even then it is tenuous, because in that case the manufacturer wants in-system programming for its own use right up until a device is shipped out the door, typically achieved by blowing one-time-programmable fuse links.

14 April 2022

Reproducible Builds: Supporter spotlight: Amateur Radio Digital Communications (ARDC)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about the project and the work that we do. This is the third instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation. Today, however, we'll be talking with Dan Romanchik, Communications Manager at Amateur Radio Digital Communications (ARDC).
Chris Lamb: Hey Dan, it's nice to meet you! So, for someone who has not heard of Amateur Radio Digital Communications (ARDC) before, could you tell us what your foundation is about? Dan: Sure! ARDC's mission is to support, promote, and enhance experimentation, education, development, open access, and innovation in amateur radio, digital communication, and information and communication science and technology. We fulfill that mission in two ways:
  1. We administer an allocation of IP addresses that we call 44Net. These IP addresses (in the 44.0.0.0/8 IP range) can only be used for amateur radio applications and experimentation.
  2. We make grants to organizations whose work aligns with our mission. This includes amateur radio clubs as well as other amateur radio-related organizations and activities. Additionally, we support scholarship programs for people who either have an amateur radio license or are pursuing careers in technology, STEM education, and open-source software development projects that fit our mission, such as Reproducible Builds.

Chris: How might you relate the importance of amateur radio and similar technologies to someone who is non-technical? Dan: Amateur radio is important in a number of ways. First of all, amateur radio is a public service. In fact, the legal name for amateur radio is the Amateur Radio Service, and one of the primary reasons that amateur radio exists is to provide emergency and public service communications. All over the world, amateur radio operators are prepared to step up and provide emergency communications when disaster strikes, or to provide communications for events such as marathons or bicycle tours. Second, amateur radio is important because it helps advance the state of the art. By experimenting with different circuits and communications techniques, amateurs have made significant contributions to communications science and technology. Third, amateur radio plays a big part in technical education. It enables students to experiment with wireless technologies and electronics in ways that aren't possible without a license. Amateur radio has historically been a gateway for young people interested in pursuing a career in engineering or science, such as network or electrical engineering. Fourth (and this point is a little less obvious than the first three), amateur radio is a way to enhance international goodwill and community. Radio knows no boundaries, of course, and amateurs are therefore ambassadors for their country, reaching out to all around the world. Beyond amateur radio, ARDC also supports and promotes research and innovation in the broader field of digital communication and information and communication science and technology. Information and communication technology plays a big part in our lives, be it for business, education, or personal communications. For example, think of the impact that cell phones have had on our culture. The challenge is that much of this work is proprietary and owned by large corporations. By focusing on open source work in this area, we help open the door to innovation outside of the corporate landscape, which is important to overall technological resiliency.
Chris: Could you briefly outline the history of ARDC? Dan: Nearly forty years ago, a group of visionary hams saw the future possibilities of what was to become the internet and requested an address allocation from the Internet Assigned Numbers Authority (IANA). That allocation included more than sixteen million IPv4 addresses, 44.0.0.0 through 44.255.255.255. These addresses have been used exclusively for amateur radio applications and experimentation with digital communications techniques ever since. In 2011, the informal group of hams administering these addresses incorporated as a nonprofit corporation, Amateur Radio Digital Communications (ARDC). ARDC is recognized by IANA, ARIN and the other Internet Registries as the sole owner of these addresses, which are also known as AMPRNet or 44Net. Over the years, ARDC has assigned addresses to thousands of hams on a long-term loan (essentially acting as a zero-cost lease), allowing them to experiment with digital communications technology. Using these IP addresses, hams have carried out some very interesting and worthwhile research projects and developed practical applications, including TCP/IP connectivity via radio links, digital voice, telemetry and repeater linking. Even so, the amateur radio community never used much more than half the available addresses, and today, less than one third of the address space is assigned and in use. This is one of the reasons that ARDC, in 2019, decided to sell one quarter of the address space (or approximately 4 million IP addresses) and establish an endowment with the proceeds. This endowment now funds ARDC's suite of grants, including scholarships, research projects, and of course amateur radio projects. Initially, ARDC was restricted to awarding grants to organizations in the United States, but it is now able to provide funds to organizations around the world.
Chris: How does the Reproducible Builds effort help ARDC achieve its goals? Dan: Our aspirational goals include: We think that the Reproducible Builds efforts in helping to ensure the safety and security of open source software closely align with those goals.
Chris: Are there any specific success stories that ARDC is particularly proud of? Dan: We are really proud of our grant to the Hoopa Valley Tribe in California. With a population of nearly 2,100, their reservation is the largest in California. Like everywhere else, the COVID-19 pandemic hit the reservation hard, and the lack of broadband internet access meant that 130 children on the reservation were unable to attend school remotely. The ARDC grant allowed the tribe to address the immediate broadband needs in the Hoopa Valley, as well as encourage the use of amateur radio and other two-way communications on the reservation. The first nation was able to deploy a network that provides broadband access to approximately 90% of the residents in the valley. And, in addition to bringing remote education to those 130 children, the Hoopa now use the network for remote medical monitoring and consultation, adult education, and other applications. Other successes include our grants to:
Chris: ARDC supports a number of other existing projects and initiatives, not all of them in the open source world. How much do you feel being a part of the broader free culture movement helps you achieve your aims? Dan: In general, we find it challenging that most digital communications technology is proprietary and closed-source. It's part of our mission to fund open source alternatives. Without them, we are solely reliant, as a society, on corporate interests for our digital communication needs. It makes us vulnerable and it puts us at risk of increased surveillance. Thus, ARDC supports open source software wherever possible, and our grantees must make a commitment to share their work under an open source license or otherwise make it as freely available as possible.
Chris: Thanks so much for taking the time to talk to us today. Now, if someone wanted to know more about ARDC or to get involved, where might they go to look? Dan: To learn more about ARDC in general, please visit our website at https://www.ampr.org. To learn more about 44Net, go to https://wiki.ampr.org/wiki/Main_Page. And, finally, to learn more about our grants program, go to https://www.ampr.org/apply/

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

5 April 2022

Kees Cook: security things in Linux v5.10

Previously: v5.9 Linux v5.10 was released in December, 2020. Here's my summary of various security things that I found interesting: AMD SEV-ES
While guest VM memory encryption with AMD SEV has been supported for a while, Joerg Roedel, Thomas Lendacky, and others added register state encryption (SEV-ES). This means it's even harder for a VM host to reconstruct a guest VM's state. x86 static calls
Josh Poimboeuf and Peter Zijlstra implemented static calls for x86, which operate very similarly to the static branch infrastructure in the kernel. With static branches, an if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the "branch"). All these principles apply to static calls as well, but they're for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.
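To make the mechanism concrete, here is a rough sketch of the kernel's static-call API; the my_ops name and the handler functions are invented for illustration:

    /* Sketch only: my_ops and the handlers are made-up examples. */
    #include <linux/static_call.h>

    static int handler_default(int x) { return x; }
    DEFINE_STATIC_CALL(my_ops, handler_default);

    int do_work(int x)
    {
        /* Emitted as a direct call instruction, no function-pointer load. */
        return static_call(my_ops)(x);
    }

    static int handler_fast(int x) { return x * 2; }

    void switch_handler(void)
    {
        /* The kernel patches the call site in place to target the new function. */
        static_call_update(my_ops, handler_fast);
    }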
network RNG improvements In an effort to improve the pseudo-random number generator used by the network subsystem (for things like port numbers and packet sequence numbers), Linux's home-grown pRNG has been replaced by the SipHash round function, and perturbed by (hopefully) hard-to-predict internal kernel states. This should make it very hard to brute force the internal state of the pRNG and make predictions about future random numbers just from examining network traffic. Similarly, ICMP's global rate limiter was adjusted to avoid leaking details of network state, as a start to fixing recent DNS Cache Poisoning attacks. SafeSetID handles GID
Thomas Cedeno improved the SafeSetID LSM to handle group IDs (which required teaching the kernel about which syscalls were actually performing setgid.) Like the earlier setuid policy, this lets the system owner define an explicit list of allowed group ID transitions under CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn't complete yet, though, since handling setgroups() is still needed.) improve kernel's internal checking of file contents
The kernel provides LSMs (like the Integrity subsystem) with details about files as they're loaded. (For example, loading modules, new kernel images for kexec, and firmware.) There wasn't very good coverage for cases where the contents were coming from things that weren't files. To deal with this, new hooks were added that allow the LSMs to introspect the contents directly, and to do partial reads. This will give the LSMs much finer-grained visibility into these kinds of operations. set_fs removal continues
With the earlier work landed to free the core kernel code from set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. Subsequently, he then removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of kernel address limit attacks that only needed to corrupt a single value in struct thread_info. sysfs_emit() replaces sprintf() in /sys
Joe Perches tackled one of the most common bug classes with sprintf() and snprintf() in /sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers.
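For a sense of the change, a /sys show handler using the new helper might look like this (a sketch; foo_show and foo_value are invented names):

    /* Sketch: a typical sysfs show handler after the conversion. */
    static int foo_value;

    static ssize_t foo_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
    {
        /* sysfs_emit() knows buf is a PAGE_SIZE sysfs buffer and will not
         * overflow it, unlike a bare sprintf(buf, ...). */
        return sysfs_emit(buf, "%d\n", foo_value);
    }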
nosymfollow mount option Mattias Nissler and Ross Zwisler implemented the nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts. (i.e. for when /proc/sys/fs/protected_symlinks isn't a big enough hammer.) Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector. ARMv8.5 Memory Tagging Extension support
Vincenzo Frascino added support to arm64 for the coming Memory Tagging Extension, which will be available for ARMv8.5 and later chips. It provides 4 bits of tags (covering 16-byte spans of the address space). This is enough to deterministically eliminate all linear heap buffer overflow flaws (one tag for "free", and then rotating even and odd values for neighboring allocations), which is probably one of the most common bugs being currently exploited. It also makes use-after-free and over/under indexing much more difficult for attackers (but still possible if the target's tag bits can be exposed). Maybe some day we can switch to 128 bit virtual memory addresses and have fully versioned allocations. But for now, 16 tag values is better than none, though we do still need to wait for anyone to actually be shipping ARMv8.5 hardware. fixes for flaws found by UBSAN
The work to make UBSAN generally usable under syzkaller continues to bear fruit, with various fixes all over the kernel for stuff like shift-out-of-bounds, divide-by-zero, and integer overflow. Seeing these kinds of patches land reinforces the rationale of shifting the burden of these kinds of checks to the toolchain: these run-time bugs continue to pop up. flexible array conversions
The work on flexible array conversions continues. Gustavo A. R. Silva and others continued to grind on the conversions, getting the kernel ever closer to being able to enable the -Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage. That's it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) License.

12 March 2022

Petter Reinholdtsen: Publish Hargassner wood chip boiler state to MQTT

Recently I had a look at a Hargassner wood chip boiler, and what kind of free software can be used to monitor and control it. The boiler can be connected to some cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the information gathered to the cloud. I discovered the boiler controller gets an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides return an error code. If I am to believe the various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler types and software versions on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of the numbers, but there are a lot left to understand. I've located several temperature measurements and hours-running values, as well as oxygen measurements and counters. I decided to write a simple parser in Python for the values I have figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus, to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on GitLab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not really expect many free software developers to have got their hands on such a unit to experiment with, but it would be fun if others too find this project useful. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
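To give a flavour of the approach, here is a minimal sketch (not the actual hargassner2mqtt code; the boiler address, topic layout and use of paho-mqtt are my assumptions):

    # Read one status line from the boiler and publish the raw values to MQTT.
    import socket

    import paho.mqtt.client as mqtt

    BOILER_HOST = "192.0.2.10"  # assumed address of the boiler controller
    MQTT_BROKER = "localhost"   # assumed MQTT broker

    def read_status_line(host, port=23):
        """Fetch one line of space-separated numbers from the boiler."""
        with socket.create_connection((host, port), timeout=10) as sock:
            return sock.makefile().readline().split()

    client = mqtt.Client()
    client.connect(MQTT_BROKER)
    for index, value in enumerate(read_status_line(BOILER_HOST)):
        # Fields whose meaning is still unknown are published by position.
        client.publish(f"hargassner/raw/{index}", value)
    client.disconnect()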

13 February 2022

Gunnar Wolf: Got to boot a RPi Zero 2 W with Debian

About a month ago, I got tired of waiting for the newest member of the Raspberry product lineup to be sold in Mexico, and I bought it from a Chinese reseller through a big online shopping platform. I paid quite a premium (~US$85 instead of the advertised US$15), and got it delivered ten days later. Anyway, it's known this machine does not yet boot mainline Linux. The vast majority of ARM systems require the bootloader to load a Device Tree file, presenting the hardware characteristics map. And while the RPi Zero 2 W (hey, what an awful and confusing naming scheme they chose!) is mostly similar to a RPi3B+, it is not quite the same thing: a kernel with the RPi3B+'s device tree will refuse to boot. Anyway, I started digging, and found that some days ago Stephan Wahren sent a patch to the linux-arm-kernel mailing list with a matching device tree. Read the patch! It's quite simple to read (what is harder is to know where each declaration should go, if you want to write your own, of course). It basically includes all the basic details for the main chip in the RPi3 family (BCM2837), also pulls in the declarations from the BCM2836 present in the RPi2, and adds the necessary bits for the USB OTG connection and the WiFi and Bluetooth declarations. It registers the model name as Raspberry Pi Zero 2 W, which you can easily see in the following photo, informs the kernel it has 512MB RAM, and... Well, really, it's an easy device tree to read, don't be shy! So, I booted my RPi 3B+ with a freshly downloaded Bookworm image, installed and unpacked linux-source-5.15, applied Stephan's patch, and added the following for the DTB to be generated in the arm64 tree as well:
--- /dev/null   2022-01-26 23:35:40.747999998 +0000
+++ arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dts       2022-02-13 06:28:29.968429953 +0000
@@ -0,0 +1 @@
+#include "arm/bcm2837-rpi-zero-2-w.dts"
Then I ran a simple make dtbs, and... it failed, because bcm283x-rpi-wifi-bt.dtsi is not yet in the kernel. OK, no worries: getting wireless to work is a later step. I commented out the lines causing the conflict (10, 33-35, 134-136), and:
root@rpi3-20220212:/usr/src/linux-source-5.15# make dtbs
  DTC     arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dtb
Great! I just copied that generated file over to /boot/firmware/, moved the SD over to my RPiZ2W, and behold! It boots! When I bragged about it in #debian-raspberrypi, steev suggested I pull in the WiFi patch, which has also been submitted (but not yet accepted) for kernel inclusion. I did so, uncommented the lines I had modified, and built again. It builds correctly, and again I copied the DTB over. It still does not find the WiFi; dmesg still complains about a missing bit of firmware (failed to load brcm/brcmfmac43430b0-sdio.raspberrypi,model-zero-2-w.bin). Steev pointed out it can be downloaded from RPi Distro's GitHub page, but I called it a night and didn't pursue it any further ;-) So I understand this post is still a far cry from saying "our images properly boot under a RPi 0 2 W", but we will get there.

23 January 2022

Louis-Philippe V ronneau: Goodbye Nexus 5

I've blogged a few times already about my Nexus 5, the Android device I have/had been using for 8 years. Sadly, it died a few weeks ago, when the WiFi chip stopped working. I could probably have attempted a mainboard swap, but at this point, getting a new device seemed like the best choice. In a world where most Android devices are EOL after less than 3 years, it is amazing I was able to keep this device for so long, always running the latest Android version with the latest security patch. The Nexus 5 originally shipped with Android 4.4 and when it broke, I was running Android 11, with the November security patch! I'm very grateful to the FOSS Android community that made this possible, especially the LineageOS community. I've replaced my Nexus 5 with a used Pixel 3a, mostly because of the similar form factor, relatively affordable price and the presence of a headphone jack. Google also makes flashing a custom ROM easy, although I had more trouble with this than I first expected. The first Pixel 3a I bought on eBay was a scam: I ordered an "Open Box" phone and it arrived all scratched1 and with a broken rear camera. The second one I got (from the Amazon Renewed program) arrived in perfect condition, but happened to be a Verizon model. As I found out, Verizon locks the bootloader on their phones, making it impossible to install LineageOS2. The vendor was kind enough to let me return it. As they say, third time's the charm. This time around, I explicitly bought a phone on eBay listed with an unlocked bootloader. I'm very satisfied with my purchase, but all in all, dealing with all the returns and the shipping was exhausting. Hopefully this phone will last as long as my Nexus 5!

  1. There was literally a whole layer missing at the back, as if someone had sanded the phone...
  2. Apparently, an "Unlocked phone" means it is "SIM unlocked", i.e. you can use it with any carrier. What I should have been looking for is a "Factory Unlocked" phone, one where the bootloader isn't locked :L

17 January 2022

Matthew Garrett: Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
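The measured half is easy to model. Here is a toy sketch in Python of TPM-style PCR extension (illustrative only; a real TPM does this in hardware, one register per PCR bank):
import hashlib

def extend(pcr, component):
    """PCR extend: the register becomes H(old value || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)                        # PCRs start out zeroed
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

tampered = bytes(32)
for component in [b"firmware", b"evil bootloader", b"kernel"]:
    tampered = extend(tampered, component)

assert pcr != tampered   # any substituted component changes the final record
Because each extend hashes over the previous value, a substituted component anywhere in the chain produces a different final record, which is what sealing and attestation check.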

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


9 January 2022

Matthew Garrett: Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on pages 12 and 13 of this slidedeck, suggests that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)


4 January 2022

Jonathan McDowell: Upgrading from a CC2531 to a CC2538 Zigbee coordinator

Previously I set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4 gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It's also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it's ARM based, which I'm hopeful means I might be able to build some firmware myself using a standard GCC toolchain. For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it's possible to do USB directly with the CC2538, my board doesn't have those bits, so going the external USB UART route is easier. The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.
OpenOCD config
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_mode normal
buspirate_vreg 1
buspirate_pullup 0
transport select jtag
source [find target/cc2538.cfg]
Steps to erase
$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> mww 0x400D300C 0x7F800
> mww 0x400D3008 0x0205
> shutdown
shutdown command invoked
Connection closed by foreign host.
At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:
$ git clone https://github.com/JelmerT/cc2538-bsl.git
$ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
Opening port /dev/ttyUSB1, baud 500000
Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
Firmware file: Intel Hex
Connecting to target...
CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
Primary IEEE Address: 00:12:4B:00:22:22:22:22
    Performing mass erase
Erasing 524288 bytes starting at address 0x00200000
    Erase done
Writing 524256 bytes starting at address 0x00200000
Write 232 bytes at 0x0027FEF88
    Write done
Verifying by comparing CRC32 calculations.
    Verified (match: 0x74f2b0a1)
I then wanted to migrate from the old device to the new one without having to re-pair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):
python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device's MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
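The JSON edit itself is small enough to script. A sketch, assuming the coordinator_ieee key sits at the top level of the zigpy backup format:
import json

# Rewrite the coordinator IEEE address so the restored network uses the
# CC2538's own MAC (as printed by cc2538-bsl during flashing above).
with open("cc2531-network.json") as f:
    backup = json.load(f)

backup["coordinator_ieee"] = "00:12:4b:00:22:22:22:22"

with open("cc2538-network.json", "w") as f:
    json.dump(backup, f, indent=2)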
The old CC2531 needed to be unplugged first, otherwise I got a RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed. error. After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up sqlite to modify the Zigbee database:
$ sqlite3 /etc/homeassistant/zigbee.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> .quit
So far it all seems a bit happier than with the CC2531; I've been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved. (This post is another in the set of things I should write down so I can just grep my own website when I forget what I did to do foo.)

3 October 2021

Louis-Philippe V ronneau: ANC is not for me

Active noise cancellation (ANC) has been all the rage lately in the headphones and in-ear monitors market. It seems after Apple got heavily praised for their AirPods Pro, every somewhat serious electronics manufacturer released their own design incorporating this technology. The first headphones with ANC I remember trying on (in the early 2010s) were the Bose QuietComfort 15. Although the concept did work (they indeed cancelled some sounds), they weren't amazing and did a great job of convincing me ANC was some weird fad for people who flew often.
The Sony WH-1000X M3 folded in their case
As the years passed, chip size decreased, battery capacity improved and machine learning blossomed: truly a perfect storm for the wireless ANC headphones market. I had mostly stayed a sceptic of this tech until recently, when a kind friend offered to let me try a pair of Sony WH-1000X M3. Having tested them thoroughly, I have to say I'm really tempted to buy them from him, as they truly are fantastic headphones1. They are very light, comfortable, work without a proprietary app and sound very good with the ANC on2, if a little bass-heavy for my taste3. The ANC itself is truly astounding and is leaps and bounds beyond what was available five years ago. It still isn't perfect and doesn't cancel ALL sounds, but it transforms the low hum of the subway I find myself sitting in too often these days into a light *swoosh*. When you turn the ANC on, HVAC simply disappears. Most impressive to me is the way they completely cancel the dreaded sound of your footsteps resonating in your headphones when you walk with them.
My old pair of Sennheiser HD 280 Pro, with aftermarket sheepskin earpads
I won't be keeping them though. Whilst I really like what Sony has achieved here, I've grown to understand ANC simply isn't for me. Some of the drawbacks of ANC somewhat bother me: the ear pressure it creates is tolerable, but it is an additional energy drain over long periods of time and eventually gives me headaches. I've also found ANC accentuates the motion sickness I suffer from, probably because it messes with some part of the inner ear balance system. Most of all, I found that it didn't provide noticeable improvements over good passive noise cancellation solutions, at least in terms of how high I have to turn the volume up to hear music or podcasts clearly. The human brain works in mysterious ways, and it seems ANC cancelling one class of noises (low hums, constant noises, etc.) makes other noises much more noticeable. People talking or bursty high-pitched noises bothered me much more with ANC on than off. So for now, I'll keep using my trusty Sennheiser HD 280 Pro4 at work and good in-ear monitors with Comply foam tips on the go.

  1. This blog post certainly doesn't aim to be a comprehensive review of these headphones. See Zeos' review if you want something more in-depth.
  2. Like most ANC headphones, they don't sound as good when used passively through the 3.5mm port, but that's just a testament to the great job Sony did of tuning the DSP.
  3. Easily fixed using an EQ.
  4. Retrofitted with aftermarket sheepskin earpads, they provide more than 32 dB of passive noise reduction.

28 September 2021

Jonathan McDowell: Adding Zigbee to my home automation

SonOff Zigbee Door Sensor
My home automation setup has been fairly static recently; it does what we need and generally works fine. One area I think could be better is controlling it; we have access to Home Assistant on our phones, and the Alexa downstairs can control things, but there are no smart assistants upstairs and sometimes it would be nice to just push a button to turn on the light rather than having to get my phone out. The fact that the UK generally doesn't have a neutral wire in wall switches means looking at something battery powered, which makes wifi based devices a poor choice; it's necessary to look at something lower power like Zigbee or Z-Wave. Zigbee seems like the better choice; it's a more open standard and there are generally more devices easily available from what I've seen (e.g. Philips Hue and IKEA TRÅDFRI). So I bought a couple of Xiaomi Mi Smart Home Wireless Switches and a CC2530 module, and then ignored it for the best part of a year. Finally I got around to flashing the Z-Stack firmware that Koen Kanters kindly provides. (Insert rant about hardware manufacturers that require pay-for tool chains. The CC2530 is even worse because it's 8051 based, so SDCC should be able to compile for it, but the TI Zigbee libraries are only available in a format suitable for IAR's Embedded Workbench.) Flashing the CC2530 is a bit of a faff. I ended up using the CCLib fork by Stephan Hadinger which supports the ESP8266. The nice thing about the CC2530 module is it has 2.54mm pitch pins, so it's nice and easy to jumper up. It then needs a USB/serial dongle to connect it up to a suitable machine, where I ran Zigbee2MQTT. This scares me a bit, because it's a bunch of node.js pulling in a chunk of stuff off npm. On the flip side, it Just Works and I was able to pair the Xiaomi button with the device and see MQTT messages that I could then use with Home Assistant. So of course I tore down that setup and went and ordered a CC2531 (the variant with USB as part of the chip). The idea here was that my test setup was upstairs with my laptop, and I wanted something hooked up in a more permanent fashion. Once the CC2531 arrived I got distracted writing support for the Desk Viking to support CCLib (and modified it a bit for Python 3 and some speed ups). I flashed the dongle with the Z-Stack Home 1.2 (default) firmware, and plugged it into the house server. At this point I more closely investigated what Home Assistant had to offer in terms of Zigbee integration. It turns out the ZHA integration has support for the ZNP protocol that the TI devices speak (I'm reasonably sure it didn't when I first looked some time ago), so that seemed like a better option than adding the MQTT layer in the middle. I hit some complexity passing the dongle (which turns up as /dev/ttyACM0) through to the Home Assistant container. First I needed an override file in /etc/systemd/nspawn/hass.nspawn:
[Files]
Bind=/dev/ttyACM0:/dev/zigbee
[Network]
VirtualEthernet=true
(I'm not clear why the VirtualEthernet needed to exist; without it networking broke entirely, but I couldn't see why it worked with no override file.) A udev rule on the host to change the ownership of the device file so the root user and dialout group in the container could see it was also necessary, so into /etc/udev/rules.d/70-persistent-serial.rules went:
# Zigbee for HASS
SUBSYSTEM=="tty", ATTRS idVendor =="0451", ATTRS idProduct =="16a8", SYMLINK+="zigbee", \
	MODE="660", OWNER="1321926676", GROUP="1321926676"
In the container itself I had to switch PrivateDevices=true to PrivateDevices=false in the home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked down things). Finally I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5, due to upstream dropping support for Python 3.7 and my not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn't have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out. There was another slight problem: range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router, so it provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept the overhead lights uncontrolled - we don't use them day to day and it provides a nice fallback if the home automation has issues. Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs. Now I've got a Zigbee setup there are a few more things I'm thinking of adding, where wifi isn't an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn't suitable for my needs, as I'll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable...

6 September 2021

Vincent Bernat: Switching to the i3 window manager

I have been using the awesome window manager for 10 years. It is a tiling window manager, configurable and extendable with the Lua language. Using a general-purpose programming language to configure every aspect is a double-edged sword. Due to laziness and the apparent difficulty of adapting my configuration (about 3,000 lines) to newer releases, I was stuck with the 3.4 version, whose last release is from 2013. It was time for a rewrite. Instead, I have switched to the i3 window manager, lured by the possibility of migrating to Wayland and Sway later with minimal pain. Using an embedded interpreter for configuration is not as important to me as it was in the past: it brings both complexity and brittleness.
i3 dual screen setup
Dual screen desktop running i3, Emacs, some terminals, including a Quake console, Firefox, Polybar as the status bar, and Dunst as the notification daemon.
The window manager is only one part of a desktop environment. There are several options for the other components. I am also introducing them in this post.

i3: the window manager i3 aims to be a minimal tiling window manager. Its documentation can be read from top to bottom in less than an hour. i3 organizes windows in a tree. Each non-leaf node contains one or several windows and has an orientation and a layout. This information arbitrates the window positions. i3 features three layouts: split, stacking, and tabbed. They are demonstrated in the screenshot below:
Example of layouts
Demonstration of the layouts available in i3. The main container is split horizontally. The first child is split vertically. The second one is tabbed. The last one is stacking.
Tree representation of the previous screenshot
Tree representation of the previous screenshot.
Most of the other tiling window managers, including the awesome window manager, use predefined layouts. They usually feature a large area for the main window and another area divided among the remaining windows. These layouts can be tuned a bit, but you mostly stick to a couple of them. When a new window is added, the behavior is quite predictable. Moreover, you can cycle through the various windows without thinking too much as they are ordered. Because i3 is more flexible, with its ability to build any layout on the fly, it can feel quite overwhelming, as you need to visualize the tree in your head. At first, it is not unusual to find yourself with a complex tree with many useless nested containers. Moreover, you have to navigate windows using directions. It takes some time to get used to. I set up a split layout for Emacs and a few terminals, but most of the other workspaces are using a tabbed layout. I don't use the stacking layout. You can find many scripts trying to emulate other tiling window managers, but I tried to keep my setup free of such attempts and give myself a chance to get familiar with i3's own way of doing things. i3 can also save and restore layouts, which is quite a powerful feature. My configuration is quite similar to the default one and has less than 200 lines.
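The tree is directly exposed over i3's IPC socket, so it is easy to inspect. A small sketch using the i3ipc-python library (mentioned below for the i3 companion):
import i3ipc   # pip install i3ipc

def walk(node, depth=0):
    """Print the i3 container tree: each node's layout and window name."""
    print("  " * depth + node.layout + " " + (node.name or "-"))
    for child in node.nodes:
        walk(child, depth + 1)

walk(i3ipc.Connection().get_tree())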

i3 companion: the missing bits i3's philosophy is to keep a minimal core and let users implement missing features using the IPC protocol:
Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible. Introduction to the i3 window manager
While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab. Instead of maintaining a script for each feature, I have centralized everything into a single Python process, i3-companion, using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:
make a workspace exclusive to an application
When a workspace contains Emacs or Firefox, I would like other applications to move to another workspace, except for the terminal which is allowed to intrude into any workspace. The workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
implement a Quake console
The quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+`. This is implemented as a scratchpad window.
back and forth workspace switching on the same output
With the workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty workspace, you have to locate a free slot and use workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
restart some services on output change
When adding or removing an output, some actions need to be executed: refresh the wallpaper, restart some components unable to adapt their configuration on their own, etc. i3 triggers an event for this purpose. The output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
I will detail the other features as this post goes on. On the technical side, each function is decorated with the events it should react to:
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks if this is a nop command.
bindsym $mod+Tab nop "previous-workspace"
There are other decorators to avoid code duplication: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think it is worth a read, as I am quite happy with the result.
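As an illustration, a coalescing decorator along these lines can be built on asyncio task cancellation (a minimal sketch, not the actual i3-companion code):
import asyncio
import functools

def debounce(delay):
    """Coalesce bursts of calls: only run once things have been quiet
    for delay seconds. Must be used from a running event loop."""
    def decorator(fn):
        pending = None
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal pending
            if pending is not None and not pending.done():
                pending.cancel()           # a newer call supersedes the old one
            async def delayed():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)
            pending = asyncio.ensure_future(delayed())
        return wrapper
    return decorator

@debounce(0.2)
async def output_update(i3, event):
    print("outputs changed (once per burst of events)")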

dunst: the notification daemon Unlike the awesome window manager, i3 does not come with a built-in notification system. Dunst is a lightweight notification daemon. I am running a modified version with HiDPI support for X11 and recursive icon lookup. The i3 companion has a helper function, notify(), to send notifications using DBus. container_info() and workspace_info() use it to display information about the container or the tree for a workspace.
Notification showing i3 tree for a workspace
Notification showing i3 s tree for a workspace

polybar: the status bar i3 bundles i3bar, a versatile status bar, but I have opted for Polybar. A wrapper script runs one instance for each monitor. The first module is the built-in support for i3 workspaces. To not have to remember which application is running in a workspace, the i3 companion renames workspaces to include an icon for each application. This is done in the workspace_rename() function. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome but it looks great.
i3 workspaces in Polybar
i3 workspaces in Polybar
For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar's wrapper script generates the list of filesystems to monitor, and they only get displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.
Various modules for Polybar
Polybar displaying various information: CPU usage, memory usage, screen brightness, battery status, Bluetooth status (with a connected headset), network status (connected to a wireless network and to a VPN), notification status, and speaker volume.
For Bluetooth, network, and notification statuses, I am using Polybar's ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.
[module/network]
type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1
It can be updated with polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket. The i3 companion reacts to DBus signals to update the Bluetooth and network icons. The @on() decorator accepts a DBusSignal() object:
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa sv as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
The middle of the bar is occupied by the date and a weather forecast. The latter also uses the IPC mechanism, but the source is a Python script triggered by a timer.
Date and weather in Polybar
Current date and weather forecast for the day in Polybar. The data is retrieved with the OpenWeather API.
I don't use the system tray integrated with Polybar. The embedded icons usually look horrible and they all behave differently. A few years back, Gnome removed the system tray. Most of the problems are fixed by the DBus-based Status Notifier Item protocol, also known as Application Indicators or Ayatana Indicators for GNOME. However, Polybar does not support this protocol. In the i3 companion, the implementation of the Bluetooth and network icons, including displaying notifications on change, takes about 200 lines. I got to learn a bit about how DBus works, and I get exactly the info I want.

picom: the compositor I like having slightly transparent backgrounds for terminals and reducing the opacity of unfocused windows. This requires a compositor.1 picom is a lightweight compositor. It works well for me, but it may need some tweaking depending on your graphics card.2 Unlike the awesome window manager, i3 does not handle transparency, so the compositor needs to decide by itself the opacity of each window. Check my configuration for details.

systemd: the service manager I use systemd to start i3 and the various services around it. My xsession script only sets some environment variables and lets systemd handle everything else. Have a look at this article from Michał Góral for the rationale. Notably, each component can easily be restarted, and their logs are not mangled inside the ~/.xsession-errors file.3 I am using a two-stage setup: i3.service depends on xsession.target to start services before i3:
[Unit]
Description=X session
BindsTo=graphical-session.target
Wants=autorandr.service
Wants=dunst.socket
Wants=inputplug.service
Wants=picom.service
Wants=pulseaudio.socket
Wants=policykit-agent.service
Wants=redshift.service
Wants=spotify-clean.timer
Wants=ssh-agent.service
Wants=xiccd.service
Wants=xsettingsd.service
Wants=xss-lock.service
Then, i3 executes the second stage by invoking the i3-session.target:
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
Have a look at my configuration files for more details.

rofi: the application launcher Rofi is an application launcher. Its appearance can be customized through a CSS-like language and it comes with several themes. Have a look at my configuration for mine.
Rofi as an application launcher
Rofi as an application launcher
It can also act as a generic menu application. I have a script to control a media player and another one to select the wifi network. It is quite a flexible application.
Rofi as a wifi network selector
Rofi to select a wireless network

xss-lock and i3lock: the screen locker i3lock is a simple screen locker. xss-lock invokes it reliably on inactivity or before a system suspend. For inactivity, it uses the XScreenSaver events. The delay is configured using the xset s command. The locker can be invoked immediately with xset s activate. X11 applications know how to prevent the screen saver from running. I have also developed a small dimmer application that is executed 20 seconds before the locker to give me a chance to move the mouse if I am not away.4 Have a look at my configuration script.
Demonstration of xss-lock, xss-dimmer and i3lock with a 4× speedup.

The remaining components
  • autorandr is a tool to detect the connected displays, match them against a set of profiles, and configure them with xrandr.
  • inputplug executes a script for each new mouse and keyboard plugged in. This is quite useful to load the appropriate keyboard map. See my configuration.
  • xsettingsd provides settings to X11 applications, not unlike xrdb, but it notifies applications of changes. The main use is to configure the Gtk and DPI settings. See my article on HiDPI support on Linux with X11.
  • Redshift adjusts the color temperature of the screen according to the time of day.
  • maim is a utility to take screenshots. I use Prt Scn to trigger a screenshot of a window or a specific area and Mod+Prt Scn to capture the whole desktop to a file. Check the helper script for details.
  • I have a collection of wallpapers I rotate every hour. A script selects them using advanced machine learning algorithms and stitches them together on multi-screen setups. The selected wallpaper is reused by i3lock.

  1. Apart from the eye candy, a compositor also helps to get tear-free video playbacks.
  2. My configuration works with both Haswell (2014) and Whiskey Lake (2018) Intel GPUs. It also works with AMD GPU based on the Polaris chipset (2017).
  3. You cannot manage two different displays this way e.g. :0 and :1. In the first implementation, I did try to parametrize each service with the associated display, but this is useless: there is only one DBus user session and many services rely on it. For example, you cannot run two notification daemons.
  4. I have only discovered later that XSecureLock ships such a dimmer with a similar implementation. But mine has a cool countdown!

25 August 2021

Bits from Debian: DebConf21 welcomes its sponsors!

DebConf21 is taking place online, from 24 August to 28 August 2021. It is the 22nd Debian conference, and organizers and participants are working hard together at creating interesting and fruitful events. We would like to warmly welcome the 19 sponsors of DebConf21, and introduce them to you. We have five Platinum sponsors. Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. Our next Platinum sponsor is Infomaniak. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware). Roche is our third Platinum sponsor. Roche is a major international pharmaceutical provider and research company dedicated to personalized healthcare. More than 100,000 employees worldwide work towards solving some of the greatest challenges for humanity using science and technology. Roche is strongly involved in publicly funded collaborative research projects with other industrial and academic partners and has supported DebConf since 2017. Amazon Web Services (AWS) is our fourth Platinum sponsor. Amazon Web Services is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies. Google is our fifth Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. Our Gold sponsor is the Matanel Foundation. The Matanel Foundation operates in Israel, as its first concern is to preserve the cohesion of a society and a nation plagued by divisions. The Matanel Foundation also works in Europe, in Africa and in South America. Our Silver sponsors are: Arm (the World's Best SoC Design Portfolio; Arm powered solutions have been supporting innovation for more than 30 years and are deployed in over 160 billion chips to date), Hudson-Trading (a company researching and developing automated trading algorithms using advanced mathematical techniques), Ubuntu (the Operating System delivered by Canonical), Globo (the largest media conglomerate in Brazil, founded in Rio de Janeiro in 1925 and distributing high-quality content across multiple platforms), Two Sigma (rigorous inquiry, data analysis, and invention to help solve the toughest challenges across financial services) and GitLab (an open source end-to-end software development platform with built-in version control, issue tracking, code review, CI/CD, and more). Bronze sponsors: Univention, Gandi.net, daskeyboard, InterFace AG and credativ. And finally, our Supporter level sponsor, ISG.EE.
Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf21. Participating in DebConf21 online The 22nd Debian Conference is being held online, due to COVID-19, from August 24 to 28, 2021. Talks, discussions, panels and other activities run from 12:00 to 02:00 UTC. Visit the DebConf21 website at https://debconf21.debconf.org to learn about the complete schedule, watch the live streaming and join the different communication channels for participating in the conference.
