Alternatively, if you prefer the graphical interface, click on the "Add hardware" button in the VM properties, choose the TPM, set it to Emulated, model TIS, and set its version to 2.0.
<devices>
  <tpm model='tpm-tis'>
    <backend type='emulator' version='2.0'/>
  </tpm>
</devices>
i440fx chipset. This one is limited to PCI and IDE, unlike the more modern q35 chipset (which supports PCIe and SATA, and does not support IDE, nor SATA in IDE mode). There is a UEFI/Secure Boot-capable BIOS for qemu, but it apparently requires the q35 chipset. Fun fact (which I found out the hard way): Windows stores, somewhere, where its boot partition is. If you change the hard drive controller from an IDE one to a SATA one, you will get a BSOD at startup. In order to fix that, you need a recovery drive. To create the virtual USB disk, go to the VM properties, click "Add hardware", choose "Storage", choose the USB bus, and then under "Advanced options", select the "Removable" option, so it shows up as a USB stick in the VM. Note: this takes a while (about an hour on my system), and your virtual USB drive needs to be 16G or larger (I used the libvirt default of 20G). There is no possibility, using the buttons in the
virt-manager GUI, to convert the machine from i440fx to q35. However, that doesn't mean it's not possible to do so. I found that the easiest way is to use the direct XML editing capabilities in the virt-manager interface; if you edit the XML in an editor it will produce error messages if something doesn't look right and tell you to go and fix it, whereas the virt-manager GUI will actually fix things itself in some cases (and will produce helpful error messages if not). What I did was:
- Change the machine attribute of the domain.os.type element, so that it names a q35 machine type instead of an i440fx one.
- Find the domain.devices.controller element that has pci-root as its model attribute, and set the model to pcie-root.
- Update the domain.devices.disk.target elements, setting their bus attribute to sata.
- Find the controller element with type="usb", and set its model to qemu-xhci. You may also want to add ports="15" if you didn't have that yet.
- Add a few PCIe root ports:
<controller type="pci" index="1" model="pcie-root-port"/>
<controller type="pci" index="2" model="pcie-root-port"/>
<controller type="pci" index="3" model="pcie-root-port"/>
If virt-manager gives you an error when you hit the Apply button, compare notes against the VM that you're in the process of creating, and copy/paste things from there to the old VM to make the errors go away. As long as you don't remove configuration that is critical for things to start, this shouldn't break matters permanently (but hey, use your backups if you do break something -- you have backups, right?). OK, cool, so now we have a Windows VM that is... unable to boot. Remember what I said about Windows storing where the controller is? Yeah, there you go. Boot from the virtual USB disk that you created above, and select the "Fix the boot" option in the menu. That will fix it. Ha ha, only kidding. Of course it doesn't. I honestly can't tell you everything that I fiddled with, but I think the bit that eventually fixed it was where I chose "safe mode", which caused the system to do a hiccup, a regular reboot, and then suddenly everything was working again. Meh. Don't throw the virtual USB disk away yet, you'll still need it. Anyway, once you have it booting again, you will have a machine that theoretically supports Secure Boot, but you're still running off an MBR partition. I found a procedure on how to convert things from MBR to GPT that was written almost 10 years ago, but surprisingly it still works, except for the bit where the procedure suggests you use diskmgmt.msc (for one thing, that was renamed; and for another, it can't touch the partition table of the system disk either). The last step in that procedure says to restart your computer, which is fine, except at this point you obviously need to switch over to the TianoCore firmware; otherwise you're trying to read a UEFI boot configuration on a system that only supports MBR booting, which obviously won't work. In order to do that, you need to add a
loader element to the domain.os element of your libvirt configuration:
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>

When you do this, you'll note that virt-manager automatically adds an nvram element. That's fine, let it. I figured this out by looking at the documentation for enabling Secure Boot in a VM on the Debian wiki, and by using the same trick for switching chipsets that I explained above. Okay, yay, so now Secure Boot is enabled, and we can install Windows 11! All good? Well, almost. I found that once I enabled Secure Boot, my display reverted to a 1024x768 screen. This turned out to be because I was using older unsigned drivers, and since we're using Secure Boot, that's no longer allowed, which means Windows reverts to the default VGA driver, and that only supports the 1024x768 resolution. Yeah, I know. The solution is to download the virtio-win ISO from one of the links in the virtio-win GitHub project, connect it to the VM, go to Device Manager, select the display controller, click the "Update driver" button, tell the system that you have the driver on your computer, browse to the CD-ROM drive, enable the "Include subdirectories" option, and then let Windows do its thing. While there, it might be good to do the same for any unrecognized devices in Device Manager. So, all I have to do next is get used to the completely different user interface of Windows 11. Sigh. Oh, and rename the "w10" VM to "w11", or some such. Maybe.
In truth, the histories of Arpanet and BBS networks were interwoven socially and materially as ideas, technologies, and people flowed between them. The history of the internet could be a thrilling tale inclusive of many thousands of networks, big and small, urban and rural, commercial and voluntary. Instead, it is repeatedly reduced to the story of the singular Arpanet.

Kevin Driscoll goes on to highlight the social aspects of the modem world, how BBSs and online services like AOL and CompuServe were ways for people to connect. And yet, AOL members couldn't easily converse with CompuServe members, and vice-versa. Sound familiar?
Today's social media ecosystem functions more like the modem world of the late 1980s and early 1990s than like the open social web of the early 21st century. It is an archipelago of proprietary platforms, imperfectly connected at their borders. Any gateways that do exist are subject to change at a moment's notice. Worse, users have little recourse, the platforms shirk accountability, and states are hesitant to intervene.

Yes, it does. As he adds, "People aren't the problem. The problem is the platforms." A thought-provoking article, and I think I'll need to buy the book it's excerpted from!
apt on Debian or Ubuntu can install, and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips, sorry Raspberry Pi or M1 laptop). It covers all of CRAN (or nearly 19k packages), all the BioConductor packages depended upon (currently over 200), and excludes only a less-than-a-handful of CRAN packages that cannot be built.
The r2u setup can be used directly with apt (or dpkg, or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works, but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R's own package installation to apt. So we can just say (as the demos above show) install.packages("brms") and binaries are installed via apt, which is fantastic: it connects R to the system package manager. The setup is really only two lines and is described at the r2u site as part of the setup instructions.
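The naming rule is mechanical enough to sketch in a few lines of Python (the function name here is mine for illustration, not part of r2u):

```python
def r2u_binary_name(pkg: str, repo: str = "cran") -> str:
    """Map an R package to its Debian binary package name:
    an 'r' prefix, the repo, and the package name, all lower-cased."""
    return f"r-{repo.lower()}-{pkg.lower()}"

print(r2u_binary_name("brms"))               # r-cran-brms
print(r2u_binary_name("S4Vectors", "bioc"))  # r-bioc-s4vectors
```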
else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the "branch"). All these principles apply to static calls as well, but there they are used for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.

network RNG improvements
CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn't complete yet, though, since handling setgroups() is still needed.)

improve kernel's internal checking of file contents
set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. Subsequently, he removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of kernel address limit attacks that only needed to corrupt a single value in struct thread_info.

sysfs_emit() replaces sprintf() in /sys
/sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the buffer that /sys handlers operate on. With the helper in place, it was possible to start refactoring the many sprintf() callers.

nosymfollow mount option
nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where nosuid disallows setid bits and nodev disallows device files. Quoting the patch, it is "useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts" (i.e. for when /proc/sys/fs/protected_symlinks isn't a big enough hammer). Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector.

ARMv8.5 Memory Tagging Extension support
-Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage. That's it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.
2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
Raspberry Pi Zero 2 W, which you can easily see in the following photo, informs the kernel it has 512MB RAM. (Well, really, it's an easy device tree to read, don't be shy!) So, I booted my RPi 3B+ with a freshly downloaded Bookworm image, installed and unpacked linux-source-5.15, applied Stephan's patch, and added the following for the DTB to be generated in the arm64 tree as well:
--- /dev/null	2022-01-26 23:35:40.747999998 +0000
+++ arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dts	2022-02-13 06:28:29.968429953 +0000
@@ -0,0 +1 @@
+#include "arm/bcm2837-rpi-zero-2-w.dts"
make dtbs, and... it failed, because bcm283x-rpi-wifi-bt.dtsi is not yet in the kernel. OK, no worries: getting wireless to work is a later step. I commented out the lines causing the conflict (10, 33-35, 134-136), and:
root@rpi3-20220212:/usr/src/linux-source-5.15# make dtbs
  DTC     arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dtb
#debian-raspberrypi, steev suggested I pull in the WiFi patch, which has also been submitted (but not yet accepted) for kernel inclusion. I did so, uncommented the lines I had modified, and built again. It built correctly, and I again copied the DTB over. It still does not find the WiFi; dmesg still complains about a missing bit of firmware (failed to load brcm/brcmfmac43430b0-sdio.raspberrypi,model-zero-2-w.bin). Steev pointed out it can be downloaded from RPi Distro's GitHub page, but I called it a night and didn't pursue it any further ;-) So I understand this post is still a far cry from saying "our images properly boot under a RPi 0 2 W", but we will get there
as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.
JH_2538_2592_ZNP_UART_20211222.hex) - while it's possible to do USB directly with the CC2538, my board doesn't have those bits, so going the external USB UART route is easier. The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That meant soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_mode normal
buspirate_vreg 1
buspirate_pullup 0
transport select jtag
source [find target/cc2538.cfg]
$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> mww 0x400D300C 0x7F800
> mww 0x400D3008 0x0205
> shutdown
shutdown command invoked
Connection closed by foreign host.
$ git clone https://github.com/JelmerT/cc2538-bsl.git
$ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
Opening port /dev/ttyUSB1, baud 500000
Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
Firmware file: Intel Hex
Connecting to target...
CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
Primary IEEE Address: 00:12:4B:00:22:22:22:22
Performing mass erase
Erasing 524288 bytes starting at address 0x00200000
Erase done
Writing 524256 bytes starting at address 0x00200000
Write 232 bytes at 0x0027FEF88
Write done
Verifying by comparing CRC32 calculations.
Verified (match: 0x74f2b0a1)
python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
cc2538-network.json and modified the coordinator_ieee to be the new device's MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed.

After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However, that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC, and then finally firing up sqlite to modify the Zigbee database:
$ sqlite3 /etc/homeassistant/zigbee.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> .quit
/dev/ttyACM0) through to the Home Assistant container. First I needed an override file:
[Files]
Bind=/dev/ttyACM0:/dev/zigbee

[Network]
VirtualEthernet=true
udev rule on the host to change the ownership of the device file (so the root user and dialout group in the container could see it) was also necessary, so in went:
# Zigbee for HASS
SUBSYSTEM=="tty", ATTRS{idVendor}=="0451", ATTRS{idProduct}=="16a8", SYMLINK+="zigbee", \
	MODE="660", OWNER="1321926676", GROUP="1321926676"
home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked down things). Finally I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5 due to upstream dropping support for Python 3.7 and not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn't have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out. There was another slight problem. Range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router so provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept overhead lights as uncontrolled - we don't use them day to day and it provides a nice fallback if the home automation has issues. Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs. Now I've got a Zigbee setup there are a few more things I'm thinking of adding, where wifi isn't an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn't suitable for my needs, as I'll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable
Do not add further complexity when it can be avoided. We are generally happy with the feature set of i3 and instead focus on fixing bugs and maintaining it for stability. New features will therefore only be considered if the benefit outweighs the additional complexity, and we encourage users to implement features using the IPC whenever possible. (Introduction to the i3 window manager)

While this is not as powerful as an embedded language, it is enough for many cases. Moreover, as high-level features may be opinionated, delegating them to small, loosely coupled pieces of code keeps them more maintainable. Libraries exist for this purpose in several languages. Users have published many scripts to extend i3: automatic layout and window promotion to mimic the behavior of other tiling window managers, window swallowing to put a new app on top of the terminal launching it, and cycling between windows with Alt+Tab. Instead of maintaining a script for each feature, I have centralized everything into a single Python process,
i3-companion, using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:
workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+`. This is implemented as a scratchpad window.
workspace back_and_forth command, we can ask i3 to switch to the previous workspace. However, this feature is not restricted to the current output. I prefer to have one keybinding to switch to the workspace on the next output and one keybinding to switch to the previous workspace on the same output. This behavior is implemented in the previous_workspace() function by keeping a per-output history of the focused workspaces.
workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
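Finding that free number is a small exercise; a sketch of what such a helper could do (the name mirrors the description above, the implementation is mine):

```python
def free_workspace_number(existing):
    """Return the smallest positive workspace number not currently in use."""
    used = set(existing)
    n = 1
    while n in used:
        n += 1
    return n

# e.g. with workspaces 1, 2 and 4 in use, the next new workspace is 3
```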
output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks if this is a nop command:
bindsym $mod+Tab nop "previous-workspace"
@debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think it is worth a read, as I am quite happy with the result.
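For illustration, here is what a @debounce() decorator in that spirit can look like; a hedged sketch built on asyncio, not the actual i3-companion implementation:

```python
import asyncio
import functools

def debounce(delay):
    """Coalesce calls arriving within `delay` seconds into a single one:
    each new call cancels the pending execution and reschedules it."""
    def decorator(fn):
        task = None

        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal task
            if task is not None and not task.done():
                task.cancel()  # superseded by the newer call

            async def delayed():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)

            task = asyncio.create_task(delayed())
        return wrapper
    return decorator
```

With this, a burst of RandR events hitting a decorated handler results in a single execution once the burst quiets down.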
notify(), to send notifications using DBus.
workspace_info() uses it to display information about the container or the tree for a workspace.

workspace_rename() function. The icons are from the Font Awesome project. I maintain a mapping between applications and icons. This is a bit cumbersome, but it looks great. For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar's wrapper script generates the list of filesystems to monitor, and they only get displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details. For Bluetooth, network, and notification statuses, I am using Polybar's ipc module: the next version of Polybar can receive arbitrary text on an IPC socket. The module is defined with a single hook to be executed at the start to restore the latest status.
[module/network]
type = custom/ipc
hook-0 = cat $XDG_RUNTIME_DIR/i3/network.txt 2> /dev/null
initial = 1
polybar-msg action "#network.send.XXXX". In the i3 companion, the @polybar() decorator takes the string returned by a function and pushes the update through the IPC socket. The i3 companion reacts to DBus signals to update the Bluetooth and network icons. The @on() decorator accepts a DBusSignal object:
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa{sv}as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1"
            and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1"
            and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
~/.xsession-errors file.3 I am using a two-stage setup: xsession.target to start services before i3:
[Unit]
Description=X session
BindsTo=graphical-session.target
Wants=autorandr.service
Wants=dunst.socket
Wants=inputplug.service
Wants=picom.service
Wants=pulseaudio.socket
Wants=policykit-agent.service
Wants=redshift.service
Wants=spotify-clean.timer
Wants=ssh-agent.service
Wants=xiccd.service
Wants=xsettingsd.service
Wants=xss-lock.service
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
xset s command. The locker can be invoked immediately with xset s activate. X11 applications know how to prevent the screen saver from running. I have also developed a small dimmer application that is executed 20 seconds before the locker to give me a chance to move the mouse if I am not away.4 Have a look at my configuration script.
:1. In the first implementation, I tried to parametrize each service with the associated display, but this is useless: there is only one DBus user session, and many services rely on it. For example, you cannot run two notification daemons.