Search Results: "bam"

24 October 2023

Iustin Pop: OS updates are damn easy nowadays!

I'm baffled at how simple and reliable operating system updates have become. Upgraded Debian bullseye to bookworm, across a few systems, easy. On VMs, it's even so fast that installing the base system from scratch probably takes the same time. But Linux/Debian OFC works well. Shall we look at MacOS? Takes longer, but it just runs and reboots a couple of times and then, bam, it's up and with windows restored. Surely Windows is the outlier? Nah, finally said yes to the "Upgrade to Win 11?" prompt, and it took a while to download (why are Win/Mac so heavy and slow to download? Debian just flies!), then rebooted a few times, and again, bam, it's up and GoG and Steam still work. I swear, there was a time when updating the OS felt like an accomplishment. Now, except for Raspberry Pi OS ("upgrades not supported, reinstall!" but I bet they also work), upgrading an actual OS is just like a new Android/iOS version. And yes, get off my lawn! I still have a lower-digit-count Slashdot ID.
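For the record, the Debian part of that story is only a handful of commands; a minimal sketch of a bullseye-to-bookworm upgrade (assuming a standard system with its sources in /etc/apt/sources.list, run as root):

# point apt at the new release
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
# refresh the package lists and run the full upgrade
apt update
apt full-upgrade
# reboot into the new release
systemctl reboot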

14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi Kochi is a city in the state of Kerala, India. This year's DebConf was to be held in Kochi from 3rd September to 17th September, and I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 would be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights to Kochi from Delhi and reached Kochi on 28th August. We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don't have a Google account and I don't want to create one either. The people at the front desk seemed receptive, so I went ahead and told them about the problems of such a sign-in being mandatory for check-in. Anyway, they only took a photo of my passport and let me check in without a Google account. We stayed in a ten-room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious and clean, and the beds were comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusively for females. I noticed that this Zostel was not on OpenStreetMap, so I added it :) . The hostel had a small canteen for tea and snacks, and a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home, but all of them were closed due to the Onam festival. After returning to the Zostel, I overslept till 9 PM, and in the meanwhile, Suresh planned with Saidut and Shwetank (whom we met during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I would be disappointed by the lack of vegetarian options, as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn't get one for more than 5 minutes. Since that restaurant was not on OpenStreetMap, I didn't even know how far it was or which direction to go in. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he could drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratitude for the help. He refused to take the tip, but I insisted and he accepted. I entered the restaurant as it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (the only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations The next day, on 29th of August 2023, we planned to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested visiting Munnar, as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar on my previous trips, so we chose Wayanad. We had a late night train from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station to Wayanad. So, we checked out in the morning as we had plans to roam around Kochi before taking the train. Zostel was celebrating Onam on that day. To opt in, we had to pay 400 rupees, which included a Sadya Thali and a mundu. Suresh and I paid the amount and opted in for the celebrations. The Sadya thali had rice, Sambhar, Rasam, Avial, banana chips, Pineapple Pachadi, Pappadam, many types of pickles and chutneys, Pal Ada Payasam and coconut jaggery Payasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a wide variety of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I have had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms "Open Source" and "Free Software". I also told them about the StreetComplete app, a beginner-friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel, and Suresh and I left for Ernakulam Junction to catch our train, Maveli Express (16604).

Day 3: Going to Wayanad Maveli Express was scheduled to reach Kozhikode at 03:25 (in the morning). I had set alarms from 03:00 to 03:30, with a gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw the train at the Kozhikode station and woke up Suresh so we could deboard. But then I noticed that the train was actually leaving the station, not arriving! That meant we had missed our stop. We then looked at the next stops and whether we could deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi, and we checked online that it didn't have a retiring room, so we skipped this stop. We got off at the next stop, named Vadakara, and found that no retiring room was available there either. So, we asked for information about buses to Wayanad and were told that there was a bus to Wayanad around 07:00 hours from the bus station, which was a few kilometres from the railway station. We took a bus for Kalpetta (in Wayanad) at around 07:00. The destinations of the buses were written in Malayalam, which we could not read. Once again, the locals helped us get on the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike Kochi. Despite the language issues, I had no problem with navigation there, thanks to the locals. I mostly spent the bus journey sleeping. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village, 16 km from Kalpetta. On the way, we were treated to beautiful views of nature, which was present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated to beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. The place is blessed by nature and has stunning scenery. I found that English was more common than Hindi in Wayanad. The locals were very nice and helped me, even if they didn't know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I had already bought in Kochi. Then, I went looking for a post office to send the spices home. The people at the spice shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi was open, which was 5 km from there. I went to Meppadi and saw that the post office closes at 15:00; I had reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office and the people there gave me cardboard and tape, and helped pack my stuff for the post. By the time I went to the post office again, it was 15:30, but they still accepted my parcel for posting.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked to the place from the hostel on a scooty. The entry ticket cost Rs 40. There were good views inside, but nothing much to see except the waterfalls.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day, so we shifted there. Again, as with their Ernakulam branch, they asked me to fill a form which required signing in using Google, but when I said I didn't have a Google account they checked me in without it. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha Dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near the Zostel Wayanad, so I was planning to trek to the heart-shaped lake there. It was suggested by Praveen, and looking online, this trek seemed worth doing. There was an issue, however: the charge for the trek was Rs 1770 for up to five people. So, if I went alone, I would have to spend Rs 1770 on the trek; if I went with another person, we would split the Rs 1770 between the two of us, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra Peak the next day, and they told me about a group of four people planning to go there. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra Peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four with whom I planned to trek met me around 06:00 in the Zostel. We went to the ticket counter around 06:30. We had breakfast at shops selling Maggi noodles and bread omelette near the ticket counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found a shop selling bamboo rice; I bought some and will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to the Zostel in the afternoon. I had muscle pain after the trek and it has still not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi At midnight on the 2nd of September, we reached Kozhikode bus stand. Then we roamed around for something to eat. I didn't find anything vegetarian to eat. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station and took the next train to Kochi at 03:30, reaching Ernakulam Junction at 07:30 (half an hour before the train's scheduled time!). From there, we went to Zostel Fort Kochi, stayed one night there, and checked out the next morning.

Day 7: Roaming around in Fort Kochi On 3rd of September, we roamed around in Fort Kochi. We visited the usual places - St Francis Church, Dutch Palace, Jew Town, Pardesi Synagogue. I also visited some homestays and the owners were very happy to show their place even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

13 March 2023

Antoine Beaupré: Framework 12th gen laptop review

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, earning an "exceedingly rare" 10/10 score from ifixit.com. There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets. I received my Framework, 12th generation "DIY" device, in late September 2022 and will update this page as I go along in the process of ordering, burning-in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to survive me many years as well. This is also one of the most powerful devices I have ever laid my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and some hacking around a bunch of things to get resolution and keybindings working right. Like others, I initially found significant power management issues, but many of them can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery while those are plugged in. Finally, the expansion ports are nice, but there are only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock. Read on for the detailed review.

For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you have bought the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers Those are things I wish I would have known before buying:
  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards and you do want to fill those expansion slots otherwise they snag around and feel insecure
  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports
  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings
  4. in particular, beware that the HDMI, DisplayPort and especially the SSD and MicroSD cards take a significant amount of power, even when sleeping, up to 2-6W for the latter two
  5. beware that the MicroSD card is what it says: Micro, normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage
  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in. I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure. Update: Framework just announced (2023-03-23) a whole bunch of new stuff: The recording is available in this video and it's not your typical keynote. It starts ~25 minutes late, audio is crap, lighting and camera are crap, clapping seems to be from whatever staff they managed to get together in a room, decor is bizarre, colors are shit. It's amazing.

Specifications Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.
  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but doesn't give you much, typical builds around 1500-2000$CAD

Actual build This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration
  • CPU: Intel Core i5-1240P (AKA Alder Lake P 8 4.4GHz P-threads, 8 3.2GHz E-threads, 16 total, 28-64W), 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$

Customization
  • Keyboard: US English, included

Expansion Cards
  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage 1TB $199
  • Sub-total: 384$

Accessories
  • Power Adapter - US/Canada $64.00

Total
  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping

Quick evaluation This is basically the TL;DR here, just focusing on the broad pros/cons of the laptop.

Pros

Cons
  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)
  • the 12th gen has compatibility issues with Debian, follow-up in the DebianOn page, but basically: brightness hotkeys, power management, wifi; the webcam is okay even though the chipset is the infamous Alder Lake, because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working
  • 12th gen might have issues with thunderbolt docks
  • they used to have some difficulty keeping up with the orders: first two batches shipped, third batch sold out, fourth batch should have shipped (?) in October 2021. They generally seem to keep up with shipping. Update (August 2022): they rolled out a second line of laptops (12th gen); first batch shipped, second batch shipped late, September 2022 batch was generally on time, see this spreadsheet for a crowdsourced effort to track those. Supply chain issues seem to be under control as of early 2023. I got the Ethernet expansion card shipped within a week.
  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the Purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm), but the design of the Purism laptop (tapered edges) makes it feel thinner
  • no space for a 2.5" drive
  • rather bright LED around the power button; it can be dimmed in the BIOS (not low enough for my taste), but I got used to it
  • fan quiet when idle, but can be noisy when running, for example if you max a CPU for a while
  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)
  • no RJ-45 port, and attempts at designing one were failing because the modular plugs are too thin to fit (according to Linux After Dark), so it seemed unlikely we'd see one in the future. Update: they cracked that nut and ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob (!)
  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)
  • 12th gen Intel has glitchy graphics, seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup A breeze.

Accessing the board The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them. The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off (!) to get access to the internals. Kind of scary. I also actually unplugged a connector while lifting the assembly, because I lifted it towards the monitor, while you actually need to lift it to the right. Thankfully, the connector didn't break, it just snapped off and I could plug it back in, no harm done. Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and CPU slots) did not actually work, so I wonder how useful those actually are. After all, they need to point to something, and that means a URL, a running website that will answer those requests forever. I bet those will break sooner rather than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT Reform here, which designed (with the 100 rabbits folks) an actual paper handbook (PDF). The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404. I was expecting that bar code to lead me to an introduction page, something like "how to set up your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:
https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57
... which Firefox immediately transforms into:
https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57
I'm puzzled as to why they would send the URL that way, the proper URL is of course:
https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57
(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual. Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed. Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented. At first, the keyboard felt more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a composition manager, everything feels snappy. The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works. Taps don't work out of the box: that needs to be enabled in Xorg, with something like this:
cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF
But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button The power button is a little dangerous. It's quite easy to hit, as it's right next to one expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button. So obviously, don't do that. But that's not very helpful. An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:
[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff
You might have to create the directory first:
mkdir /etc/systemd/logind.conf.d/
Then restart logind:
systemctl restart systemd-logind
And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends... Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn key that I actually find quite useful.
Key  Equivalent  Effect                  Command
p    Pause       lock screen             xset s activate
b    Break       ?                       ?
k    ScrLk       switch keyboard layout  N/A
It looks like those are defined in the microcontroller, so it would be possible to add some. For example, the SysRq key is almost bound to fn s in there. Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12, which only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player". The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark. A skinny elephant would be performed with alt PrtScr (above F11) KEY, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold the fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all. This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but so far no progress since July 2022. In the meantime, the fancy functionality can supposedly be disabled with:
echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf
... and a reboot. This solution is also documented in the upstream guide. Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.
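If you want the XF86AudioMedia key mentioned above to do something useful, it can simply be bound in your window manager; a minimal sketch for i3 (playerctl is just an example target and is assumed to be installed):

# ~/.config/i3/config: give the Framework (F12) key a job
bindsym XF86AudioMedia exec --no-startup-id playerctl play-pause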

Kill switches The Framework has two "kill switches": one for the camera and the other for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted, it just stops feeding the sound. Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright. I have yet to use it in full sunlight. The dimmed light is very dim, which I like.

Screen backlight I bind brightness keys to xbacklight in i3, but out of the box I get this error:
sep 29 22:09:14 angela i3[5661]: No outputs have backlight property
It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:
Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection
This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
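For completeness, the i3 side of this looks something like the following; a minimal sketch, assuming xbacklight is installed and the brightness keys actually emit keysyms (see the keybindings section above):

# ~/.config/i3/config: brightness keys via xbacklight
bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10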

Multiple monitor support I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great. I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:
  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card
Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test). But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports. But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs. My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:
autorandr -l framework-external-dual-lg-acer
The magic:
autorandr -l horizontal
... also works well. The worst problem with those monitors right now is that they have a radically smaller resolution than the main screen on the laptop, which means I need to reset the font scaling to normal every time I switch back and forth between those monitors and the laptop. So I actually need to do this:
autorandr -l horizontal &&
echo Xft.dpi: 96 | xrdb -merge &&
systemctl restart terminal xcolortaillog background-image emacs &&
i3-msg restart
Kind of disruptive.
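That dance could probably be automated: autorandr runs user hooks when a profile is applied, so a postswitch script could do the DPI reset and restarts. A rough, untested sketch reusing the commands above (the DPI values are just the ones from this post):

#!/bin/sh
# ~/.config/autorandr/postswitch -- must be executable
# reset font scaling and restart the affected services after a profile switch
case "$AUTORANDR_CURRENT_PROFILE" in
    horizontal) dpi=96 ;;
    *)          dpi=144 ;;
esac
echo "Xft.dpi: $dpi" | xrdb -merge
systemctl restart terminal xcolortaillog background-image emacs
i3-msg restart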

Expansion ports I ordered a total of 10 expansion ports. I did manage to initialize the 1TB drive as encrypted storage, mostly to keep photos, as this is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around).

The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually.

There are only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this. I can't wait to see more expansion ports produced. There's an Ethernet expansion card which quickly went out of stock basically the day it was announced, but was eventually restocked.

I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found an SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing.

Still, I really like the idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one, which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests, for details.
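Setting up the 1TB expansion card as encrypted storage is standard LUKS fare; a minimal sketch, assuming the card shows up as /dev/sda (check with lsblk first, and this is not necessarily how the author did it):

# WARNING: this wipes the expansion card, double-check the device name
cryptsetup luksFormat /dev/sda
cryptsetup open /dev/sda photos
mkfs.ext4 /dev/mapper/photos
mount /dev/mapper/photos /mnt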

USB-C charging One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it into my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least. (As I wrote this, I had the laptop plugged into the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.) I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!) This is awesome. And it works from any of the expansion ports, of course. There's a little LED next to the expansion ports as well, which indicates the charge status:
  • red/amber: charging
  • white: charged
  • off: unplugged
I couldn't find documentation about this, but the forum answered. This is something of a recurring theme with the Framework: while it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner manual" that shows you the different parts of the laptop and what they do. Again, something the MNT Reform did well. Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed. Finally, I wondered what happens when you plug in multiple power sources, and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources at once, but that seems like a bit much to ask.

Multimedia and other devices Those things also work:
  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below
There's also a light sensor, but it conflicts with the keyboard brightness controls (see above). There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere. It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often get lots of noise from the board. With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking. On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework. Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises. In other words, the hiss of the sound card built into the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop. See also my audio page.
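To reproduce this kind of noise-floor comparison without Audacity, something along these lines gives a rough number; a sketch assuming ALSA utilities and sox are installed (device names will differ, and this is not the author's method):

# record 5 seconds of "silence" from the selected input, then print
# signal statistics; the RMS amplitude hints at the noise floor
arecord -d 5 -f cd -D default silence.wav
sox silence.wav -n stat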

Performance tests

Compiling Linux 5.19.11 On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:
5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k
10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps
This was using 16 watts of power, with full screen brightness. With all 16 cores (make -j16), it takes less than 25 minutes:
19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k
8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps
I had to plug in the normal power supply after a few minutes because the battery would actually run out using my desk's power grommet (34 watts). During compilation, fans were spinning really hard, quite noisy, but not painfully so. The laptop was sucking 55 watts of power, steadily:
  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5 50.0 47.5 49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
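The timing output above is plain GNU time output; to reproduce a similar measurement, something like the following should do, as a sketch (the exact make target and powerstat options the author used are not specified, and the kernel tree is assumed to be configured already):

# single-core build, timed with GNU time
/usr/bin/time make vmlinux
# all-cores build
/usr/bin/time make -j16 vmlinux
# in another terminal, sample power over RAPL once per second, 60 samples
powerstat -R 1 60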

memtest86+ I ran Memtest86+ v6.00b3. It shows something like this:
Memtest86+ v6.00b3        12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78 C   Pass  2% #
L1 Cache:   48KB    414 GB/s   Test 46% ##################
L2 Cache: 1.25MB    118 GB/s   Test #3 [Moving inversions, 1s & 0s] 
L3 Cache:   12MB     43 GB/s   Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  :  15.7GB  14.9 GB/s   Pattern: 
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR))    Time:  0:27:23  Status: Pass     \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51    Pass:  1        Errors: 0
--------------------------------------------------------------------------------
Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)
                          Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64
So about 30 minutes for a full 16GB memory test.

Software setup Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work. I don't understand why I am so naïve sometimes. It's mind-boggling. Obviously, it didn't happen that way at all, and I spent the best part of the three following days tinkering with the laptop.

Secure boot and EFI First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS. At first, I thought it was a problem with my NVMe drive, because it's not listed in the compatible SSD drives from upstream. But I figured out how to enter BIOS (press F2 manically, of course), which showed the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI. So from there, I disabled secure boot, and booted a grml image to try to recover. And by "boot" I mean, I managed to get to the grml boot loader which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:
Unable to find medium containing a live file system
This, it turns out, was fixed in Debian lately, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix. I did manage to boot the development version of the Debian installer, which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition. At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted into the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion. I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive. Oops. It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how to recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:
/etc/ssh
/var/lib/puppet
So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install and then I had a good base setup, although not exactly as it was before. 1700 packages were installed manually on angela before the reinstall, and not in Puppet. I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.
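In practice that bootstrap amounts to restoring those two directories from backup and letting the agent converge; a rough sketch, with hostnames and paths being purely illustrative:

# restore the machine identity from backups (host and paths are examples)
rsync -a backup-server:/backups/angela/etc/ssh/ /etc/ssh/
rsync -a backup-server:/backups/angela/var/lib/puppet/ /var/lib/puppet/
# then let Puppet reinstall and reconfigure what it manages
puppet agent --test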

LVFS support It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:
apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:
fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf 
fwupdmgr update
Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks The Framework laptop resolution (2256px X 1504px) is big enough to give you a pretty small font size, so welcome to the marvelous world of "scaling". The Debian wiki page has a few tricks for this.

Console This will make the console and grub fonts more readable:
cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg Adding this to your .Xresources will make everything look much bigger:
! 1.5*96
Xft.dpi: 144
Apparently, some of this can also help:
! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter:  lcddefault
Xft.hintstyle:  hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb
In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me. The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness". TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive. The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update. Note that the following keybindings are active in the BIOS POST check:
Key     Meaning
F2      Enter BIOS setup menu
F12     Enter BIOS boot manager
Delete  Enter BIOS setup menu

WiFi compatibility issues I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon), so the WiFi card didn't work out of the box. But even after copying the firmware over through a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.) Thankfully, I had a little USB-C dongle with an RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network. Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the commandline" than anything else. NetworkManager worked fine and got WiFi working correctly. Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.
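For the record, the "WPA network by hand" dance goes roughly like this; a minimal sketch, assuming the interface is called wlp170s0 (check with ip link) and the firmware is already loaded:

# generate a wpa_supplicant configuration for the network
wpa_passphrase "MySSID" "MyPassphrase" > /etc/wpa_supplicant.conf
# bring the interface up, associate, then get an address
ip link set wlp170s0 up
wpa_supplicant -B -i wlp170s0 -c /etc/wpa_supplicant.conf
dhclient wlp170s0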

Battery life I was getting about 7 hours of battery on the Purism Librem 13v4, and that's after a year or two of wear on the battery. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!), but really, it's not that good for a new-generation laptop. The 12th generation Intel chipset probably improved things compared to the previous Framework laptop (but I don't have an 11th gen Framework to compare with). (Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.) The battery life doesn't seem up to, say, a Dell XPS 13 or ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box. That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of battery life estimated (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected. Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:
Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6 39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6 27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3 82.1 44.0 80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2 82.0 17.0 82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71
Expansion cards matter a lot for battery life (see below for a thorough discussion); my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power). Interestingly, playing a (720p) video in a window takes up more power (10.5W) than in full screen (9.5W), but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the window struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me. Fooling around the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around in Discourse, so I suspect the website is a pretty big battery drain, actually. I see about 10-12 W, while I was probably at half that (6-8W) just playing music with mpv in the background... In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross. (When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would take 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare those, because voltage is not necessarily the same between the two platforms.
  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1
TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it. Try:
  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
Update: also try to follow the official optimization guide. It was made for Ubuntu but will probably also work for your distribution of choice with a few tweaks. They recommend using tlpui but it's not packaged in Debian. There is, however, a Flatpak release. In my case, it resulted in the following diff to tlp.conf: tlp.patch.
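The kernel command line item from the list above is set the usual Debian way, through GRUB; a minimal sketch (append to your existing GRUB_CMDLINE_LINUX_DEFAULT rather than replacing it if you carry other options):

# /etc/default/grub: add the suggested standby tweaks
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme.noacpi=1 mem_sleep_default=deep"
# then regenerate the GRUB configuration and reboot
update-grub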

Background on CPU architecture There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new. The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16, but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at some performance cost. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers conference, but they stopped short of talking more about the specific problems Linux is facing in Alder Lake:
Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.
All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores. And Linux has had the scheduler changes to make use of this (but maybe is still having trouble). In any case, this might not be the source of the power management problems on my laptop, quite the opposite. Also note that the firmware updates for various chipsets are supposed to improve things eventually. On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat). Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6. But the next column over does show many CPUs in C10 states...) As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop. Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. When removing all expansion cards and making the laptop idle, I can spin it down to 4 watts of power usage at the moment, and an amazing 2 watts when the screen is turned off.
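One common way to enable more of the graphics power management (the i915 parameters from the TODO list above) is a modprobe options file; a sketch, with option names taken from the i915 module, and with no claim that this is what made the difference here:

# /etc/modprobe.d/i915-power.conf: framebuffer compression and panel self-refresh
options i915 enable_fbc=1 enable_psr=1
# rebuild the initramfs and reboot for this to take effect
update-initramfs -u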

Caveats The abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running with everything: it could very well be that some things (I'm looking at you Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W/electron, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there. Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:
Device Minimum Average Max Stdev Note
Screen, 100% 2.4W 2.6W 2.8W N/A
Screen, 1% 30mW 140mW 250mW N/A
Backlight 1 290mW ? ? ? fairly small, all things considered
Backlight 2 890mW 1.2W 3W? 460mW? geometric progression
Backlight 3 1.69W 1.5W 1.8W? 390mW? significant power use
Radios 100mW 250mW N/A N/A
USB-C N/A N/A N/A N/A negligible power drain
USB-A 10mW 10mW ? 10mW almost negligible
DisplayPort 300mW 390mW 600mW N/A not passive
HDMI 380mW 440mW 1W? 20mW not passive
1TB SSD 1.65W 1.79W 2W 12mW significant, probably higher when busy
MicroSD 1.6W 3W 6W 1.93W highest power usage, possibly even higher when busy
Ethernet 1.69W 1.64W 1.76W N/A comparable to the SSD card
So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power even when idle. The USB-A cards are the least concern, sucking out 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI do take a few hundred milliwatts. It looks like USB-A connectors have this fundamental flaw that they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:
It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.
Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.) The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts of power, both just sitting there doing nothing. This confirms claims of 1.4W for the SSD (but not 5W) power usage found elsewhere. The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning. The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed. Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W. The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:
Device Minimum Average Max Stdev Note
Baseline 1.96W 2.01W 2.11W 30mW 1 USB-C, screen off, backlight off, no radios
2 USB-C 1.95W 2.16W 3.69W 430mW USB-C confirmed as mostly passive...
3 USB-C 1.95W 2.16W 3.69W 430mW ... although with extra stdev
1TB SSD 3.72W 3.85W 4.62W 200mW unchanged from before upgrade
1 USB-A 1.97W 2.18W 4.02W 530mW unchanged
2 USB-A 1.97W 2.00W 2.08W 30mW unchanged
3 USB-A 1.94W 1.99W 2.03W 20mW unchanged
MicroSD w/o card 3.54W 3.58W 3.71W 40mW significant improvement! 2-3W power saving!
MicroSD w/ card 3.53W 3.72W 5.23W 370mW new measurement! increased deviation
DisplayPort 2.28W 2.31W 2.37W 20mW unchanged
1 HDMI 2.43W 2.69W 4.53W 460mW unchanged
2 HDMI 2.53W 2.59W 2.67W 30mW unchanged
External USB 3.85W 3.89W 3.94W 30mW new result
Ethernet 3.60W 3.70W 4.91W 230mW unchanged
Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers. Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved. The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage I wrote a quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model; let's find out. My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:
sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]
... to this:
sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]
That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad! In comparison, here is my really old x220, before:
sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now                     =   5070 [mWh]
... after:
sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now                     =   4980 [mWh]
... which is 90 mWh in 10 minutes, or a whopping 540mW, which was possibly okay when this battery was new (62000 mWh, so about 100 hours, or about 5 days), but this battery is almost dead and has only 5210 mWh when full, so only 10 hours standby. And here is the Framework performing a similar test, before:
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full                    =   3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now                     =   2861 [mAh]
... after:
sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now                     =   2812 [mAh]
... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism, and several times the X220's equivalent draw (the X220's 540mW works out to roughly 50mA at that battery's voltage). At this rate, the battery would last on standby only 12 hours!! That is pretty bad. Note that this was done with the following expansion cards:
  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN
Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA). This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA). Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:
sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now                     =   2544 [mAh]
... after:
sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now                     =   2542 [mAh]
... which is 2mAh in 74 seconds, or 97mA, which brings us to a more reasonable 36 hours, or a day and a half. It's still above the x220 power usage, and more than an order of magnitude more than the Purism laptop. It's also far from the 0.4% promised by upstream, which would be 14mA for the 3500mAh battery. It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see by the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life. You can confirm the current sleep mode with:
# cat /sys/power/mem_sleep
s2idle [deep]
In the above, deep is selected. You can change it on the fly with:
printf s2idle > /sys/power/mem_sleep
Here's another test:
sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now                     =   1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now                     =   1613 [mAh]
... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more than two days standby. A longer test:
oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now                     =   3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now                     =   3147 [mAh]
That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days. I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, expansion card power tests show that they do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests: I ran them with whatever card I had plugged in, without paying attention, so that is likely the cause of the high power usage and fluctuations. It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post. In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:
oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now                     =   3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now                     =   3145 [mAh]
Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means about 10 days of suspend time. This is actually pretty good, and far beyond what I was expecting when starting down this journey. So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below. There is also some hope that the Chromebook edition, specifically designed with a specification of 14 days standby time, could bring some firmware improvements back to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.
  • TODO: one final solution here is suspend-then-hibernate, which Windows uses for this
  • TODO: consider implementing the S0ix sleep states, see also troubleshooting
  • TODO: consider https://github.com/intel/pm-graph
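For reference, one way to take charge_now snapshots like the ones quoted in this section is a systemd-sleep hook; this is only a sketch (the file name and syslog tag are illustrative), relying on systemd calling every executable in /usr/lib/systemd/system-sleep/ with "pre" or "post" as its first argument around a suspend:
#!/bin/sh
# /usr/lib/systemd/system-sleep/battery-log (illustrative name)
# $1 is "pre" before suspend and "post" after resume; log the raw
# charge_now value (sysfs reports it in uAh, divide by 1000 for mAh)
# so the drain can be computed from the journal timestamps.
case "$1" in
    pre|post)
        for f in /sys/class/power_supply/BAT*/charge_now; do
            printf '%s = %s [uAh]\n' "$f" "$(cat "$f")" | systemd-cat -t battery-log
        done
        ;;
esac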

Standby expansion cards test results This table is a summary of the more extensive standby-tests I have performed:
Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 sleep=deep nvme.noacpi=1
s2idle 0.29W 18.9mA ~7 sleep=s2idle nvme.noacpi=1
normal nvme 0.31W 20mA ~7 sleep=s2idle without nvme.noacpi=1
1 USB-C 0.23W 15mA ~10
2 USB-C 0.23W 14.9mA same as above
1 USB-A 0.75W 48.7mA 3 +500mW (!!) for the first USB-A card!
2 USB-A 1.11W 72mA 2 +360mW
3 USB-A 1.48W 96mA <2 +370mW
1TB SSD 0.49W 32mA <5 +260mW
MicroSD 0.52W 34mA ~4 +290mW
DisplayPort 0.85W 55mA <3 +620mW (!!)
1 HDMI 0.58W 38mA ~4 +250mW
2 HDMI 0.65W 42mA <4 +70mW (?)
Conclusions:
  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required
  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)
  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)
  • DisplayPort takes a surprising lot of power (620mW), almost double its average runtime usage (390mW)
  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)
  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?
A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage. The update came out in December 2022 and I redid my analysis with the following results:
Device Wattage Amperage Days Note
baseline 0.25W 16mA 9 no cards, same as before upgrade
1 USB-C 0.25W 16mA 9 same as before
2 USB-C 0.25W 16mA 9 same
1 USB-A 0.80W 62mA 3 +550mW!! worse than before
2 USB-A 1.12W 73mA <2 +320mW, on top of the above, bad!
Ethernet 0.62W 40mA 3-4 new result, decent
1TB SSD 0.52W 34mA 4 a bit worse than before (+2mA)
MicroSD 0.51W 22mA 4 same
DisplayPort 0.52W 34mA 4+ upgrade improved by 300mW
1 HDMI ? 38mA ? same
2 HDMI ? 45mA ? a bit worse than before (+3mA)
Normal 1.08W 70mA ~2 Ethernet, 2 USB-C, USB-A
Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports, which is a big problem for my use case. There is a notable improvement on the DisplayPort power consumption which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either. Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller from runtime with fw-ectool, partly documented here. The command would be:
sudo ectool fwchargelimit 80
I looked at building this myself but failed to run it. I opened an RFP in Debian so that we can ship this in Debian, and also documented my work there. Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:
root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------
+++ Battery Care
Plugin: generic
Supported features: none available
+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging
/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)
Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]
One thing that is still missing is the charge threshold data (the (not available) above). There's been some work to make that accessible in August, so stay tuned. This would also make it possible to implement hysteresis support.
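If fw-ectool does work on a given machine, one way to reapply the charge limit at every boot is a small oneshot unit; a sketch, assuming the binary ended up in /usr/local/bin/ectool:
# /etc/systemd/system/battery-charge-limit.service (illustrative name)
[Unit]
Description=Limit battery charge to 80% to reduce wear
[Service]
Type=oneshot
ExecStart=/usr/local/bin/ectool fwchargelimit 80
[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now battery-charge-limit.service.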

Ethernet expansion card The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop".... The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop. I had a serious issue when first trying it: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up when the laptop is running the BIOS POST check and even when the Linux kernel boots. I first thought that the problem was likely related to the powertop service, which I run at boot time to tweak some power saving settings. It seems like this:
echo 'on' > '/sys/bus/usb/devices/4-2/power/control'
... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:
echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'
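The 4-2 bus path above is specific to one machine; here is a small shell sketch to locate the RTL8156's sysfs node by its USB IDs and print its current runtime power-management policy:
for d in /sys/bus/usb/devices/*; do
    [ -e "$d/idVendor" ] || continue
    if [ "$(cat "$d/idVendor"):$(cat "$d/idProduct")" = "0bda:8156" ]; then
        echo "$d -> $(cat "$d/power/control")"
    fi
done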
Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker that shows how the USB_AUTOSUSPEND setting enables the power saving even if the driver doesn't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:
By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.
So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:
USB_AUTOSUSPEND=0
... but that disables auto-suspend on all USB devices, which may hurt other power usage performance. I have found that a combination of:
USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"
and this on the kernel commandline:
usbcore.quirks=0bda:8156:k
... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:
# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"
# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768
Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is strangely satisfying. But running against my Turris Omnia router, I could pretty much max a gigabit fairly easily:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver
The card doesn't require any proprietary firmware blobs which is surprising. Other than the power saving issues, it just works. In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs The Framework does need proprietary firmware to operate. Specifically:
  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree package but currently missing from it (a minimal install sketch follows this list)
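A minimal sketch of pulling in the two Debian packages mentioned above (this assumes the non-free / non-free-firmware components are already enabled in your APT sources):
sudo apt update
sudo apt install firmware-iwlwifi firmware-misc-nonfree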
Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in noticeable (I estimate 200-500ms) lag between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows; it's pretty nasty. One workaround is to delete the two affected firmware files:
cd /lib/firmware && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u
You will get the following warning during build, which is good as it means the problematic firmware is disabled:
W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915
But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast. The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:
[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela
[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure
[Install]
RequiredBy=graphical-session.target
compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra) for a similar experience. I also tried compiz but it would just crash with:
anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0
When running from the base session, I would get this instead:
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'
Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum. Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.
Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get these errors in dmesg:
[   19.534429] Intel(R) Wireless WiFi driver for Linux
[   19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[   19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[   19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[   19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[   19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[   19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[   19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[   19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[   19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[   19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[   19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm
Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what those do or don't do, as the WiFi seems to work well without them. I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them. There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
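As a sketch of trying that low-power option (unverified benefit, as noted; the file name is illustrative):
# /etc/modprobe.d/iwlmvm-power.conf (illustrative name)
options iwlmvm power_scheme=3
The setting takes effect the next time the module is loaded (reboot, or unload and reload iwlmvm and iwlwifi).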

Graphics acceleration After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears. Running in a window gives me odd results, as the gears basically grind to a halt:
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds =  5.022 FPS
Ouch. 5FPS! But interestingly, once the window is in full screen, it does hit the monitor refresh rate:
300 frames in 5.0 seconds = 60.000 FPS
I'm not really a gamer and I'm not normally using any of that fancy graphics acceleration stuff (except maybe my browser does?). I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when doing those simulations. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package). Following this post, I also made sure to have those settings in my about:config in Firefox, or, in user.js:
user_pref("media.ffmpeg.vaapi.enabled", true);
Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird. The guide also mentions installing the intel-media-driver package, but I could not find that in Debian. The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.
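To reproduce the checks above, the packages and commands involved look roughly like this (Debian package names at the time of writing; intel_gpu_top typically needs root):
sudo apt install intel-gpu-tools mesa-utils vainfo
vainfo                        # should list the VAProfile entries the driver supports
glxinfo | grep -i renderer    # confirm the Intel GPU is the GLX renderer
sudo intel_gpu_top            # watch the Render/Video engines while running glxgears or a video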

Chromium / Signal desktop bugs It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:
export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"
It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!). Signal Desktop is a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this instead:
exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"
This was mostly done in this Puppet commit. I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to wayland.

Graphics card hangs I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming. The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29). It could also be that bug because the error message I get is actually:
Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915]) 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled
It's a solid 30-second graphical hang; the keyboard and everything else seems to keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There's all sorts of other workarounds in there, for example this:
echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0"   sudo tee /etc/modprobe.d/i915.conf
from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches. The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

USB docks are hell Note that the monitors are hooked up to angela through a USB-C / Thunderbolt dock from Cable Matters, with the lovely name of 201053-SIL. It has issues, see this blog post for an in-depth discussion.

Shipping details I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late. People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand. As of early 2023, their website announces that laptops ship "within 5 days". I have myself ordered a few expansion cards in November 2022, and they shipped on the same day, arriving 3-4 days later.

The supply pipeline There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied with an email notification:
  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)
This comes from the crowdsourced spreadsheet, which should be updated when the status changes here. I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my door step on September 27th, about 33 days after ordering. It seems current orders are not processed in "batches", but in real time, see this blog post for details on shipping.

Shipping trivia I don't know about the others, but my laptop shipped through no less than four different airplane flights. Here are the hops it took: I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of Fedex shipping hubs. There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to enter on an endless, two week long road trip across the continental US.

Other resources

2 February 2023

John Goerzen: Using Yggdrasil As an Automatic Mesh Fabric to Connect All Your Docker Containers, VMs, and Servers

Sometimes you might want to run Docker containers on more than one host. Maybe you want to run some at one hosting facility, some at another, and so forth. Maybe you'd like to run VMs at various places, and let them talk to Docker containers and bare metal servers wherever they are. And maybe you'd like to be able to easily migrate any of these from one provider to another. There are all sorts of very complicated ways to set all this stuff up. But there's also a simple one: Yggdrasil. My blog post Make the Internet Yours Again With an Instant Mesh Network explains some of the possibilities of Yggdrasil in general terms. Here I want to show you how to use Yggdrasil to solve some of these issues more specifically. Because Yggdrasil is always encrypted, some of the security lifting is done for us.

Background Often in Docker, we connect multiple containers to a single network that runs on a given host. That much is easy. Once you start talking about containers on multiple hosts, then you start adding layers and layers of complexity. Once you start talking multiple providers, maybe multiple continents, then the complexity can increase. And, if you want to integrate everything from bare metal servers to VMs into this, well, there are ways, but they're not easy. I'm a believer in the KISS principle. Let's not make things complex when we don't have to.

Enter Yggdrasil As I've explained before, Yggdrasil can automatically form a global mesh network. This is pretty cool! As most people use it, they join it to the main Yggdrasil network. But Yggdrasil can be run entirely privately as well. You can run your own private mesh, and that's what we'll talk about here. All we have to do is run Yggdrasil inside each container, VM, server, or whatever. We handle some basics of connectivity, and bam! Everything is host- and location-agnostic.

Setup in Docker The installation of Yggdrasil on a regular system is pretty straightforward. Docker is a bit more complicated for several reasons:
  • It blocks IPv6 inside containers by default
  • The default set of permissions doesn't permit you to set up tunnels inside a container
  • It doesn't typically pass multicast (broadcast) packets
Normally, Yggdrasil could auto-discover peers on a LAN interface. However, aside from some esoteric Docker networking approaches, Docker doesn't permit that. So my approach is going to be setting up one or more Yggdrasil router containers on a given Docker host. All the other containers talk directly to the router container and it's all good.

Basic installation In my Dockerfile, I have something like this:
FROM jgoerzen/debian-base-security:bullseye
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list && \
    apt-get --allow-releaseinfo-change update && \
    apt-get -y --no-install-recommends -t bullseye-backports install yggdrasil
...
COPY yggdrasil.conf /etc/yggdrasil/
RUN set -x; \
    chown root:yggdrasil /etc/yggdrasil/yggdrasil.conf && \
    chmod 0750 /etc/yggdrasil/yggdrasil.conf && \
    systemctl enable yggdrasil
The magic parameters to docker run to make Yggdrasil work are:
--cap-add=NET_ADMIN --sysctl net.ipv6.conf.all.disable_ipv6=0 --device=/dev/net/tun:/dev/net/tun
This example uses my docker-debian-base images, so if you use them as well, you'll also need to add their parameters. Note that it is NOT necessary to use --privileged. In fact, due to the network namespaces in use in Docker, this command does not let the container modify the host's networking (unless you use --net=host, which I do not recommend). The --sysctl parameter was the result of a lot of banging my head against the wall. Apparently Docker tries to disable IPv6 in the container by default. Annoying.
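Putting it together, a hypothetical end-to-end invocation for the router container might look like this (the image and network names, and the 12345 port, are placeholders):
docker build -t yggdrasil-router .          # build from the Dockerfile above
docker network create yggnet                # user-defined network shared with the other containers
docker run -d --name ygg-router --network yggnet \
    --cap-add=NET_ADMIN \
    --sysctl net.ipv6.conf.all.disable_ipv6=0 \
    --device=/dev/net/tun:/dev/net/tun \
    -p 12345:12345 \
    yggdrasil-router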

Configuration of the router container(s) The idea is that the router node (or more than one, if you want redundancy) will be the only ones to have an open incoming port. Although the normal Yggdrasil case of directly detecting peers in a broadcast domain is more convenient and more robust, this can work pretty well too. You can, of course, generate a template yggdrasil.conf with yggdrasil -genconf like usual. Some things to note for this one:
  • You'll want to change Listen to something like Listen: ["tls://[::]:12345"] where 12345 is the port number you'll be listening on.
  • You'll want to disable the MulticastInterfaces entirely by just setting it to [] since it doesn't work anyway.
  • If you expose the port to the Internet, you'll certainly want to firewall it to only authorized peers. Setting AllowedPublicKeys is another useful step (see the configuration fragment after this list).
  • If you have more than one router container on a host, each of them will both Listen and act as a client to the others. See below.
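Put together, the relevant fragment of a router's yggdrasil.conf might look like this (generate the full file with yggdrasil -genconf first; the port and the key are placeholders):
Listen: ["tls://[::]:12345"]
MulticastInterfaces: []
AllowedPublicKeys: [
  "0000000000000000000000000000000000000000000000000000000000000000"
]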

Configuration of the non-router nodes Again, you can start with a simple configuration. Some notes here:
  • You'll want to set Peers to something like Peers: ["tls://routernode:12345"] where routernode is the Docker hostname of the router container, and 12345 is its port number as defined above. If you have more than one local router container, you can simply list them all here. Yggdrasil will then fail over nicely if any one of them goes down (a configuration fragment follows this list).
  • Listen should be empty.
  • As above, MulticastInterfaces should be empty.
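Correspondingly, the relevant fragment of a non-router container's yggdrasil.conf might be (routernode being the Docker hostname of the router container, as above):
Peers: ["tls://routernode:12345"]
Listen: []
MulticastInterfaces: []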

Using the interfaces At this point, you should be able to ping6 between your containers. If you have multiple hosts running Docker, you can simply set up the router nodes on each to connect to each other. Now you have direct, secure, container-to-container communication that is host-agnostic! You can also set up Yggdrasil on a bare metal server or VM using standard procedures and everything will just talk nicely!

Security notes Yggdrasil's mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure your internal comms stay private: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously. By disabling multicast discovery, you eliminate the chance for random machines on the LAN to join the mesh. By making sure that you firewall off (outside of Yggdrasil) who can connect to a Yggdrasil node with a listening port, you can authorize only your own machines. And, by setting AllowedPublicKeys on the nodes with listening ports, you can authenticate the Yggdrasil peers. Note that part of the benefit of the Yggdrasil mesh is normally that you don't have to propagate a configuration change to every participatory node; that's a nice thing in general! You can also run a firewall inside your container (I like firehol for this purpose) and aggressively firewall the IPs that are allowed to connect via the Yggdrasil interface. I like to set a stable interface name like ygg0 in yggdrasil.conf, and then it becomes pretty easy to firewall the services. The Docker parameters that allow Yggdrasil to run are also sufficient to run firehol.
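As a sketch of that last point (illustrative only; adapt the allowed services to your own setup): pin the interface name in yggdrasil.conf, then reference it from firehol.
# yggdrasil.conf: give the TUN interface a stable name
IfName: ygg0
# /etc/firehol/firehol.conf fragment: default-deny on the mesh interface,
# allowing only the services meant to be reachable over Yggdrasil
interface ygg0 ygg
    policy drop
    server ssh accept
    client all accept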

Naming Yggdrasil peers You probably don't want to hard-code Yggdrasil IPs all over the place. There are a few solutions:
  • You could run an internal DNS service
  • You can do a bit of scripting around Docker's --add-host command to add things to /etc/hosts (a small example follows this list)
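The latter can be as simple as passing the peer's Yggdrasil address (they live in 200::/7; the address and image name below are made up) when starting a container:
docker run -d --name web --add-host backend:200:1234:5678:9abc:def0:1234:5678:9abc mywebapp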

Other hints & conclusion Here are some other helpful use cases:
  • If you are migrating between hosts, you could leave your reverse proxy up at both hosts, both pointing to the target containers over Yggdrasil. The targets will be automatically found from both sides of the migration while you wait for DNS caches to update and such.
  • This can make services integrate with local networks a lot more painlessly than they might otherwise.
This is just an idea. The point of Yggdrasil is expanding our ideas of what we can do with a network, so here's one such expansion. Have fun!
Note: This post also has a permanent home on my website, where it may be periodically updated.

30 December 2022

Chris Lamb: Favourite books of 2022: Non-fiction

In my three most recent posts, I went over the memoirs and biographies, classics and fiction books that I enjoyed the most in 2022. But in the last of my book-related posts for 2022, I'll be going over my favourite works of non-fiction. Books that just missed the cut here include Adam Hochschild's King Leopold's Ghost (1998) on the role of Leopold II of Belgium in the Congo Free State, Johann Hari's Stolen Focus (2022) (a personal memoir relating to how technology is increasingly fragmenting our attention), Amia Srinivasan's The Right to Sex (2021) (a misleadingly named set of philosophic essays on feminism), Dana Heller et al.'s The Selling of 9/11: How a National Tragedy Became a Commodity (2005), John Berger's mindbending Ways of Seeing (1972) and Louise Richardson's What Terrorists Want (2006).

The Great War and Modern Memory (1975)
Wartime: Understanding and Behavior in the Second World War (1989) Paul Fussell Rather than describe the battles, weapons, geopolitics or big personalities of the two World Wars, Paul Fussell's The Great War and Modern Memory & Wartime are focused instead on how the two wars have been remembered by their everyday participants. Drawing on the memoirs and memories of soldiers and civilians along with a brief comparison with the actual events that shaped them, Fussell's two books are a compassionate, insightful and moving piece of analysis. Fussell primarily sets himself against the admixture of nostalgia and trauma that obscures the origins and unimaginable experience of participating in these wars; two wars that were, in his view, a "perceptual and rhetorical scandal from which total recovery is unlikely." He takes particular aim at the dishonesty of hindsight:
For the past fifty years, the Allied war has been sanitised and romanticised almost beyond recognition by the sentimental, the loony patriotic, the ignorant and the bloodthirsty. I have tried to balance the scales. [And] in unbombed America especially, the meaning of the war [seems] inaccessible.
The author does not engage in any of the customary rose-tinted view of war, yet he remains understanding and compassionate towards those who try to locate a reason within what was quite often senseless barbarism. If anything, his despondency and pessimism about the Second World War (the war that Fussell himself fought in) shines through quite acutely, and this is especially the case in what he chooses to quote from others:
"It was common [ ] throughout the [Okinawa] campaign for replacements to get hit before we even knew their names. They came up confused, frightened, and hopeful, got wounded or killed, and went right back to the rear on the route by which they had come, shocked, bleeding, or stiff. They were forlorn figures coming up to the meat grinder and going right back out of it like homeless waifs, unknown and faceless to us, like unread books on a shelf."
It would take a rather heartless reader to fail to be sobered by this final simile, and an even colder one to view Fussell's citation of such an emotive anecdote to be manipulative. Still, stories and cruel ironies like this one infuse this often-angry book, but it is not without astute and shrewd analysis as well, especially on the many qualitative differences between the two conflicts that simply cannot be captured by facts and figures alone. For example:
A measure of the psychological distance of the Second [World] War from the First is the rarity, in 1914 1918, of drinking and drunkenness poems.
Indeed so. In fact, what makes Fussell's project so compelling and perhaps even unique is that he uses these non-quantitive measures to try and take stock of what happened. After all, this was a war conducted by humans, not the abstract school of statistics. And what is the value of a list of armaments destroyed by such-and-such a regiment when compared with truly consequential insights into both how the war affected, say, the psychology of postwar literature ("Prolonged trench warfare, whether enacted or remembered, fosters paranoid melodrama, which I take to be a primary mode in modern writing."), the specific words adopted by combatants ("It is a truism of military propaganda that monosyllabic enemies are easier to despise than others") as well as the very grammar of interaction:
The Field Service Post Card [in WW1] has the honour of being the first widespread exemplary of that kind of document which uniquely characterises the modern world: the "Form". [And] as the first widely known example of dehumanised, automated communication, the post card popularised a mode of rhetoric indispensable to the conduct of later wars fought by great faceless conscripted armies.
And this wouldn't be a book review without argument-ending observations that:
Indicative of the German wartime conception [of victory] would be Hitler and Speer's elaborate plans for the ultimate reconstruction of Berlin, which made no provision for a library.
Our myths about the two world wars possess an undisputed power, in part because they contain an essential truth: the atrocities committed by Germany and its allies were not merely extreme or revolting, but their full dimensions (embodied in the Holocaust and the Holodomor) remain essentially inaccessible within our current ideological framework. Yet the two wars are better understood as an abyss in which we were all dragged into the depths of moral depravity, rather than a battle pitched by the forces of light against the forces of darkness. Fussell is one of the few observers that can truly accept and understand this truth and is still able to speak to us cogently on the topic from the vantage point of experience. The Second World War, which looms so large in our contemporary understanding of the modern world (see below), may have been necessary and unavoidable, but Fussell convinces his reader that it was morally complicated "beyond the power of any literary or philosophic analysis to suggest," and that the only way to maintain a naïve belief in the myth that these wars were a Manichaean fight between good and evil is to overlook reality. There are many texts on the two World Wars that can either stir the intellect or move the emotions, but Fussell's two books do both. A uniquely perceptive and intelligent commentary; outstanding.

Longitude (1995) Dava Sobel Since Man first decided to sail the oceans, knowing one's location has always been critical. Yet doing so reliably used to be a serious problem: if you didn't know where you were, you were far more likely to die and/or lose your valuable cargo. But whilst finding one's latitude (i.e. your north-south position) had effectively been solved by the beginning of the 17th century, finding one's (east-west) longitude was far from trustworthy in comparison. This book, first published in 1995, is therefore something of an anachronism. As in, we readily use the GPS facilities of our phones today without hesitation, so we find it difficult to imagine a reality in which knowing something fundamental like your own location is essentially unthinkable. It became clear in the 18th century, though, that in order to accurately determine one's longitude, what you actually needed was an accurate clock. In Longitude, therefore, we read of the remarkable story of John Harrison and his quest to create a timepiece that would not only keep time during a long sea voyage but would survive the rough ocean conditions as well. Self-educated and a carpenter by trade, Harrison made a number of important breakthroughs in keeping accurate time at sea, and Longitude describes his novel breakthroughs in a way that is both engaging and without talking down to the reader. Still, this book covers much more than that, including the development of accurate longitude going hand-in-hand with advancements in cartography as well as in scientific experiments to determine the speed of light: experiments that led to the formulation of quantum mechanics. It also outlines the work being done by Harrison's competitors. 'Competitors' is indeed the correct word here, as Parliament offered a huge prize to whoever could create such a device, and the ramifications of this tremendous financial incentive are an essential part of this story. For the most part, though, Longitude sticks to the story of Harrison and his evolving obsession with creating the perfect timepiece. Indeed, one reason that Longitude is so resonant with readers is that many of the tropes of the archetypical 'English inventor' are embedded within Harrison himself. That is to say, here is a self-made man pushing against the establishment of the time, with his groundbreaking ideas being underappreciated in his life, or dishonestly purloined by his intellectual inferiors. At the level of allegory, then, I am minded to interpret this portrait of Harrison as a symbolic distillation of postwar Britain: a nation acutely embarrassed by the loss of the Empire that is now repositioning itself as a resourceful but plucky underdog; a country that, with a combination of the brains of boffins and a healthy dose of charisma and PR, can still keep up with the big boys. (It is this same search for postimperial meaning I find in the fiction of John le Carré, and, far more famously, in the James Bond franchise.) All of this is left to the reader, of course, as what makes Longitude singularly compelling is its gentle manner and tone. Indeed, at times it was as if the doyenne of sci-fi Ursula K. LeGuin had a sideline in popular non-fiction. I realise it's a mark of critical distinction to downgrade the importance of popular science in favour of erudite academic texts, but Longitude is ample evidence that so-called 'pop' science need not be patronising or reductive at all.

Closed Chambers: The Rise, Fall, and Future of the Modern Supreme Court (1998) Edward Lazarus After the landmark decision by the U.S. Supreme Court in Dobbs v. Jackson Women's Health Organization that ended the Constitutional right to abortion conferred by Roe v. Wade, I prioritised a few books in the queue about the judicial branch of the United States. One of these books was Closed Chambers, which attempts to assay, according to its subtitle, "The Rise, Fall and Future of the Modern Supreme Court". This book is not merely a learned guide to the history and functioning of the Court (although it is completely creditable in this respect); it's actually an 'insider' view of the workings of the institution, as Lazarus was a clerk for Justice Harry Blackmun during the October term of 1988. Lazarus has therefore combined his experience as a clerk and his personal reflections (along with a substantial body of subsequent research) in order to communicate the collapse in comity between the Justices. Part of this book is therefore a pure history of the Court, detailing its important nineteenth-century judgements (such as Dred Scott which ruled that the Constitution did not consider Blacks to be citizens; and Plessy v. Ferguson which failed to find protection in the Constitution against racial segregation laws), as well as many twentieth-century cases that touch on the rather technical principle of substantive due process. Other layers of Lazarus' book are explicitly opinionated, however, and they capture the author's assessment of the Court's actions in the past and present [1998] day. Given the role in which he served at the Court, particular attention is given by Lazarus to the function of its clerks. These are revealed as being far more than the mere amanuenses they were hitherto believed to be. Indeed, the book is potentially unique in its claim that the clerks have played a pivotal role in the deliberations, machinations and eventual rulings of the Court. By implication, then, the clerks have played a crucial role in the internal controversies that surround many of the high-profile Supreme Court decisions: decisions that, to the outsider at least, are presented as disinterested interpretations of the Constitution of the United States. This is of especial importance given that, to Lazarus, "for all the attention we now pay to it, the Court remains shrouded in confusion and misunderstanding." Throughout his book, Lazarus complicates the commonplace view that the Court is divided into two simple right vs. left political factions, and instead documents an ever-evolving series of loosely held but strongly felt cabals, quid pro quo exchanges, outright equivocation and pure personal prejudices. (The age and concomitant illnesses of the Justices also appear to have a not insignificant effect on the Court's rulings as well.) In other words, Closed Chambers is not a book that will be read in a typical civics class in America, and the only time the book resorts to the customary breathless rhetoric about the US federal government is in its opening chapter:
The Court itself, a Greek-style temple commanding the crest of Capitol Hill, loomed above them in the dim light of the storm. Set atop a broad marble plaza and thirty-six steps, the Court stands in splendid isolation appropriate to its place at the pinnacle of the national judiciary, one of the three independent and "coequal" branches of American government. Once dubbed the Ivory Tower by architecture critics, the Court has a Corinthian colonnade and massive twenty-foot-high bronze doors that guard the single most powerful judicial institution in the Western world. Lights still shone in several offices to the right of the Court's entrance, and [ ]
Et cetera, et cetera. But, of course, this encomium to the inherent 'nobility' of the Supreme Court is quickly revealed to be a narrative foil, as Lazarus soon razes this dangerously naïve conception to the ground:
[The] institution is [now] broken into unyielding factions that have largely given up on a meaningful exchange of their respective views or, for that matter, a meaningful explication or defense of their own views. It is of Justices who in many important cases resort to transparently deceitful and hypocritical arguments and factual distortions as they discard judicial philosophy and consistent interpretation in favor of bottom-line results. This is a Court so badly splintered, yet so intent on lawmaking, that shifting 5-4 majorities, or even mere pluralities, rewrite whole swaths of constitutional law on the authority of a single, often idiosyncratic vote. It is also a Court where Justices yield great and excessive power to immature, ideologically driven clerks, who in turn use that power to manipulate their bosses and the institution they ostensibly serve.
Lazurus does not put forward a single, overarching thesis, but in the final chapters, he does suggest a potential future for the Court:
In the short run, the cure for what ails the Court lies solely with the Justices. It is their duty, under the shield of life tenure, to recognize the pathologies affecting their work and to restore the vitality of American constitutionalism. Ultimately, though, the long-term health of the Court depends on our own resolve on whom [we] select to join that institution.
Back in 1998, Lazarus might have had room for this qualified optimism. But from the vantage point of 2022, it appears that the "resolve" of the United States citizenry was not muscular enough to meet his challenge. After all, Lazarus was writing before Bush v. Gore in 2000, which arrogated to the judicial branch the ability to decide a presidential election; the disillusionment of Barack Obama's failure to appoint a replacement for Scalia; and many other missteps in the Court as well. All of which have now been compounded by the Trump administration's appointment of three Republican-friendly justices to the Court, including hypocritically appointing Justice Barrett a mere 38 days before the 2020 election. And, of course, the leaking and ruling in Dobbs v. Jackson, the true extent of which has not yet been felt. Not a bit of this is Lazarus' fault, of course, but the Court's recent decisions (as well as the liberal hagiographies of 'RBG') must perforce affect one's reading of the concluding chapters. The other slight defect of Closed Chambers is that, whilst it often implies the importance of the federal and state courts within the judiciary, it only briefly positions the Supreme Court's decisions in relation to what was happening in the House, Senate and White House at the time. This seems to be increasingly relevant as time goes on: after all, it seems fairly clear even to this Brit that relying on an activist Supreme Court to enact progressive laws must be interpreted as a failure of the legislative branch to overcome the perennial problems of the filibuster, culture wars and partisan bickering. Nevertheless, Lazarus' book is in equal parts ambitious, opinionated, scholarly and, dare I admit it, wonderfully gossipy. By juxtaposing history, memoir, and analysis, Closed Chambers combines an exacting evaluation of the Court's decisions with a lively portrait of the intellectual and emotional intensity that has grown within the Supreme Court's pseudo-monastic environment, all while it struggles with the most impactful legal issues of the day. This book is an excellent and well-written achievement that will likely never be repeated, and a must-read for anyone interested in this ever-increasingly important branch of the US government.

Crashed: How a Decade of Financial Crises Changed the World (2018)
Shutdown: How Covid Shook the World's Economy (2021)
Adam Tooze
The economic historian Adam Tooze has often been labelled as an unlikely celebrity, but in the fourteen years since the global financial crisis of 2008, a growing audience has been looking for answers about the various failures of the modern economy. Tooze, a professor of history at New York's Columbia University, has written much that is penetrative and thought-provoking on this topic, and as a result, he has generated something of a cult following amongst economists, historians and the online left. I actually read two Tooze books this year. The first, Crashed (2018), catalogues the scale of government intervention required to prop up global finance after the 2008 financial crisis, and it characterises the different ways that countries around the world failed to rise to the situation, such as doing far too little, or taking action far too late. The connections between the high-risk subprime loans, credit default swaps and the resulting liquidity crisis in the US in late 2008 are fairly well known today, in part thanks to films such as Adam McKay's 2015 The Big Short and much improved economic literacy in media reportage. But Crashed makes the implicit claim that, whilst the specific and structural origins of the 2008 crisis are worth scrutinising in exacting detail, it is the reaction of states in the months and years after the crash that has been overlooked. After all, this is a reaction that has not only shaped a new economic order, it has created one that does not fit any conventional idea about the way the world 'ought' to be run. Tooze connects the original American banking crisis to the (multiple) European debt crises and to a larger crisis of liberalism. Indeed, Tooze somehow manages to cover all these topics and more, weaving in Trump, Brexit and Russia's 2014 annexation of Crimea, as well as the evolving role of China in the post-2008 economic order. Where Crashed focused on the constellation of consequences that followed the events of 2008, Shutdown is a clear and comprehensive account of the way the world responded to the economic impact of Covid-19. The figures are often jaw-dropping: soon after the disease spread around the world, 95% of the world's economies contracted simultaneously, and at one point, the global economy shrank by approximately 20%. Tooze's keen and sobering analysis of what happened is made all the more remarkable by the fact that it came out whilst the pandemic was still unfolding. In fact, this leads quickly to one of the book's few flaws: by being published so quickly, Shutdown prematurely over-praises China's 'zero Covid' policy, and these remarks will make a reader today squirm in their chair. Still, despite the regularity of these references (after all, mentioning China is very useful when one is directly comparing economic figures in early 2021, for example), these are actually minor blemishes on the book's overall thesis. That is to say, Shutdown is not merely a retelling of what happened in such-and-such a country during the pandemic; it offers, in effect, a prediction about what might be coming next. Whilst the economic responses to Covid averted what could easily have been another Great Depression (and thus showed that some lessons from 2008 had been learned), this was only achieved by truly discarding the economic rule book.
The by-product of inverting this set of written and unwritten conventions that have governed the world for the past 50 years, this 'Washington consensus' if you will, has yet to be fully felt. Of course, there are many parallels between these two books by Tooze. Both the liquidity crisis outlined in Crashed and the economic response to Covid in Shutdown exposed the fact that one of the central tenets of the modern economy, i.e. that financial markets can be trusted to regulate themselves, was entirely untrue, and likely was false from the very beginning. And whilst Adam Tooze does not offer a singular piercing insight (conveying a sense of rigorous mastery instead), he may as well be asking whether we're simply going to lurch along from one crisis to the next, relying on the technocrats in power to fix problems when everything blows up again. The answer may very well be yes.

Looking for the Good War: American Amnesia and the Violent Pursuit of Happiness (2021)
Elizabeth D. Samet
Elizabeth D. Samet's Looking for the Good War answers the following question: what would be the result if you asked a professor of English to disentangle the complex mythology we have about WW2 in the context of the recent US exit from Afghanistan? Samet's book acts as a twenty-first-century update of a kind to Paul Fussell's two books (reviewed above), as well as a deeper meditation on the idea that each new war is seen through the lens of the previous one. Indeed, like The Great War and Modern Memory (1975) and Wartime (1989), Samet's book is a perceptive work of demystification, but whilst Fussell seems to have been inspired by his own traumatic war experience, Samet is informed not only by her teaching of West Point military cadets but also by the physical and ontological wars that have occurred during her own life. A more scholarly and dispassionate text is the result of Samet's relative distance from armed combat, but that doesn't mean Looking for the Good War lacks energy or inspiration. Samet shares John Adams' belief that no political project can entirely shed the innate corruptions of power and ambition, and so it is crucial to analyse and re-analyse the role of WW2 in contemporary American life. She is surely correct that the Second World War has been universally elevated as a special, 'good' war. Even those with exceptionally giddy minds seem to treat WW2 as hallowed:
It is nevertheless telling that one of the few occasions to which Trump responded with any kind of restraint while he was in office was the 75th anniversary of D-Day in 2019.
What is the source of this restraint, and what has nurtured its growth in the eight decades since WW2 began? Samet posits several reasons for this, including the fact that almost all of the media about the Second World War is not only suffused with symbolism and nostalgia but, less obviously, it has been made by people who have no experience of the events that they depict. Take Stephen Ambrose, the historian whose book lies behind Steven Spielberg's Band of Brothers miniseries: "I was 10 years old when the war ended," Samet quotes of Ambrose. "I thought the returning veterans were giants who had saved the world from barbarism. I still think so. I remain a hero worshiper." If Looking for the Good War has a primary thesis, then, it is that childhood hero worship is no basis for a system of government, let alone a crusading foreign policy. There is a straight line (to quote this book's subtitle) from the "American Amnesia" that obscures the reality of war to the "Violent Pursuit of Happiness." Samet's book doesn't merely provide a modern appendix to Fussell's two works, however, as it adds further layers and dimensions he overlooked. For example, Samet provides some excellent insight on the role of Western, gangster and superhero movies, and she is especially good when looking at noir films as a kind of kaleidoscopic response to the Second World War:
Noir is a world ruled by bad decisions but also by bad timing. Chance, which plays such a pivotal role in war, bleeds into this world, too.
Samet rightfully weaves the role of women into the narrative as well. Women in film noir are often celebrated as 'independent' and sassy, correctly reflecting their newly-found independence gained during WW2. But these 'liberated' roles are not exactly a ringing endorsement of this independence: the 'femme fatale' and the 'tart', etc., reflect a kind of conditional freedom permitted to women by a post-War culture which is still wedded to an outmoded honour culture. In effect, far from being novel and subversive, these roles for women actually underwrote the ambient cultural disapproval of women's presence in the workforce. Samet later connects this highly-conditional independence with the liberation of Afghan women, which:
is inarguably one of the more palatable outcomes of our invasion, and the protection of women's rights has been invoked on the right and the left as an argument for staying the course in Afghanistan. How easily consequence is becoming justification. How flattering it will be one day to reimagine it as original objective.
Samet has ensured her book has a predominantly US angle as well, for she ends with a chapter on the pseudohistorical Lost Cause of the Civil War. The legacy of the Civil War is still visible in the physical phenomena of Confederate statues, but it also exists in deep-rooted racial injustice that has been shrouded in euphemism and other psychological devices for over 150 years. Samet believes that a key part of what drives the American mythology about the Second World War is the way in which it subconsciously cleanses the horrors of brother-on-brother murder that were seen in the Civil War. This is a book that is not only of interest to historians of the Second World War; it is a work for anyone who wishes to understand almost any American historical event, social issue, politician or movie that has appeared since the end of WW2. That is, for better or worse, everyone on earth.

5 November 2022

Anuradha Weeraman: Getting started with Linkerd

If you've done anything in the Kubernetes space in recent years, you've most likely come across the words "Service Mesh". It's backed by a set of mature technologies that provides cross-cutting networking, security and infrastructure capabilities to workloads running in Kubernetes, in a manner that is transparent to the actual workload. This abstraction enables application developers to not worry about building in otherwise sophisticated capabilities for networking, routing, circuit-breaking and security, and to simply rely on the services offered by the service mesh.
In this post, I'll be covering Linkerd, which is an alternative to Istio. It went through a significant rewrite a few years back when it transitioned from a JVM-based implementation to a Go-based control plane and a Rust-based data plane, and it is now part of the CNCF and backed by Buoyant. It has proven itself widely in production workloads and has a healthy community and release cadence. It provides its capabilities through a sidecar container that communicates with the Linkerd control plane, which allows central management of policy, telemetry, mutual TLS, traffic routing, shaping, retries, load balancing, circuit-breaking and other cross-cutting concerns before the traffic hits the application container. This makes implementing these concerns in application services much simpler, as they are managed by the container orchestrator and the service mesh. I covered Istio in a prior post a few years back, and much of that content is still applicable here, if you'd like to have a look.
Here are the broad architectural components of Linkerd:
The components are separated into the control plane and the data plane.
The control plane components live in their own namespace and consist of a controller that the Linkerd CLI interacts with via the Kubernetes API. The destination service is used for service discovery, TLS identity, access-control policy for inter-service communication, and service profile information on routing, retries and timeouts. The identity service acts as the Certificate Authority, which responds to Certificate Signing Requests (CSRs) from proxies for initialization and for service-to-service encrypted traffic. The proxy injector is an admission webhook that automatically injects the Linkerd proxy sidecar and the init container into a pod when the linkerd.io/inject: enabled annotation is present on the namespace or workload.
On the data plane side are two components. First, the init container, which is responsible for automatically forwarding incoming and outgoing traffic through the Linkerd proxy via iptables rules. Second, the Linkerd proxy, a lightweight micro-proxy written in Rust, which is the data plane itself.
I will be walking you through the setup of Linkerd (2.12.2 at the time of writing) on a Kubernetes cluster.
Let's see what's running on the cluster currently. This assumes you have a cluster running and kubectl is installed and available on the PATH.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (119m ago) 7d
kube-system calico-node-6ptsh 1/1 Running 2 (119m ago) 7d
kube-system calico-node-7x5j8 1/1 Running 2 (119m ago) 7d
kube-system calico-node-qlnf6 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (119m ago) 7d
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (119m ago) 7d
kube-system etcd-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-apiserver-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-4n9b7 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-k4rzv 1/1 Running 2 (119m ago) 7d
kube-system kube-proxy-lz2dd 1/1 Running 2 (119m ago) 7d
kube-system kube-scheduler-k8s-master 1/1 Running 2 (119m ago) 7d
The first step would be to set up the Linkerd CLI:
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
On most systems, this should be sufficient to setup the CLI. You may need to restart your terminal to load the updated paths. If you have a non-standard configuration and linkerd is not found after the installation, add the following to your PATH to be able to find the cli:
export PATH=$PATH:~/.linkerd2/bin/
At this point, checking the version would give you the following:
$ linkerd version
Client version: stable-2.12.2
Server version: unavailable
Setting up Linkerd Control Plane
Before installing Linkerd on the cluster, run the following step to check the cluster for prerequisites:
$ linkerd check --pre
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
pre-kubernetes-setup
--------------------
control plane namespace does not already exist
can create non-namespaced resources
can create ServiceAccounts
can create Services
can create Deployments
can create CronJobs
can create ConfigMaps
can create Secrets
can read Secrets
can read extension-apiserver-authentication configmap
no clock skew detected
linkerd-version
---------------
can determine the latest version
cli is up-to-date
Status check results are √
All the prerequisites appear to be good right now, and so installation can proceed.
The first step of the installation is to set up the Custom Resource Definitions (CRDs) that Linkerd requires. The linkerd CLI only prints the resource YAMLs to standard output and does not create them directly in Kubernetes, so you would need to pipe the output to kubectl apply to create the resources in the cluster that you're working with.
$ linkerd install --crds | kubectl apply -f -
Rendering Linkerd CRDs...
Next, run linkerd install | kubectl apply -f - to install the control plane.
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/meshtlsauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/networkauthentications.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serverauthorizations.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/servers.policy.linkerd.io created
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io created
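Because the linkerd CLI only renders YAML, you can equally capture the output to a file first and apply it from there, which is handy if you want to review the manifests or keep them under version control. A minimal sketch of the same step (the file name is an arbitrary choice):
$ linkerd install --crds > linkerd-crds.yaml
$ kubectl apply -f linkerd-crds.yaml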
Next, install the Linkerd control plane components in the same manner, this time without the --crds switch:
$ linkerd install | kubectl apply -f -
namespace/linkerd created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-destination created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-destination created
serviceaccount/linkerd-destination created
secret/linkerd-sp-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-sp-validator-webhook-config created
secret/linkerd-policy-validator-k8s-tls created
validatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-policy-validator-webhook-config created
clusterrole.rbac.authorization.k8s.io/linkerd-policy created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-destination-policy created
role.rbac.authorization.k8s.io/linkerd-heartbeat created
rolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-heartbeat created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-heartbeat created
serviceaccount/linkerd-heartbeat created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-proxy-injector created
serviceaccount/linkerd-proxy-injector created
secret/linkerd-proxy-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-proxy-injector-webhook-config created
configmap/linkerd-config created
secret/linkerd-identity-issuer created
configmap/linkerd-identity-trust-roots created
service/linkerd-identity created
service/linkerd-identity-headless created
deployment.apps/linkerd-identity created
service/linkerd-dst created
service/linkerd-dst-headless created
service/linkerd-sp-validator created
service/linkerd-policy created
service/linkerd-policy-validator created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
secret/linkerd-config-overrides created
Kubernetes will start spinning up the control plane components and you should see the following when you list the pods:
$ kubectl get pods -A
...
linkerd linkerd-destination-67b9cc8749-xqcbx 4/4 Running 0 69s
linkerd linkerd-identity-59b46789cc-ntfcx 2/2 Running 0 69s
linkerd linkerd-proxy-injector-7fc85556bf-vnvw6 1/2 Running 0 69s
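If you are scripting this installation, you may want to block until the control plane pods are ready before continuing. A small sketch using kubectl wait (the timeout value here is an arbitrary choice):
$ kubectl wait --for=condition=Ready pod --all -n linkerd --timeout=300s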
The components are running in the new linkerd namespace. To verify the setup, run a check:
$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
can initialize the client
can query the Kubernetes API
kubernetes-version
------------------
is running the minimum Kubernetes API version
is running the minimum kubectl version
linkerd-existence
-----------------
'linkerd-config' config map exists
heartbeat ServiceAccount exist
control plane replica sets are ready
no unschedulable pods
control plane pods are ready
cluster networks contains all pods
cluster networks contains all services
linkerd-config
--------------
control plane Namespace exists
control plane ClusterRoles exist
control plane ClusterRoleBindings exist
control plane ServiceAccounts exist
control plane CustomResourceDefinitions exist
control plane MutatingWebhookConfigurations exist
control plane ValidatingWebhookConfigurations exist
proxy-init container runs as root user if docker container runtime is used
linkerd-identity
----------------
certificate config is valid
trust anchors are using supported crypto algorithm
trust anchors are within their validity period
trust anchors are valid for at least 60 days
issuer cert is using supported crypto algorithm
issuer cert is within its validity period
issuer cert is valid for at least 60 days
issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
proxy-injector webhook has valid cert
proxy-injector cert is valid for at least 60 days
sp-validator webhook has valid cert
sp-validator cert is valid for at least 60 days
policy-validator webhook has valid cert
policy-validator cert is valid for at least 60 days
linkerd-version
---------------
can determine the latest version
cli is up-to-date
control-plane-version
---------------------
can retrieve the control plane version
control plane is up-to-date
control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
control plane proxies are healthy
control plane proxies are up-to-date
control plane proxies and cli versions match
Status check results are √
Everything looks good.
Setting up the Viz Extension
At this point, the required components for the service mesh are set up, but let's also install the viz extension, which provides good visualization capabilities that will come in handy subsequently. Once again, linkerd uses the same pattern for installing the extension.
$ linkerd viz install | kubectl apply -f -
namespace/linkerd-viz created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-metrics-api created
serviceaccount/metrics-api created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-prometheus created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-admin created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-delegator created
serviceaccount/tap created
rolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-tap-auth-reader created
secret/tap-k8s-tls created
apiservice.apiregistration.k8s.io/v1alpha1.tap.linkerd.io created
role.rbac.authorization.k8s.io/web created
rolebinding.rbac.authorization.k8s.io/web created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-check created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-admin created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-viz-web-api created
serviceaccount/web created
server.policy.linkerd.io/admin created
authorizationpolicy.policy.linkerd.io/admin created
networkauthentication.policy.linkerd.io/kubelet created
server.policy.linkerd.io/proxy-admin created
authorizationpolicy.policy.linkerd.io/proxy-admin created
service/metrics-api created
deployment.apps/metrics-api created
server.policy.linkerd.io/metrics-api created
authorizationpolicy.policy.linkerd.io/metrics-api created
meshtlsauthentication.policy.linkerd.io/metrics-api-web created
configmap/prometheus-config created
service/prometheus created
deployment.apps/prometheus created
service/tap created
deployment.apps/tap created
server.policy.linkerd.io/tap-api created
authorizationpolicy.policy.linkerd.io/tap created
clusterrole.rbac.authorization.k8s.io/linkerd-tap-injector created
clusterrolebinding.rbac.authorization.k8s.io/linkerd-tap-injector created
serviceaccount/tap-injector created
secret/tap-injector-k8s-tls created
mutatingwebhookconfiguration.admissionregistration.k8s.io/linkerd-tap-injector-webhook-config created
service/tap-injector created
deployment.apps/tap-injector created
server.policy.linkerd.io/tap-injector-webhook created
authorizationpolicy.policy.linkerd.io/tap-injector created
networkauthentication.policy.linkerd.io/kube-api-server created
service/web created
deployment.apps/web created
serviceprofile.linkerd.io/metrics-api.linkerd-viz.svc.cluster.local created
serviceprofile.linkerd.io/prometheus.linkerd-viz.svc.cluster.local created
A few seconds later, you should see the following in your pod list:
$ kubectl get pods -A
...
linkerd-viz prometheus-b5865f776-w5ssf 1/2 Running 0 35s
linkerd-viz tap-64f5c8597b-rqgbk 2/2 Running 0 35s
linkerd-viz tap-injector-7c75cfff4c-wl9mx 2/2 Running 0 34s
linkerd-viz web-8c444745-jhzr5 2/2 Running 0 34s
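Before opening the dashboard, you can optionally run the extension's own health checks with the viz subcommand of the same CLI (a quick sketch; output omitted here):
$ linkerd viz check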
The viz components live in the linkerd-viz namespace. You can now check out the viz dashboard:
$ linkerd viz dashboard
Linkerd dashboard available at:
http://localhost:50750
Grafana dashboard available at:
http://localhost:50750/grafana
Opening Linkerd dashboard in the default browser
Opening in existing browser session.
The Meshed column indicates the workloads that are currently integrated with the Linkerd control plane. As you can see, there are no application deployments running right now.
Injecting the Linkerd Data Plane components
There are two ways to integrate Linkerd with the application containers:
1. by manually injecting the Linkerd data plane components
2. by instructing Kubernetes to automatically inject the data plane components
Inject Linkerd data plane manually
Let's try the first option. Below is a simple nginx app that I will deploy into the cluster:
$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
$ kubectl apply -f deploy.yaml
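Before switching to the dashboard, a quick way to confirm that no proxy has been injected yet is to look at the READY column of the pods (a small sketch; the generated pod names will differ in your cluster). Each pod should report 1/1, i.e. only the nginx container is running:
$ kubectl get pods -l app=nginx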
Back in the viz dashboard, I do see the workload deployed, but it isn't currently communicating with the Linkerd control plane, and so doesn't show any metrics, and the Meshed count is 0:
Looking at the Pod's YAML, I can see that it only includes the nginx container:
$ kubectl get pod nginx-deployment-cd55c47f5-cgxw2 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: aee0295dda906f7935ce5c150ae30360005f5330e98c75a550b7cc0d1532f529
cni.projectcalico.org/podIP: 172.16.36.89/32
cni.projectcalico.org/podIPs: 172.16.36.89/32
creationTimestamp: "2022-11-05T19:35:12Z"
generateName: nginx-deployment-cd55c47f5-
labels:
app: nginx
pod-template-hash: cd55c47f5
name: nginx-deployment-cd55c47f5-cgxw2
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-cd55c47f5
uid: b604f5c4-f662-4333-aaa0-bd1a2b8b08c6
resourceVersion: "22979"
uid: 8fe30214-491b-4753-9fb2-485b6341376c
spec:
containers:
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2bt6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-2bt6z
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:12Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:16Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:35:13Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://f088f200315b44cbeed16499aba9b2d1396f9f81645e53b032d4bfa44166128a
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:35:15Z"
hostIP: 192.168.2.216
phase: Running
podIP: 172.16.36.89
podIPs:
- ip: 172.16.36.89
qosClass: BestEffort
startTime: "2022-11-05T19:35:12Z"
Let's directly inject the Linkerd data plane into this running workload. We do this by retrieving the YAML of the deployment, piping it to the linkerd CLI to inject the necessary components, and then piping the changed resources to kubectl apply.
$ kubectl get deploy nginx-deployment -o yaml | linkerd inject - | kubectl apply -f -
deployment "nginx-deployment" injected
deployment.apps/nginx-deployment configured
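If you would rather keep the injection as part of your deployment workflow instead of patching the live object, the linkerd inject command also accepts a manifest file. A minimal sketch using the deploy.yaml from above:
$ linkerd inject deploy.yaml | kubectl apply -f -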
Back in the viz dashboard, the workload is now integrated into the Linkerd control plane.
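The same information is also available from the CLI. For example (a sketch, assuming the viz extension installed earlier), linkerd viz stat reports the meshed pod count together with success rate, request rate and latency for each deployment:
$ linkerd viz stat deploy -n default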
Looking at the updated Pod definition, we see a number of changes that Linkerd has injected to allow it to integrate with the control plane. Let's have a look:
$ kubectl get pod nginx-deployment-858bdd545b-55jpf -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1ec3d345f859be8ead0374a7e880bcfdb9ba74a121b220a6fccbd342ac4b7ea8
cni.projectcalico.org/podIP: 172.16.36.90/32
cni.projectcalico.org/podIPs: 172.16.36.90/32
linkerd.io/created-by: linkerd/proxy-injector stable-2.12.2
linkerd.io/inject: enabled
linkerd.io/proxy-version: stable-2.12.2
linkerd.io/trust-root-sha256: 354fe6f49331e8e03d8fb07808e00a3e145d2661181cbfec7777b41051dc8e22
viz.linkerd.io/tap-enabled: "true"
creationTimestamp: "2022-11-05T19:44:15Z"
generateName: nginx-deployment-858bdd545b-
labels:
app: nginx
linkerd.io/control-plane-ns: linkerd
linkerd.io/proxy-deployment: nginx-deployment
linkerd.io/workload-ns: default
pod-template-hash: 858bdd545b
name: nginx-deployment-858bdd545b-55jpf
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-858bdd545b
uid: 2e618972-aa10-4e35-a7dd-084853673a80
resourceVersion: "23820"
uid: 62f1857a-b701-4a19-8996-b5b605ff8488
spec:
containers:
- env:
- name: _pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: LINKERD2_PROXY_LOG
value: warn,linkerd=info
- name: LINKERD2_PROXY_LOG_FORMAT
value: plain
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: linkerd-policy.linkerd.svc.cluster.local.:8090
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: $(_pod_ns):$(_pod_name)
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: 100ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: 1000ms
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: 0.0.0.0:4190
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: 0.0.0.0:4191
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: 0.0.0.0:4143
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
value: "80"
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
value: svc.cluster.local.
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: 25,587,3306,4444,5432,6379,9300,11211
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value:
"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)"
- name: _pod_sa
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: linkerd
- name: _l5d_trustdomain
value: cluster.local
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
value:
-----BEGIN CERTIFICATE-----
MIIBiDCCAS6gAwIBAgIBATAKBggqhkjOPQQDAjAcMRowGAYDVQQDExFpZGVudGl0
eS5saW5rZXJkLjAeFw0yMjExMDUxOTIxMDlaFw0yMzExMDUxOTIxMjlaMBwxGjAY
BgNVBAMTEWlkZW50aXR5LmxpbmtlcmQuMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcD
QgAE8AgxbWWa1qgEgN3ykFAOJ3sw9nSugUk1N5Qfvo6jXX/8/TZUW0ddko/N71+H
EcKc72kK0tlclj8jDi3pzJ4C0KNhMF8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBThSr0yAj5joW7pj/NZPYcfIIepbzAKBggqhkjOPQQDAgNIADBFAiAomg0TVn6N
UxhOyzZdg848lAvH0Io9Ra/Ef2hxZGN0LgIhAIKjrsgDUqZA8XHiiciYYicxFnKr
Tw5yj9gBhVAgYCaB
-----END CERTIFICATE-----
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
value: /var/run/secrets/tokens/linkerd-identity-token
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_TAP_SVC_NAME
value: tap.linkerd-viz.serviceaccount.identity.linkerd.cluster.local
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
- --timeout=2m
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: 4191
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: linkerd-proxy
ports:
- containerPort: 4143
name: linkerd-proxy
protocol: TCP
- containerPort: 4191
name: linkerd-admin
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 4191
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 2102
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
- image: nginx:latest
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- args:
- --incoming-proxy-port
- "4143"
- --outgoing-proxy-port
- "4140"
- --proxy-uid
- "2102"
- --inbound-ports-to-ignore
- 4190,4191,4567,4568
- --outbound-ports-to-ignore
- 4567,4568
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imagePullPolicy: IfNotPresent
name: linkerd-init
resources:
limits:
cpu: 100m
memory: 20Mi
requests:
cpu: 100m
memory: 20Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /run
name: linkerd-proxy-init-xtables-lock
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9zpnn
readOnly: true
nodeName: k8s-node1
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-9zpnn
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- emptyDir:
name: linkerd-proxy-init-xtables-lock
- emptyDir:
medium: Memory
name: linkerd-identity-end-entity
- name: linkerd-identity-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: identity.l5d.io
expirationSeconds: 86400
path: linkerd-identity-token
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:19Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-05T19:44:15Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://62028867c48aaa726df48249a27c52cd8820cd33e8e5695ad0d322540924754e
image: cr.l5d.io/linkerd/proxy:stable-2.12.2
imageID: cr.l5d.io/linkerd/proxy@sha256:787db5055b2a46a3c4318ef3b632461261f81254c8e47bf4b9b8dab2c42575e4
lastState:
name: linkerd-proxy
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:16Z"
- containerID: containerd://8f8ce663c19360a7b6868ace68a4a5119f0b18cd57ffebcc2d19331274038381
image: docker.io/library/nginx:latest
imageID: docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
lastState:
name: nginx
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-11-05T19:44:19Z"
hostIP: 192.168.2.216
initContainerStatuses:
- containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
image: cr.l5d.io/linkerd/proxy-init:v2.0.0
imageID: cr.l5d.io/linkerd/proxy-init@sha256:7d5e66b9e176b1ebbdd7f40b6385d1885e82c80a06f4c6af868247bb1dffe262
lastState:
name: linkerd-init
ready: true
restartCount: 0
state:
terminated:
containerID: containerd://c0417ea9c8418ab296bf86077e81c5d8be06fe9b87390c138d1c5d7b73cc577c
exitCode: 0
finishedAt: "2022-11-05T19:44:16Z"
reason: Completed
startedAt: "2022-11-05T19:44:15Z"
phase: Running
podIP: 172.16.36.90
podIPs:
- ip: 172.16.36.90
qosClass: Burstable
startTime: "2022-11-05T19:44:15Z"
At this point, the necessary components are set up for you to explore Linkerd further. You can also try out the jaeger and multicluster extensions, installed in a similar way to the viz extension, and explore their capabilities.
Inject Linkerd data plane automatically
In this approach, we shall see how to instruct Kubernetes to automatically inject the Linkerd data plane into workloads at deployment time. We can achieve this by adding the linkerd.io/inject annotation to the deployment descriptor, which causes the proxy injector admission webhook to execute and inject the Linkerd data plane components automatically at the time of deployment.
$ cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
This annotation can also be specified at the namespace level to affect all the workloads within the namespace. Note that any resources created before the annotation was added to the namespace will require a rollout restart to trigger the injection of the Linkerd components.
Uninstalling Linkerd
Now that we have walked through the installation and setup process of Linkerd, let's also cover how to remove it from the infrastructure and go back to the state prior to its installation. The first step would be to remove extensions, such as viz.
$ linkerd viz uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-admin" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-metrics-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-admin" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-api" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-web-check" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-tap-injector" deleted
role.rbac.authorization.k8s.io "web" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-linkerd-viz-tap-auth-reader" deleted
rolebinding.rbac.authorization.k8s.io "web" deleted
apiservice.apiregistration.k8s.io "v1alpha1.tap.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-tap-injector-webhook-config" deleted
namespace "linkerd-viz" deleted
authorizationpolicy.policy.linkerd.io "admin" deleted
authorizationpolicy.policy.linkerd.io "metrics-api" deleted
authorizationpolicy.policy.linkerd.io "proxy-admin" deleted
authorizationpolicy.policy.linkerd.io "tap" deleted
authorizationpolicy.policy.linkerd.io "tap-injector" deleted
server.policy.linkerd.io "admin" deleted
server.policy.linkerd.io "metrics-api" deleted
server.policy.linkerd.io "proxy-admin" deleted
server.policy.linkerd.io "tap-api" deleted
server.policy.linkerd.io "tap-injector-webhook" deleted
In order to uninstall the control plane, you would first need to uninject the Linkerd data plane components from any existing running pods:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 10m
$ kubectl get deployment nginx-deployment -o yaml | linkerd uninject - | kubectl apply -f -
deployment "nginx-deployment" uninjected
deployment.apps/nginx-deployment configured
Now you can delete the control plane.
$ linkerd uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-destination-policy" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-destination" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-identity" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-linkerd-proxy-injector" deleted
role.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
rolebinding.rbac.authorization.k8s.io "linkerd-heartbeat" deleted
customresourcedefinition.apiextensions.k8s.io "authorizationpolicies.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "httproutes.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "meshtlsauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "networkauthentications.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serverauthorizations.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "servers.policy.linkerd.io" deleted
customresourcedefinition.apiextensions.k8s.io "serviceprofiles.linkerd.io" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-proxy-injector-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-policy-validator-webhook-config" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "linkerd-sp-validator-webhook-config" deleted
namespace "linkerd" deleted
At this point we're back to the original state:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-cd55c47f5-99xf2 1/1 Running 0 82s
default nginx-deployment-cd55c47f5-tt58t 1/1 Running 0 86s
kube-system calico-kube-controllers-59697b644f-7fsln 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-6ptsh 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-7x5j8 1/1 Running 2 (3h39m ago) 7d1h
kube-system calico-node-qlnf6 1/1 Running 2 (3h39m ago) 7d1h
kube-system coredns-565d847f94-79jlw 1/1 Running 2 (3h39m ago) 7d2h
kube-system coredns-565d847f94-fqwn4 1/1 Running 2 (3h39m ago) 7d2h
kube-system etcd-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-apiserver-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-controller-manager-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-4n9b7 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-k4rzv 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-proxy-lz2dd 1/1 Running 2 (3h39m ago) 7d2h
kube-system kube-scheduler-k8s-master 1/1 Running 2 (3h39m ago) 7d2h
I hope you find this useful to get you started on your journey with Linkerd. Head on over to the docs for more information, guides and best practices.

8 January 2022

Jonathan Dowland: 2021 in Fiction

Cover for *This is How You Lose the Time War*
Cover for *Robot*
Cover for *The Glass Hotel*
Following on from last year's round-up of my reading, here's a look at the fiction I enjoyed in 2021. I managed to read 42 books in 2021, up from 31 last year. That's partly to do with buying an ereader: 33/36% of my reading (by pages/by books) was ebooks. I think this demonstrates that ebooks have mostly complemented paper books for me, rather than replacing them. My book of the year (although it was published in 2019) was This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone: a short epistolary love story between warring time travellers and quite unlike anything else I've read for a long time. Other notables were The Glass Hotel by Emily St John Mandel and Robot by Adam Wiśniewski-Snerg. The biggest disappointment for me was The Ministry for the Future by Kim Stanley Robinson (KSR), which I haven't even finished. I love KSR's writing: I've written about him many times on this blog, at least in 2002, 2006 and 2009, and I think I've read every other novel he's published and most of his short stories. But this one was too much of something for me. He's described this novel as the end-point of a particular journey and approach to writing he's taken, which I felt relieved to learn; assuming he writes any more novels (and I really hope that he does), they will likely be in a different "mode". My "new author discovery" for 2021 was Chris Beckett: I tore through Two Tribes and America City before promptly buying all his other work. He fits roughly into the same bracket as Adam Roberts and Christopher Priest, two of my other favourite authors. 5 of the books I read (12%) were from my "backlog" of already-purchased physical books. I'd like to try and reduce my backlog further, so I hope to push this figure up next year. I made a small effort to read more diverse authors this year. 24% of the books I read (by book count and page count) were by women. 15% by page count were (loosely) BAME (19% by book count). Again, I'd like to increase these numbers modestly in 2022. Unlike 2020, I didn't complete any short story collections in 2021! This is partly because there was only one issue of Interzone published in all of 2021, a double-issue which I haven't yet finished. This is probably a sad data point in terms of Interzone's continued existence, but it's not dead yet.

29 May 2021

Shirish Agarwal: Planes, Pandemic and Medical Devices I

The Great Electric Airplane Race
It took me quite some time to write this, as I have been depressed about things. Then, a few days back, I saw Nova's The Great Electric Airplane Race. It was fabulous, and a pleasure to see and know that there are more than 200-odd startups in the race to make an electric airplane that works and has FAA certification. I was disappointed, though, that there was no coverage of any university projects. From what little I know, almost all the advanced materials the U.S. has made were first researched mostly in universities, and only when close to fruition were they either spun off as a startup or given to some commercial organization/venture to make them scalable and profitable. If they had covered this, I am sure more people could be convinced to join the sciences and engineering in college. I actually do want to come to this as part of both general medicine and vaccine development in the U.S., but that will come later. The idea that industry works alone should be discouraged, but that perhaps may require another article to articulate why I believe so.

Medical Device Ventilators in India Before the pandemic, probably most didn t know what a ventilator is and was, at least I didn t, although I probably used it during my somewhat brief hospital stay a couple of years ago. It entered into the Indian twitter lexicon more so in the second wave as the number of people who got infected became more and more and the ventilators which were serving them became less and less just due to sheer mismatch of numbers and requirements. Rich countries donated/gifted ventilators to India on which GOI put GST of 28%. Apparently, they are a luxury item, just like my hearing aid. Last week Delhi High Court passed a judgement that imposition of GST should not be on a gift like ventilators or oxygenators. The order can be found here. Even without reading the judgement the shout from the right was judicial activism while after reading it is a good judgement which touches on several points. The first, in itself, stating the dichotomy that if a commercial organization wanted to import a ventilator or an oxygenator the IGST payable is nil while for an individual it is 12%. The State (here State refers to State Government in this case Gujarat Govt.) did reduce the IGST for state from 12% to NIL IGST for federal states but that to till only 30.06.2021. No relief to individuals on that account. The Court also made use of Mr. Arvind Datar, as Amicus Curiae or friend of court. The petitioner, an 85-year-old gentleman who has put it up has put broad assertions under Article 21 (right to live) and the court in its wisdom also added Article 14 which enshrines equality of everyone before law. The Amicus Curiae, as his duty, guided the court into how the IGST law works and shared a brief history of the law and the changes happening before and after it. During his submissions, he also shared the Mega Exemption Notification no. 50/2017 under which several items are there which are exempted from putting IGST. The Amicus Curiae did note that such exemptions were also there before Mega Exemption Notification had come into play. However, DGFT (Directorate General of Foreign Trade) on 30-04-2021 issued notification No. 4/2015-2020 through which oxygenators had been exempted from Custom Duty/BCD (Basic Customs Duty. In another notification on no. 30/2021 dated 01.05.2021 it reduced IGST from 28% to 12% for personal use. If however the oxygenator was procured by a canalizing agency (bodies such as State Trading Corporation of India (STC) or/and Metals and Minerals Corporation (MMTC) and such are defined as canalising agents) then it will be fully exempted from paying any sort of IGST, albeit subject to certain conditions. What the conditions are were not shared in the open court. The Amicus Curiae further observed that it is contrary to practice where both BCD and IGST has been exempted for canalising agents and others, some IGST has to be paid for personal use. To share within the narrow boundaries of the topic, he shared entry no. 607A of General Exemption no.190 where duty and IGST in case of life-saving drugs are zero provided the life-saving drugs imported have been provided by zero cost from an overseas supplier for personal use. He further shared that the oxygen generator would fall in the same entry of 607A as it fulfills all the criteria as shared for life-saving medicines and devices. He also used the help of Drugs and Cosmetics Act 1940 which provides such a relief. 
The Amicus Curiae further noted that GOI amended its foreign trade policy (2015-2020) via notification no.4/2015-2020, dated 30.04.2021, issued by DGFT where Rakhi and life-saving drugs for personal use has been exempted from BCD till 30-07-2021. No reason not to give the same exemption to oxygenators which fulfill the same thing. The Amicus Curiae, further observes that there are exceptional circumstances provisions as adverted to in sub-section (2) of Section 25 of the Customs Act, whereby Covid-19 which is known and labelled as a pandemic where the distinctions between the two classes of individuals or agencies do not make any sense. While he did make the observation that exemption from duty is not a right, in the light of the pandemic and Article 14, it does not make sense to have distinctions between the two classes of importers. He further shared from Circular no. 9/2014-Customs, dated 19.08.2014 by CBEC (Central Board of Excise and Customs) which gave broad exemptions under Section 25 (2) of the same act in respect of goods and services imported for safety and rehabilitation of people suffering and effected by natural disasters and epidemics. He further submits that the impugned notification is irrational as there is no intelligible differentia rule applied or observed in classifying the import of oxygen concentrators into two categories. One, by the State and its agencies; and the other, by an individual for personal use by way of gift. So there was an absence of adequate determining principle . To bolster his argument, he shared the judgements of

a) Union of India vs. N.S. Rathnam & Sons, (2015) 10 SCC 681 (N.S. Ratnams and Sons Case) b) Shayara Bano vs. Union of India, (2017) 9 SCC 1 (Shayara Bano Case) The Amicus Curiae also rightly observed that the right to life also encompasses within it, the right to health. You cannot have one without the other and within that is the right to have affordable treatment. He further stated that the state does not only have a duty but a positive obligation is cast upon it to ensure that the citizen s health is secured. He again cited Navtej Singh Johars vs Union of India (Navtej Singh Johar Case) in defence of right to life. Mr. Datar also shared that unlike in normal circumstances, it is and should be enough to show distinct and noticeable burdensomeness which is directly attributable to the impugned/questionable tax. The gentleman cited Indian Express Newspapers (Bombay) Private Limited vs. Union of India, (1985) 1 SCC 641 (Indian Express case) 1985 which shared both about Article 19 (1) (a) and Article 21. Bloggers note At this juncture, I should point out which I am sharing the judgement and I would be sharing only the Amicus Curiae POV and then the judge s final observations. While I was reading it, I was stuck by the fact that the Amicus Curiae had cited 4 cases till now, 3 of them are pretty well known both in the legal fraternity and even among public at large. Another 3 which have been shared below which are also of great significance. Hence, felt the need to share the whole judgement. The Amicus Curiae further observed that this tax would have to be disproportionately will have to be paid by the old and the infirm, and they might find it difficult to pay the amounts needed to pay the customs duty/IGST as well as find the agent to pay in this pandemic. Blogger Note The situation with the elderly is something like this. Now there are a few things to note, only Central Govt. employees and pensioners get pensions which has been freezed since last year. The rest of the elderly population does not. The rate of interest has fallen to record lows from 5-6% in savings interest rate to 2% and on Fixed Deposits at 4.9% while the nominal inflation rate has up by 6% while CPI and real inflation rates are and would be much more. And this is when there is absolutely no demand in the economy. To add to all this, RBI shared a couple of months ago that fraud of 5 trillion rupees has been committed between 2015 and 2019 in banks. And this is different from the number of record NPA s that have been both in Public and Private Sector banks. To get out of this, the banks have squeezed their customers and are squeezing as well as asking GOI for bailouts. How much GOI is responsible for the frauds as well as NPA s would probably require its own space. And even now, RBI and banks have made heavy provisions as lockdowns are still a facet and are supposed to remain a facet till the end of the year or even next year (all depending upon when we get the vaccine). The Amicus Curiae further argued that the ventilators which are available locally are of bad quality. The result of all this has resulted in a huge amount of unsurmountable pressure on hospitals which they are unable to overcome. Therefore, the levy of IGST on oxygenators has direct impact on health of the citizen. So the examination of the law should not be by what intention it was but how it is affecting citizen rights now. For this he shared R.C.Cooper vs Union of India (another famous case R.C. 
Cooper vs. Union of India, especially paragraph 49, and Federation of Hotel & Restaurant Association of India vs. Union of India, (1989), at paragraph 46 (Federation of Hotel Case). Mr. Datar further shared the Supreme Court order dated 18.12.2020, passed in Suo Moto Writ Petition (Civil) No. 7/2020, to buttress the plea that the right to health includes the right to affordable treatment.
Blogger's Note For those who don't know, Suo Moto is when the Court, whether the Supreme Court or a High Court, takes up a matter on its own for the public good. It could be anything: law and order, banking, finance, public health, etc. This was the norm before 2014; the excesses of the executive were curtailed by both the higher and the lower judiciary. That is and was the reason the judiciary is and was known as the third pillar of Indian democracy. A good characterization of Suo Moto can be found here.
Before ending his submission, the learned Amicus Curiae also shared Jeeja Ghosh vs. Union of India, (2016) (Jeeja Ghosh Case, an outstanding case as it deals with people with disabilities and their rights, and the observations made by the Division Bench of Hon'ble Mr. Justice A. K. Sikri and Hon'ble Mr. Justice R. K. Agrawal). After the Amicus Curiae completed his submissions, it was the turn of Mr. Sudhir Nandrajog, who adopted the arguments and submissions made by the Amicus Curiae. He reiterated the facts of the case and how the impugned notification was violative of both Articles 14 and 21 of the Indian Constitution.
Blogger's Note The High Court's judgement, which records all the above arguments by the Amicus Curiae and the petitioner's lawyer, also shared the State's view. It is only on page 24 that the Delhi High Court starts to share its own observations on the arguments of both sides.
Judgement continued The first observation that the Court makes is that while the petitioner demonstrated that the impugned tax imposition would have a distinct and noticeable burdensomeness, the State did not state or show in any way how much of a loss it would incur if such a tax were let go, or how much additional work would have to be done in order to collect this specific tax. It didn't need to be down to the wire or mathematically precise, but the State did not care to show even theoretically how many people would be affected. The counter-affidavit by the State is silent on the whole issue. The Court also contended that the State failed to prove how collecting IGST from the concerned individuals would help in fighting coronavirus in any substantial manner for the public at large. The High Court shared observations from the Navtej Singh Johar case, where it is observed that the State has both negative and positive obligations to ensure that its citizens are able to enjoy the right to health. The High Court further made the point that no respectable person likes to be turned into a charity case. If the State contends that those who obey the law should pay taxes, then it is also obligatory on the State's part to lessen exactions such as taxes, at the very least in times of war, famine, floods, epidemics and pandemics. Such an approach would allow a person to live a life of dignity, which is part of Article 21 of the Constitution.
Another point that was made by the State, that only the GST Council is able to make any changes as regards exemptions rather than the State, was found to be false, as the State had made some exemptions without going to the GST Council, using its own powers under Section 25 of the Customs Act. The Court also points out that it does set a discriminatory pattern when somebody like the petitioner has to pay the tax for personal use while those who are importing for commercial use do not have to pay it. The Court agreed with the view of the Amicus Curiae, Mr. Datar, that oxygen concentrators should be taxed at a NIL rate of IGST, as they belong with life-saving drugs, and an oxygen concentrator fits the bill as medical equipment since it is used in the treatment, mitigation and prevention of the spread of Coronavirus. Mr. Datar also showed that the oxygen concentrator is placed at the same level as other life-saving drugs. The Court felt further emboldened by the observations of the Supreme Court in State of Andhra Pradesh vs. Linde India Limited, 2020 (State of Andhra Pradesh vs. Linde Ltd.). The Court further referred to many subsequent notifications from the State and various press releases by the State itself which make the Court's point that oxygen concentrators are indeed drugs as defined in the court case above. The State should have included it as part of notification 190; this would preserve the start date of the notification from 03.05.2021, and the State would not have to issue a new notification. The Court further postulated that any persons similar to the petitioner could avail of the same, if they furnish a letter of undertaking to an officer designated by the State that the medical equipment would not be put to commercial use. Until the State does that, in the interim the importer could give the same undertaking to the Joint Secretary, Customs, or their nominee could hand it over to a customs officer. The Court also shared that it does not disagree with the State's arguments, but the challenges which have arisen are in a unique time period and circumstances, so it is basing its judgement on how the situation is. The Court also mentioned an order of the Supreme Court, Diary No. 10669/2020, passed on 20.03.2020, where the SC has taken pains to understand the issues faced by citizens. The Court also mentioned the Small Scale Industrial Manufacturers Association Case (both of these cases I don't know). So in conclusion, the Court holds the imposition of IGST on oxygen concentrators which are imported by individuals as gifts from their relatives to be unconstitutional. It also held that any taxes collected by GOI in the above scenario have to be returned; the relief to the State is that it will not have to pay interest on the same. To check misuse, the petitioner or people in similar circumstances would have to give a letter of undertaking to an officer designated by the State, within 7 days of the State notifying the patient or anybody authorized by him/her to act on their behalf, to share the letter of undertaking with the State. And till the State provides such an officer, the interim arrangement above will continue. Hence, both the writ petition and the pending application are disposed of. The Registry is directed to release any money deposited by the petitioner along with any interest accrued on it (if any). At the end they record their appreciation of Mr. Arvind Datar, Mr. Zoheb Hossain, Mr. Sudhir Nandrajog as well as Mr. Siddharth Bambha; it is only due to their assistance that the Court could reach the conclusion it did.
For Delhi High Court RAJIV SHAKDHER, J. TALWANT SINGH, J. May 21, 2021
Blogger's Observations Now, after the verdict, GOI has a few choices: either accept the verdict or appeal to the SC. A third choice is to set up a committee and come to the same conclusions via the committee; GOI has done something similar in the past. If that happens and the same conclusions are reached as before, then the aggrieved may have no choice but to approach the highest court of law. And this will put the aggrieved in a much more vulnerable place than before, as SC court fees, lawyer fees etc. are quite high compared to the High Courts. So there is a possibility that the petitioner may not even approach the SC unless and until some non-profit (NGO) decides to fight it and take it up as a common cause or something similar. There is another judgement that I will share, probably tomorrow. Thankfully, that one is pretty short compared to this one, so it should be far easier to read. FWIW, I did learn about the whole freenode saga and the many channels that have shifted from freenode to libera. I will share my own experience of the same, but that will probably take a day or two.
Zeeshan of IYC (India Youth Congress), along with Salman Khan's non-profit Being Human, getting oxygen concentrators
The above picture is of Zeeshan. A whole team of Indian Youth Congress workers (the main opposition party to the ruling party) has been doing a lot of relief work. They have been buying oxygen concentrators from abroad with the help of the Being Human Foundation, started by Salman Khan, an actor who works in A-grade Bollywood movies.

18 February 2021

Jonathan McDowell: Hacking and Bricking the EE Osprey 2 Mini

I've mentioned in the past my twisted EE network setup from when I moved in to my current house. The 4GEE WiFi Mini (also known as the EE Osprey 2 Mini or the EE40VB, and actually a rebadged Alcatel Y853VB) has been sitting unused since then, so I figured I'd see about trying to get a shell on it.
TL;DR: Of course it's running Linux, and there's a couple of test points internally which bring out the serial console, but after finding those and logging in I discovered it's running ADB on port 5555, quite happily available without authentication both via wifi and the USB port. So if you have physical or local network access, instant root shell. Well done, folks. And then I bricked it before I could do anything more interesting.
There's a lack of information about this device out there - most of the links I can find are around removing the SIM lock - so I thought I'd document the pieces I found just in case anyone else is trying to figure it out. It's based around a Qualcomm MDM9607 SoC, paired with 64M RAM and 256M NAND flash. Wifi is via an RTL8192ES. Kernel is 3.18.20. Busybox is v1.23.1. It's running dnsmasq but I didn't grab the version. Of course there's no source or offer of source provided.
Taking it apart is fairly easy. There's a single screw to remove, just beside the SIM slot. The coloured rim can then be carefully pried away from the back, revealing the battery. There are then 4 screws in the corners which need to be removed in order to lift out the actual PCB and gain access to the serial console test points.
EE40VB PCB serial console test points
My mistake was going poking around trying to figure out where the updates are downloaded from - I know I'm running a slightly older release than what's current, and the device can do an automatic download + update. Top tip: don't run Jrdrecovery. It'll error out on the missing /cache/update.zip and wipe the main partition anyway. That'll leave you in a boot loop where the device boots the recovery partition, which tries to install /cache/update.zip, which of course still doesn't exist.
So. Where next? First, I need to get the device into a state where I can actually do something other than watch it boot into recovery, fail to flash and reboot. Best guess at present is to try and get it to enter the Qualcomm EDL (Emergency Download) mode. That might be possible with a custom USB cable that grounds D+ on boot. Alternatively I need to probe some of the other test points on the PCB and see if grounding any of those helps enter EDL mode. I then need a suitable firehose OEM-signed programmer image. And then I need to actually get hold of a proper EE40VB firmware image, either via one of the OTA update files or possibly via an Alcatel ADSU image (though no idea how to get hold of one, other than by posting to a random GSM device forum and hoping for the kindness of strangers). More updates if/when I make progress
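For what it's worth, grabbing that root shell needs nothing more than stock adb; a minimal sketch, assuming the device hands out something like 192.168.1.1 on its wifi side (a placeholder - substitute whatever address yours actually uses):
$ nmap -p 5555 192.168.1.1        # 192.168.1.1 is a placeholder address; optional check that ADB really is exposed
$ adb connect 192.168.1.1:5555    # attach over TCP - no authentication required
$ adb shell                       # and there's your root shell
Over the USB port it may either show up to adb devices directly or want the same connect step against the USB network address, depending on how the gadget is set up. And if the EDL route pans out, the device should enumerate as the usual Qualcomm HS-USB QDLoader (USB ID 05c6:9008), which lsusb will show - a quick way to confirm whether grounding a test point actually worked.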
Qualcomm bootloader log
Format: Log Type - Time(microsec) - Message - Optional Info
Log Type: B - Since Boot(Power On Reset),  D - Delta,  S - Statistic
S - QC_IMAGE_VERSION_STRING=BOOT.BF.3.1.2-00053
S - IMAGE_VARIANT_STRING=LAATANAZA
S - OEM_IMAGE_VERSION_STRING=linux3
S - Boot Config, 0x000002e1
B -    105194 - SBL1, Start
D -     61885 - QSEE Image Loaded, Delta - (451964 Bytes)
D -     30286 - RPM Image Loaded, Delta - (151152 Bytes)
B -    459330 - Roger:boot_jrd_oem_main
B -    461526 - Welcome to key_check_poweron!!!
B -    466436 - REG0x00, rc=47
B -    469120 - REG0x01, rc=1f
B -    472018 - REG0x02, rc=1c
B -    474885 - REG0x03, rc=47
B -    477782 - REG0x04, rc=b2
B -    480558 - REG0x05, rc=
B -    483272 - REG0x06, rc=9e
B -    486139 - REG0x07, rc=
B -    488854 - REG0x08, rc=a4
B -    491721 - REG0x09, rc=80
B -    494130 - bq24295_probe: vflt/vsys/vprechg=0mV/0mV/0mV, tprechg/tfastchg=0Min/0Min, [0C, 0C]
B -    511546 - come to calculate vol and temperature!!
B -    511637 - ##############battery_core_convert_vntc: NTC_voltage=1785690
B -    517280 - battery_core_convert_vntc: <-44C, 1785690uV>, present=0
B -    529358 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    534360 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    539636 - bq24295_enable_charge: setting=0, chg_enable=-1, otg_enable=0
B -    546072 - bq24295_enable_charging: enable_charging=0
B -    552172 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    561566 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    567056 - bq24295_enable_charge: setting=0, chg_enable=0, otg_enable=0
B -    579286 - come to calculate vol and temperature!!
B -    579378 - ##############battery_core_convert_vntc: NTC_voltage=1785777
B -    585539 - battery_core_convert_vntc: <-44C, 1785777uV>, present=0
B -    597617 - charge_main: battery is plugout!!
B -    597678 - Welcome to pca955x_probe!!!
B -    601063 - pca955x_probe: PCA955X probed successfully!
D -     27511 - APPSBL Image Loaded, Delta - (179348 Bytes)
B -    633271 - QSEE Execution, Start
D -       213 - QSEE Execution, Delta
B -    638944 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting JRD RECOVERY BOOT
B -    650107 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting  RECOVERY BOOT
B -    653218 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>read_buf[0] == 0
B -    659044 - SBL1, End
D -    556137 - SBL1, Delta
S - Throughput, 2000 KB/s  (782884 Bytes,  278155 us)
S - DDR Frequency, 240 MHz
littlekernel aboot log
Android Bootloader - UART_DM Initialized!!!
[0] welcome to lk
[0] SCM call: 0x2000601 failed with :fffffffc
[0] Failed to initialize SCM
[10] platform_init()
[10] target_init()
[10] smem ptable found: ver: 4 len: 17
[10] ERROR: No devinfo partition found
[10] Neither 'config' nor 'frp' partition found
[30] voltage of NTC  is 1789872!
[30] voltage of BAT  is 3179553!
[30] usb present is 1!
[30] Loading (boot) image (4171776): start
[530] Loading (boot) image (4171776): done
[540] DTB Total entry: 25, DTB version: 3
[540] Using DTB entry 0x00000129/00010000/0x00000008/0 for device 0x00000129/00010000/0x00010008/0
[560] JRD_CHG_OFF_FEATURE!
[560] come to jrd_target_pause_for_battery_charge!
[570] power_on_status.hard_reset = 0x0
[570] power_on_status.smpl = 0x0
[570] power_on_status.rtc = 0x0
[580] power_on_status.dc_chg = 0x0
[580] power_on_status.usb_chg = 0x0
[580] power_on_status.pon1 = 0x1
[590] power_on_status.cblpwr = 0x0
[590] power_on_status.kpdpwr = 0x0
[590] power_on_status.bugflag = 0x0
[590] cmdline: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=b
[620] Updating device tree: start
[720] Updating device tree: done
[720] booting linux @ 0x80008000, ramdisk @ 0x80008000 (0), tags/device tree @ 0x81e00000
Linux kernel console boot log
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.18.20 (linux3@linux3) (gcc version 4.9.2 (GCC) ) #1 PREEMPT Thu Aug 10 11:57:07 CST 2017
[    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c53c7d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] Machine model: Qualcomm Technologies, Inc. MDM 9607 MTP
[    0.000000] Early serial console at I/O port 0x0 (options '')
[    0.000000] bootconsole [uart0] enabled
[    0.000000] Reserved memory: reserved region for node 'modem_adsp_region@0': base 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: reserved region for node 'external_image_region@0': base 0x87c00000, size 4 MiB
[    0.000000] Removed memory: created DMA memory pool at 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: initialized node modem_adsp_region@0, compatible id removed-dma-pool
[    0.000000] Removed memory: created DMA memory pool at 0x87c00000, size 4 MiB
[    0.000000] Reserved memory: initialized node external_image_region@0, compatible id removed-dma-pool
[    0.000000] cma: Reserved 4 MiB at 0x87800000
[    0.000000] Memory policy: Data cache writeback
[    0.000000] CPU: All CPU(s) started in SVC mode.
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 17152
[    0.000000] Kernel command line: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=bulk_read root=ubi0:rootfs ubi.mtd=16
[    0.000000] PID hash table entries: 512 (order: -1, 2048 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
[    0.000000] Memory: 54792K/69632K available (5830K kernel code, 399K rwdata, 2228K rodata, 276K init, 830K bss, 14840K reserved)
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
[    0.000000]     vmalloc : 0xc8800000 - 0xff000000   ( 872 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xc8000000   ( 128 MB)
[    0.000000]     modules : 0xbf000000 - 0xc0000000   (  16 MB)
[    0.000000]       .text : 0xc0008000 - 0xc07e6c38   (8060 kB)
[    0.000000]       .init : 0xc07e7000 - 0xc082c000   ( 276 kB)
[    0.000000]       .data : 0xc082c000 - 0xc088fdc0   ( 400 kB)
[    0.000000]        .bss : 0xc088fe84 - 0xc095f798   ( 831 kB)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] Preemptible hierarchical RCU implementation.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] mpm_init_irq_domain(): Cannot find irq controller for qcom,gpio-parent
[    0.000000] MPM 1 irq mapping errored -517
[    0.000000] Architected mmio timer(s) running at 19.20MHz (virt).
[    0.000011] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 3579139424256ns
[    0.007975] Switching to timer-based delay loop, resolution 52ns
[    0.013969] Switched to clocksource arch_mem_counter
[    0.019687] Console: colour dummy device 80x30
[    0.023344] Calibrating delay loop (skipped), value calculated using timer frequency.. 38.40 BogoMIPS (lpj=192000)
[    0.033666] pid_max: default: 32768 minimum: 301
[    0.038411] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.044902] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.052445] CPU: Testing write buffer coherency: ok
[    0.057057] Setting up static identity map for 0x8058aac8 - 0x8058ab20
[    0.064242]
[    0.064242] **********************************************************
[    0.071251] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.077817] **                                                      **
[    0.084302] ** trace_printk() being used. Allocating extra memory.  **
[    0.090781] **                                                      **
[    0.097320] ** This means that this is a DEBUG kernel and it is     **
[    0.103802] ** unsafe for produciton use.                           **
[    0.110339] **                                                      **
[    0.116850] ** If you see this message and you are not debugging    **
[    0.123333] ** the kernel, report this immediately to your vendor!  **
[    0.129870] **                                                      **
[    0.136380] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.142865] **********************************************************
[    0.150225] MSM Memory Dump base table set up
[    0.153739] MSM Memory Dump apps data table set up
[    0.168125] VFP support v0.3: implementor 41 architecture 2 part 30 variant 7 rev 5
[    0.176332] pinctrl core: initialized pinctrl subsystem
[    0.180930] regulator-dummy: no parameters
[    0.215338] NET: Registered protocol family 16
[    0.220475] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.284034] cpuidle: using governor ladder
[    0.314026] cpuidle: using governor menu
[    0.344024] cpuidle: using governor qcom
[    0.355452] msm_watchdog b017000.qcom,wdt: wdog absent resource not present
[    0.361656] msm_watchdog b017000.qcom,wdt: MSM Watchdog Initialized
[    0.371373] irq: no irq domain found for /soc/pinctrl@1000000 !
[    0.381268] spmi_pmic_arb 200f000.qcom,spmi: PMIC Arb Version-2 0x20010000
[    0.389733] platform 4080000.qcom,mss: assigned reserved memory node modem_adsp_region@0
[    0.397409] mem_acc_corner: 0 <--> 0 mV
[    0.401937] hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.
[    0.408966] hw-breakpoint: maximum watchpoint size is 8 bytes.
[    0.416287] __of_mpm_init(): MPM driver mapping exists
[    0.420940] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.427235] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.432977] smd_open() before smd_init()
[    0.437544] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.445730] smd_channel_probe_now: allocation table not initialized
[    0.453100] mdm9607_s1: 1050 <--> 1350 mV at 1225 mV normal idle
[    0.458566] spm_regulator_probe: name=mdm9607_s1, range=LV, voltage=1225000 uV, mode=AUTO, step rate=4800 uV/us
[    0.468817] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.475353] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.481345] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.490124] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.497342] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.504979] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.513125] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.518335] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.524478] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.531111] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.536788] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.542886] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.549618] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.555202] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.561374] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.570613] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.583049] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.596926] msm_thermal:msm_thermal_dev_probe Failed reading node=/soc/qcom,msm-thermal, key=qcom,online-hotplug-core. err:-517
[    0.609370] sps:sps is ready.
[    0.613137] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.619020] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.625773] mdm9607_s2: 750 <--> 1275 mV at 750 mV normal idle
[    0.631584] mdm9607_s3_level: 0 <--> 0 mV at 0 mV normal idle
[    0.637085] mdm9607_s3_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.643092] mdm9607_s3_floor_level: 0 <--> 0 mV at 0 mV normal idle
[    0.649512] mdm9607_s3_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.655750] mdm9607_s4: 1800 <--> 1950 mV at 1800 mV normal idle
[    0.661791] mdm9607_l1: 1250 mV normal idle
[    0.666090] mdm9607_l2: 1800 mV normal idle
[    0.670276] mdm9607_l3: 1800 mV normal idle
[    0.674541] mdm9607_l4: 3075 mV normal idle
[    0.678743] mdm9607_l5: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.684904] mdm9607_l6: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.690892] mdm9607_l7: 1700 <--> 1900 mV at 1700 mV normal idle
[    0.697036] mdm9607_l8: 1800 mV normal idle
[    0.701238] mdm9607_l9: 1200 <--> 1250 mV at 1200 mV normal idle
[    0.707367] mdm9607_l10: 1050 mV normal idle
[    0.711662] mdm9607_l11: 1800 mV normal idle
[    0.716089] mdm9607_l12_level: 0 <--> 0 mV at 0 mV normal idle
[    0.721717] mdm9607_l12_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.727946] mdm9607_l12_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.734099] mdm9607_l12_floor_lebel: 0 <--> 0 mV at 0 mV normal idle
[    0.740706] mdm9607_l13: 1800 <--> 2850 mV at 2850 mV normal idle
[    0.746883] mdm9607_l14: 2650 <--> 3000 mV at 2650 mV normal idle
[    0.752515] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.759036] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.765807] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.771809] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.780586] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.787808] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.795443] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.803094] cpr_init_cpr_parameters: apc_corner: up threshold = 2, down threshold = 3
[    0.810752] cpr_init_cpr_parameters: apc_corner: CPR is enabled by default.
[    0.817687] cpr_init_cpr_efuse: apc_corner: [row:65] = 0x15000277277383
[    0.824272] cpr_init_cpr_efuse: apc_corner: CPR disable fuse = 0
[    0.830225] cpr_init_cpr_efuse: apc_corner: Corner[1]: ro_sel = 0, target quot = 631
[    0.837976] cpr_init_cpr_efuse: apc_corner: Corner[2]: ro_sel = 0, target quot = 631
[    0.845703] cpr_init_cpr_efuse: apc_corner: Corner[3]: ro_sel = 0, target quot = 899
[    0.853592] cpr_config: apc_corner: Timer count: 0x17700 (for 5000 us)
[    0.860426] apc_corner: 0 <--> 0 mV
[    0.864044] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.869261] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.875492] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.882225] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.887775] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.893941] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.900719] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.906256] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.912430] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.919472] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.932372] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.946361] msm_thermal:get_kernel_cluster_info CPU0 topology not initialized.
[    0.953824] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.960300] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.968533] msm_thermal:vdd_restriction_reg_init Defer vdd rstr freq init.
[    0.975846] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.982219] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.991378] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.997544] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    1.013642] qcom,gcc-mdm9607 1800000.qcom,gcc: Registered GCC clocks
[    1.019451] clock-a7 b010008.qcom,clock-a7: Speed bin: 4 PVS Version: 0
[    1.025693] a7ssmux: set OPP pair(400000000 Hz: 1 uV) on cpu0
[    1.031314] a7ssmux: set OPP pair(1305600000 Hz: 7 uV) on cpu0
[    1.038805] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    1.043587] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.052935] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.062006] irq: no irq domain found for /soc/wcd9xxx-irq !
[    1.069884] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    1.074814] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.083716] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.093850] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    1.098889] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.107779] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.167871] KPI: Bootloader start count = 24097
[    1.171364] KPI: Bootloader end count = 48481
[    1.175855] KPI: Bootloader display count = 3884474147
[    1.180825] KPI: Bootloader load kernel count = 16420
[    1.185905] KPI: Kernel MPM timestamp = 105728
[    1.190286] KPI: Kernel MPM Clock frequency = 32768
[    1.195209] socinfo_print: v0.10, id=297, ver=1.0, raw_id=72, raw_ver=0, hw_plat=8, hw_plat_ver=65536
[    1.195209]  accessory_chip=0, hw_plat_subtype=0, pmic_model=65539, pmic_die_revision=131074 foundry_id=0 serial_number=2120983948
[    1.216731] sdcard_ext_vreg: no parameters
[    1.220555] rome_vreg: no parameters
[    1.224133] emac_lan_vreg: no parameters
[    1.228177] usbcore: registered new interface driver usbfs
[    1.233156] usbcore: registered new interface driver hub
[    1.238578] usbcore: registered new device driver usb
[    1.244507] cpufreq: driver msm up and running
[    1.248425] ION heap system created
[    1.251895] msm_bus_fabric_init_driver
[    1.262563] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0 Power-on reason: Triggered from PON1 (secondary PMIC) and 'cold' boot
[    1.273747] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0: Power-off reason: Triggered from UVLO (Under Voltage Lock Out)
[    1.285430] input: qpnp_pon as /devices/virtual/input/input0
[    1.291246] PMIC@SID0: PM8019 v2.2 options: 3, 2, 2, 2
[    1.296706] Advanced Linux Sound Architecture Driver Initialized.
[    1.302493] Add group failed
[    1.305291] cfg80211: Calling CRDA to update world regulatory domain
[    1.311216] cfg80211: World regulatory domain updated:
[    1.317109] Switched to clocksource arch_mem_counter
[    1.334091] cfg80211:  DFS Master region: unset
[    1.337418] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[    1.354087] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.361055] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.370545] NET: Registered protocol family 2
[    1.374082] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm), (N/A)
[    1.381851] cfg80211:   (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.389876] cfg80211:   (5250000 KHz - 5330000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.397857] cfg80211:   (5490000 KHz - 5710000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.405841] cfg80211:   (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.413795] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm), (N/A)
[    1.422355] TCP established hash table entries: 1024 (order: 0, 4096 bytes)
[    1.428921] TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
[    1.435192] TCP: Hash tables configured (established 1024 bind 1024)
[    1.441528] TCP: reno registered
[    1.444738] UDP hash table entries: 256 (order: 0, 4096 bytes)
[    1.450521] UDP-Lite hash table entries: 256 (order: 0, 4096 bytes)
[    1.456950] NET: Registered protocol family 1
[    1.462779] futex hash table entries: 256 (order: -1, 3072 bytes)
[    1.474555] msgmni has been set to 115
[    1.478551] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.485041] io scheduler noop registered
[    1.488818] io scheduler deadline registered
[    1.493200] io scheduler cfq registered (default)
[    1.502142] msm_rpm_log_probe: OK
[    1.506717] msm_serial_hs module loaded
[    1.509803] msm_serial_hsl_probe: detected port #0 (ttyHSL0)
[    1.515324] AXI: get_pdata(): Error: Client name not found
[    1.520626] AXI: msm_bus_cl_get_pdata(): client has to provide missing entry for successful registration
[    1.530171] msm_serial_hsl_probe: Bus scaling is disabled
[    1.535696] 78b3000.serial: ttyHSL0 at MMIO 0x78b3000 (irq = 153, base_baud = 460800)
[    1.544155] msm_hsl_console_setup: console setup on port #0
[    1.548727] console [ttyHSL0] enabled
[    1.548727] console [ttyHSL0] enabled
[    1.556014] bootconsole [uart0] disabled
[    1.556014] bootconsole [uart0] disabled
[    1.564212] msm_serial_hsl_init: driver initialized
[    1.578450] brd: module loaded
[    1.582920] loop: module loaded
[    1.589183] sps: BAM device 0x07984000 is not registered yet.
[    1.594234] sps:BAM 0x07984000 is registered.
[    1.598072] msm_nand_bam_init: msm_nand_bam_init: BAM device registered: bam_handle 0xc69f6400
[    1.607103] sps:BAM 0x07984000 (va:0xc89a0000) enabled: ver:0x18, number of pipes:7
[    1.616588] msm_nand_parse_smem_ptable: Parsing partition table info from SMEM
[    1.622805] msm_nand_parse_smem_ptable: SMEM partition table found: ver: 4 len: 17
[    1.630391] msm_nand_version_check: nand_major:1, nand_minor:5, qpic_major:1, qpic_minor:5
[    1.638642] msm_nand_scan: NAND Id: 0x1590aa98 Buswidth: 8Bits Density: 256 MByte
[    1.646069] msm_nand_scan: pagesize: 2048 Erasesize: 131072 oobsize: 128 (in Bytes)
[    1.653676] msm_nand_scan: BCH ECC: 8 Bit
[    1.657710] msm_nand_scan: CFG0: 0x290408c0,           CFG1: 0x0804715c
[    1.657710]             RAWCFG0: 0x2b8400c0,        RAWCFG1: 0x0005055d
[    1.657710]           ECCBUFCFG: 0x00000203,      ECCBCHCFG: 0x42040d10
[    1.657710]           RAWECCCFG: 0x42000d11, BAD BLOCK BYTE: 0x000001c5
[    1.684101] Creating 17 MTD partitions on "7980000.nand":
[    1.689447] 0x000000000000-0x000000140000 : "sbl"
[    1.694867] 0x000000140000-0x000000280000 : "mibib"
[    1.699560] 0x000000280000-0x000000e80000 : "efs2"
[    1.704408] 0x000000e80000-0x000000f40000 : "tz"
[    1.708934] 0x000000f40000-0x000000fa0000 : "rpm"
[    1.713625] 0x000000fa0000-0x000001000000 : "aboot"
[    1.718582] 0x000001000000-0x0000017e0000 : "boot"
[    1.723281] 0x0000017e0000-0x000002820000 : "scrub"
[    1.728174] 0x000002820000-0x000005020000 : "modem"
[    1.732968] 0x000005020000-0x000005420000 : "rfbackup"
[    1.738156] 0x000005420000-0x000005820000 : "oem"
[    1.742770] 0x000005820000-0x000005f00000 : "recovery"
[    1.747972] 0x000005f00000-0x000009100000 : "cache"
[    1.752787] 0x000009100000-0x000009a40000 : "recoveryfs"
[    1.758389] 0x000009a40000-0x00000aa40000 : "cdrom"
[    1.762967] 0x00000aa40000-0x00000ba40000 : "jrdresource"
[    1.768407] 0x00000ba40000-0x000010000000 : "system"
[    1.773239] msm_nand_probe: NANDc phys addr 0x7980000, BAM phys addr 0x7984000, BAM IRQ 164
[    1.781074] msm_nand_probe: Allocated DMA buffer at virt_addr 0xc7840000, phys_addr 0x87840000
[    1.791872] PPP generic driver version 2.4.2
[    1.801126] cnss_sdio 87a00000.qcom,cnss-sdio: CNSS SDIO Driver registered
[    1.807554] msm_otg 78d9000.usb: msm_otg probe
[    1.813333] msm_otg 78d9000.usb: OTG regs = c88f8000
[    1.820702] gbridge_init: gbridge_init successs.
[    1.826344] msm_otg 78d9000.usb: phy_reset: success
[    1.830294] qcom,qpnp-rtc qpnp-rtc-c7307000: rtc core: registered qpnp_rtc as rtc0
[    1.838474] i2c /dev entries driver
[    1.842459] unable to find DT imem DLOAD mode node
[    1.846588] unable to find DT imem EDLOAD mode node
[    1.851332] unable to find DT imem dload-type node
[    1.856921] bq24295-charger 4-006b: bq24295 probe enter
[    1.861161] qcom,iterm-ma = 128
[    1.864476] bq24295_otg_vreg: no parameters
[    1.868502] charger_core_register: Charger Core Version 5.0.0(Built at 20151202-21:36)!
[    1.877007] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0x3 (ok)
[    1.885559] bq24295-charger 4-006b: bq24295_set_bhot_mode 3
[    1.890150] bq24295-charger 4-006b: power_good is 1,vbus_stat is 2
[    1.896588] bq24295-charger 4-006b: bq24295_set_thermal_threshold 100
[    1.902952] bq24295-charger 4-006b: bq24295_set_sys_min 3700
[    1.908639] bq24295-charger 4-006b: bq24295_set_max_target_voltage 4150
[    1.915223] bq24295-charger 4-006b: bq24295_set_recharge_threshold 300
[    1.922119] bq24295-charger 4-006b: bq24295_set_terminal_current_limit iterm_disabled=0, iterm_ma=128
[    1.930917] bq24295-charger 4-006b: bq24295_set_precharge_current_limit bdi->prech_cur=128
[    1.940038] bq24295-charger 4-006b: bq24295_set_safty_timer 0
[    1.945088] bq24295-charger 4-006b: bq24295_set_input_voltage_limit 4520
[    1.972949] sdhci: Secure Digital Host Controller Interface driver
[    1.978151] sdhci: Copyright(c) Pierre Ossman
[    1.982441] sdhci-pltfm: SDHCI platform and OF driver helper
[    1.989092] sdhci_msm 7824900.sdhci: sdhci_msm_probe: ICE device is not enabled
[    1.995473] sdhci_msm 7824900.sdhci: No vreg data found for vdd
[    2.001530] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse_irq: error -22 reading irq cpu
[    2.009809] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for IRQ will be disabled
[    2.018600] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for cpu group will be disabled
[    2.030541] sdhci_msm 7824900.sdhci: sdhci_msm_probe: sdiowakeup_irq = 353
[    2.036867] sdhci_msm 7824900.sdhci: No vmmc regulator found
[    2.042027] sdhci_msm 7824900.sdhci: No vqmmc regulator found
[    2.048266] mmc0: SDHCI controller on 7824900.sdhci [7824900.sdhci] using 32-bit ADMA in legacy mode
[    2.080401] Welcome to pca955x_probe!!
[    2.084362] leds-pca955x 3-0020: leds-pca955x: Using pca9555 16-bit LED driver at slave address 0x20
[    2.095400] sdhci_msm 7824900.sdhci: card claims to support voltages below defined range
[    2.103125] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0x5 (ok)
[    2.114183] msm_otg 78d9000.usb: Avail curr from USB = 1500
[    2.120251] come to USB_SDP_CHARGER!
[    2.123215] Welcome to sn3199_probe!
[    2.126718] leds-sn3199 5-0064: leds-sn3199: Using sn3199 9-bit LED driver at slave address 0x64
[    2.136511] sn3199->led_en_gpio=21
[    2.139143] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0x6 (ok)
[    2.150207] usbcore: registered new interface driver usbhid
[    2.154864] usbhid: USB HID core driver
[    2.159825] sps:BAM 0x078c4000 is registered.
[    2.163573] bimc-bwmon 408000.qcom,cpu-bwmon: BW HWmon governor registered.
[    2.171080] devfreq soc:qcom,cpubw: Couldn't update frequency transition information.
[    2.178513] coresight-fuse a601c.fuse: QPDI fuse not specified
[    2.184242] coresight-fuse a601c.fuse: Fuse initialized
[    2.192407] coresight-csr 6001000.csr: CSR initialized
[    2.197263] coresight-tmc 6026000.tmc: Byte Counter feature enabled
[    2.203204] sps:BAM 0x06084000 is registered.
[    2.207301] coresight-tmc 6026000.tmc: TMC initialized
[    2.212681] coresight-tmc 6025000.tmc: TMC initialized
[    2.220071] nidnt boot config: 0
[    2.224563] mmc0: new ultra high speed SDR50 SDIO card at address 0001
[    2.231120] coresight-tpiu 6020000.tpiu: NIDnT on SDCARD only mode
[    2.236440] coresight-tpiu 6020000.tpiu: TPIU initialized
[    2.242808] coresight-replicator 6024000.replicator: REPLICATOR initialized
[    2.249372] coresight-stm 6002000.stm: STM initialized
[    2.255034] coresight-hwevent 606c000.hwevent: Hardware Event driver initialized
[    2.262312] Netfilter messages via NETLINK v0.30.
[    2.266306] nf_conntrack version 0.5.0 (920 buckets, 3680 max)
[    2.272312] ctnetlink v0.93: registering with nfnetlink.
[    2.277565] ip_set: protocol 6
[    2.280568] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.285723] arp_tables: (C) 2002 David S. Miller
[    2.290146] TCP: cubic registered
[    2.293915] NET: Registered protocol family 10
[    2.298740] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    2.303407] sit: IPv6 over IPv4 tunneling driver
[    2.308481] NET: Registered protocol family 17
[    2.312340] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
[    2.325094] Bridge firewalling registered
[    2.328930] Ebtables v2.0 registered
[    2.333260] NET: Registered protocol family 27
[    2.341362] battery_core_register: Battery Core Version 5.0.0(Built at 20151202-21:36)!
[    2.348466] pmu_battery_probe: vbat_channel=21, tbat_channel=17
[    2.420236] ubi0: attaching mtd16
[    2.723941] ubi0: scanning is finished
[    2.732997] ubi0: attached mtd16 (name "system", size 69 MiB)
[    2.737783] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    2.744601] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    2.751333] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    2.758540] ubi0: good PEBs: 556, bad PEBs: 2, corrupted PEBs: 0
[    2.764305] ubi0: user volume: 3, internal volumes: 1, max. volumes count: 128
[    2.771476] ubi0: max/mean erase counter: 192/64, WL threshold: 4096, image sequence number: 35657280
[    2.780708] ubi0: available PEBs: 0, total reserved PEBs: 556, PEBs reserved for bad PEB handling: 38
[    2.789921] ubi0: background thread "ubi_bgt0d" started, PID 96
[    2.796395] android_bind cdev: 0xC6583E80, name: ci13xxx_msm
[    2.801508] file system registered
[    2.804974] mbim_init: initialize 1 instances
[    2.809228] mbim_init: Initialized 1 ports
[    2.815074] rndis_qc_init: initialize rndis QC instance
[    2.819713] jrd device_desc.bcdDevice: [0x0242]
[    2.823779] android_bind scheduled usb start work: name: ci13xxx_msm
[    2.830230] android_usb gadget: android_usb ready
[    2.834845] msm_hsusb msm_hsusb: [ci13xxx_start] hw_ep_max = 32
[    2.840741] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_RESET_EVENT received
[    2.847433] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_UDC_STARTED_EVENT received
[    2.855851] input: gpio-keys as /devices/soc:gpio_keys/input/input1
[    2.861452] qcom,qpnp-rtc qpnp-rtc-c7307000: setting system clock to 1970-01-01 06:36:41 UTC (23801)
[    2.870315] open file error /usb_conf/usb_config.ini
[    2.876412] jrd_usb_start_work open file erro /usb_conf/usb_config.ini, retry_count:0
[    2.884324] parse_legacy_cluster_params(): Ignoring cluster params
[    2.889468] ------------[ cut here ]------------
[    2.894186] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    2.914366] Modules linked in:
[    2.917339] CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.20 #1
[    2.923171] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    2.931092] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    2.939175] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    2.947895] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    2.956189] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    2.963527] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    2.971380] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    2.980118] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    2.988467] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    2.996626] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.004786] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.012739] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.021459] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.030217] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.038818] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.046888] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.054432] ---[ end trace e9ec50b1ec4c8f73 ]---
[    3.059012] ------------[ cut here ]------------
[    3.063604] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    3.083858] Modules linked in:
[    3.086870] CPU: 0 PID: 1 Comm: swapper Tainted: G        W      3.18.20 #1
[    3.093814] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    3.101575] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    3.109641] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    3.118412] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    3.126745] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    3.134126] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    3.141906] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    3.150702] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    3.159120] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    3.167285] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.175444] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.183398] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.192107] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.200877] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.209475] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.217542] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.225090] ---[ end trace e9ec50b1ec4c8f74 ]---
[    3.229667] /soc/qcom,lpm-levels/qcom,pm-cluster@0: No CPU phandle, assuming single cluster
[    3.239954] qcom,cc-debug-mdm9607 1800000.qcom,debug: Registered Debug Mux successfully
[    3.247619] emac_lan_vreg: disabling
[    3.250507] mem_acc_corner: disabling
[    3.254196] clock_late_init: Removing enables held for handed-off clocks
[    3.262690] ALSA device list:
[    3.264732]   No soundcard
[    3.274083] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" started, PID 102
[    3.305224] UBIFS (ubi0:0): recovery needed
[    3.466156] UBIFS (ubi0:0): recovery completed
[    3.469627] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0, name "rootfs"
[    3.476987] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    3.486876] UBIFS (ubi0:0): FS size: 45838336 bytes (43 MiB, 361 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    3.497417] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[    3.503078] UBIFS (ubi0:0): media format: w4/r0 (latest is w4/r0), UUID 4DBB2F12-34EB-43B6-839B-3BA930765BAE, small LPT model
[    3.515582] VFS: Mounted root (ubifs filesystem) on device 0:12.
[    3.520940] Freeing unused kernel memory: 276K (c07e7000 - c082c000)
INIT: version 2.88 booting

8 December 2020

Russell Coker: Links December 2020

Business Insider has an informative article about the way that Google users can get locked out with no apparent reason and no recourse [1]. Something to share with clients when they consider putting everything in "the cloud". Vice has an interesting article about people jailbreaking used Teslas after Tesla has stolen software licenses that were bought with the car [2]. The Atlantic has an interesting article titled "This Article Won't Change Your Mind" [3]. It's one of many on the topic of echo chambers but has some interesting points that others don't seem to cover, such as regarding the benefits of groups when not everyone agrees. Inequality.org has lots of useful information about global inequality [4]. Jeffrey Goldberg has an insightful interview with Barack Obama for the Atlantic about the future course of American politics and a retrospective on his term in office [5]. "A Game Designer's Analysis Of QAnon" is an insightful Medium article comparing QAnon to an augmented reality game [6]. This is one of the best analyses of QAnon operations that I've seen. Decrypting Rita is one of the most interesting web comics I've read [7]. It makes good use of side scrolling and different layers to tell multiple stories at once. PC Mag has an article about the new features in Chrome 87 to reduce CPU use [8]. On my laptop I have 1/3 of all CPU time being used when it is idle, the majority of which is from Chrome. As the CPU has 2 cores this means the equivalent of 1 core running about 66% of the time just for background tabs. I have over 100 tabs open, which I admit is a lot. But it means that the active tabs (as opposed to the plain HTML or PDF ones) are averaging more than 1% CPU time on an i7, which seems obviously unreasonable. So Chrome 87 doesn't seem to live up to Google's claims. The movie Bad President, starring Stormy Daniels as herself, is out [9]. Poe's Law is passé. Interesting summary of Parler; it seems that it was designed by the Russians [10]. Wired has an interesting article about Indistinguishability Obfuscation, how to encrypt the operation of a program [11]. Joerg Jaspert wrote an interesting blog post about the difficulties of packaging Rust and Go for Debian [12]. I think that the problem is that many modern languages aren't designed well for library updates. This isn't just a problem for Debian, it's a problem for any long term support of software that doesn't involve transferring a complete archive of everything, and it's a problem for any disconnected development (remote sites and sites dealing with serious security). Having an automatic system for downloading libraries is fine. But there should be an easy way of getting the same source via an archive format (zip will do, as any archive can be converted to any other easily enough) and with version numbers.

18 October 2020

Iustin Pop: Serendipity

To start off, let me say it again: I hate light pollution. I really, really hate it. I love the night sky where you look up and see thousands of stars, and constellations besides Ursa Major. As somebody once said, "You haven't lived until you've seen your shadow by the light of the Milky Way". But, ahem, I live in a large city, and despite my attempts using star trackers, special filters, etc., you simply can't escape it. So, whenever we go on vacation in the mountains, I'm trying to think if I can do a bit of astro-photography (not that I'm good at it). Which brings me to our recent vacation up in the mountains.
I was looking forward to it, until the week before, when the weather forecast kept switching between snow, rain and overcast for the entire week. No actual day or night with clear skies, so I didn't take a tripod, I didn't take a wide lens, and put night photography out of my mind. The vacation itself was good, especially the quietness of the place, so I usually went to bed early-ish and didn't look outside. The weather was as forecast - no new snow (but there was enough up in the mountains), heavy clouds all the time, and the sun only showed itself for a few minutes at a time.
One night I was up a bit longer than usual, working on the laptop and being very annoyed by a buzzing sound. At first I thought maybe I was imagining it, but from time to time it stopped briefly, so it was a real noise; I started hunting for the source. Not my laptop, not the fridge, not the TV - but it was getting stronger near the window. I open the door to the balcony, and bam! Very loud noise, from the hotel nearby, where at midnight the pool was being cleaned. I look at the people doing the work, trying to estimate how long it'll be until they finish, but it was looking like a long time. Fortunately, with the door closed the noise was not bad enough to impact my sleep, so I debate getting angry or just resigned, and since it was late, I just sigh, roll my eyes - not metaphorically, but actually roll my eyes - and look up, and I can't believe my eyes. Completely clear sky, no trace of clouds anywhere, and stars. Lots of stars.
I sit there, looking at the sky and enjoying the view, and I think to myself that it won't look that nice on the camera, for sure. Especially without a real tripod, and without a fast lens. Nevertheless, I grab my camera and, just for kicks, take one handheld picture. To my surprise (and almost disbelief), blurry pixels aside, the photo does look like what I was seeing, so I grab the tiny tripod that I had carried along and (with only a 24-70 zoom lens) take a photo. And another, and another, and then I realise that if I can make the composition work, and find a good shutter speed, this can turn out to be a good picture. I didn't have a remote release, the tripod was not very stable and it cannot point the camera upwards (it's basically an emergency tripod), so it was quite sub-optimal; still, I try multiple shots (different compositions, different shutter speeds); they look pretty good on the camera screen and on the phone, so just for safety I take a few more and, very happy, go to bed.
Coming back from vacation, on the large monitor, it turned out that the first 28 of the 30 pictures were either blurry or not well focused (as I was focusing manually), and the 29th was almost OK but still not very good. Only the last, the really last picture, was technically good and also composition-wise OK. Luck? Foresight? Don't know, but it was worth deleting 28 pictures to get this one.
One of my best night shots, despite being so unprepared
Stars! Lots of stars! And mountains
Of course, compared to other people's pictures, this is not special. But for me, it will be a keepsake of what a real night sky should look like. If you want to zoom in, there's a higher-resolution version on flickr. Technically, the challenges for the picture were two-fold. Next time when travelling by car, I'll surely take a proper tripod. Until next time, clear and dark skies!

23 July 2020

Dima Kogan: Finding long runs of "notable" data in a log

Here's yet another instance where the data processing I needed done could be accomplished entirely in the shell, with vnlog tools. I have some time-series data in a text table. Via some join and filter operations, I have boiled down this table to a sequence of time indices where something interesting happened. For instance, let's say it looks like this: t.vnl
# time
1976
1977
1978
1979
1980
1986
1987
1988
1989
2011
2012
2013
2014
2015
4679
4680
4681
4682
4683
4684
4685
4686
4687
7281
7282
7283
7291
7292
7293
I'd like to find the longest contiguous chunk of time where the interesting thing kept happening. How? Like this!
$ < t.vnl vnl-filter -p 'time,d=diff(time)' |
          vnl-uniq -c -f -1                 |
          vnl-filter 'd==1' -p 'count=count+1,time=time-1' |
          vnl-sort -nrk count               |
          vnl-align
# count time
9       4679
5       2011
5       1976
4       1986
3       7291
3       7281
Bam! So the longest run was 9 frames long, starting at time = 4679.
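For readers without the vnlog tools installed, here is a minimal sketch of the same logic in plain Python (this is only an illustration, not part of vnlog): group consecutive time indices into runs and print the runs longest-first.
#!/usr/bin/env python3
# Sketch: read integer time indices from stdin (skipping the "# time"
# header), find contiguous runs, and print "length start" longest-first.
import sys

times = [int(line) for line in sys.stdin
         if line.strip() and not line.startswith('#')]
if not times:
    sys.exit(0)                 # nothing to do

runs = []                       # (length, start_time) pairs
start = prev = times[0]
for t in times[1:]:
    if t != prev + 1:           # gap found: close the current run
        runs.append((prev - start + 1, start))
        start = t
    prev = t
runs.append((prev - start + 1, start))

for length, start in sorted(runs, reverse=True):
    print(length, start)
Fed the t.vnl data above, this prints 9 4679 first, matching the pipeline's result.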

30 June 2020

Russ Allbery: Review: The Fifth Risk

Review: The Fifth Risk, by Michael Lewis
Publisher: W.W. Norton
Copyright: 2018
Printing: 2019
ISBN: 0-393-35745-7
Format: Kindle
Pages: 254
The Fifth Risk starts with the presidential transition. Max Stier, the first person profiled by Lewis in this book, is the founder of the Partnership for Public Service. That foundation helped push through laws to provide more resources and structure for the transition of the United States executive branch from one president to the next. The goal was to fight wasted effort, unnecessary churn, and pointless disruption in the face of each administration's skepticism about everyone who worked for the previous administration.
"It's Groundhog Day," said Max. "The new people come in and think that the previous administration and the civil service are lazy or stupid. Then they actually get to know the place they are managing. And when they leave, they say, 'This was a really hard job, and those are the best people I've ever worked with.' This happens over and over and over."
By 2016, Stier saw vast improvements, despite his frustration with other actions of the Obama administration. He believed their transition briefings were one of the best courses ever produced on how the federal government works. Then that transition process ran into Donald Trump. Or, to be more accurate, that transition did not run into Donald Trump, because neither he nor anyone who worked for him were there. We'll never know how good the transition information was because no one ever listened to or read it. Meetings were never scheduled. No one showed up. This book is not truly about the presidential transition, though, despite its presence as a continuing theme. The Fifth Risk is, at its heart, an examination of government work, the people who do it, why it matters, and why you should care about it. It's a study of the surprising and misunderstood responsibilities of the departments of the United States federal government. And it's a series of profiles of the people who choose this work as a career, not in the upper offices of political appointees, but deep in the civil service, attempting to keep that system running. I will warn now that I am far too happy that this book exists to be entirely objective about it. The United States desperately needs basic education about the government at all levels, but particularly the federal civil service. The public impression of government employees is skewed heavily towards the small number of public-facing positions and towards paperwork frustrations, over which the agency usually has no control because they have been sabotaged by Congress (mostly by Republicans, although the Democrats get involved occasionally). Mental images of who works for the government are weirdly selective. The Coast Guard could say "I'm from the government and I'm here to help" every day, to the immense gratitude of the people they rescue, but Reagan was still able to use that as a cheap applause line in his attack on government programs. Other countries have more functional and realistic social attitudes towards their government workers. The United States is trapped in a politically-fueled cycle of contempt and ignorance. It has to stop. And one way to help stop it is someone with Michael Lewis's story-telling skills writing a different narrative. The Fifth Risk is divided into a prologue about presidential transitions, three main parts, and an afterword (added in current editions) about a remarkable government worker whom you likely otherwise would never hear about. Each of the main parts talks about a different federal department: the Department of Energy, the Department of Agriculture, and the Department of Commerce. In keeping with the theme of the book, the people Lewis profiles do not do what you might expect from the names of those departments. Lewis's title comes from his discussion with John MacWilliams, a former Goldman Sachs banker who quit the industry in search of more personally meaningful work and became the chief risk officer for the Department of Energy. Lewis asks him for the top five risks he sees, and if you know that the DOE is responsible for safeguarding nuclear weapons, you will be able to guess several of them: nuclear weapons accidents, North Korea, and Iran. If you work in computer security, you may share his worry about the safety of the electrical grid. But his fifth risk was project management. Can the government follow through on long-term hazardous waste safety and cleanup projects, despite constant political turnover? 
Can it attract new scientists to the work of nuclear non-proliferation before everyone with the needed skills retires? Can it continue to lay the groundwork with basic science for innovation that we'll need in twenty or fifty years? This is what the Department of Energy is trying to do. Lewis's profiles of other departments are similarly illuminating. The Department of Agriculture is responsible for food stamps, the most effective anti-poverty program in the United States with the possible exception of Social Security. The section on the Department of Commerce is about weather forecasting, specifically about NOAA (the National Oceanic and Atmospheric Administration). If you didn't know that all of the raw data and many of the forecasts you get from weather apps and web sites are the work of government employees, and that AccuWeather has lobbied Congress persistently for years to prohibit the NOAA from making their weather forecasts public so that AccuWeather can charge you more for data your taxes already paid for, you should read this book. The story of American contempt for government work is partly about ignorance, but it's also partly about corporations who claim all of the credit while selling taxpayer-funded resources back to you at absurd markups. The afterword I'll leave for you to read for yourself, but it's the story of Art Allen, a government employee you likely have never heard of but whose work for the Coast Guard has saved more lives than we are able to measure. I found it deeply moving. If you, like I, are a regular reader of long-form journalism and watch for new Michael Lewis essays in particular, you've probably already read long sections of this book. By the time I sat down with it, I think I'd read about a third in other forms on-line. But the profiles that I had already read were so good that I was happy to read them again, and the additional stories and elaboration around previously published material was more than worth the cost and time investment in the full book.
It was never obvious to me that anyone would want to read what had interested me about the United States government. Doug Stumpf, my magazine editor for the past decade, persuaded me that, at this strange moment in American history, others might share my enthusiasm.
I'll join Michael Lewis in thanking Doug Stumpf. The Fifth Risk is not a proposal for how to fix government, or politics, or polarization. It's not even truly a book about the Trump presidency or about the transition. Lewis's goal is more basic: The United States government is full of hard-working people who are doing good and important work. They have effectively no public relations department. Achievements that would result in internal and external press releases in corporations, not to mention bonuses and promotions, go unnoticed and uncelebrated. If you are a United States citizen, this is your government and it does important work that you should care about. It deserves the respect of understanding and thoughtful engagement, both from the citizenry and from the politicians we elect. Rating: 10 out of 10

9 May 2020

Sandro Tosi: It's a waiting game... but just how long we gotta wait?

While waiting for my priority date to become current, and with enough "quarantine time" on my hands, I just came up with a very simple Python tool to parse the USCIS Visa Bulletin and gather some data from it.

You can find code and images in this GitHub repo.

For now it only contains a single plot, for the EB3 final action date; it answers a simple question: how many months before a given month your priority date needs to be if you want to file your AOS in that month. We started from FY2016, to cover the final full fiscal year of the Obama administration.
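As a rough sketch of the quantity being plotted (this is not the repository's actual code, and the example dates below are made up): the backlog is just the number of calendar months between the month a bulletin applies to and the final action date it lists.
from datetime import date

def backlog_months(bulletin_month: date, final_action_date: date) -> int:
    # How many months before the bulletin month your priority date
    # must fall in order to be "current" for filing.
    return ((bulletin_month.year - final_action_date.year) * 12
            + (bulletin_month.month - final_action_date.month))

# Hypothetical bulletin: an EB3 final action date of 1 May 2016 listed in
# the October 2019 bulletin would mean a 41-month backlog.
print(backlog_months(date(2019, 10, 1), date(2016, 5, 1)))   # 41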

If you're interested in more classes/visas, let me know and the tool could be easily extended to cover that too. PRs are always welcome.

7 April 2020

Shirish Agarwal: GMRT 2020 and lots of stories

First of all, congratulations to all those who got us the 2022 Debconf, so we will finally have a Debconf in India. There is, of course, a lot of work to be done between now and then. For those who are looking forward to visiting India, and especially Kochi, I would suggest you listen to this enriching tale.
I am sorry I used a youtube link, but it is too good a podcast not to be shared. Those who don't want youtube can use the invidio.us link for the same, as shared below. https://www.invidio.us/watch?v=BvjgKuKmnQ4 I am sure there are a lot more details, questions, answers etc., but I would direct them gently to Praveen, Shruti, Balasankar and the rest who are from Kochi to answer, if you have any questions about that history.

National Science Day, GMRT 2020
First, as always, we are and were grateful to both NCRA as well as GMRT for taking such good care of us. Even though Akshat was not around, probably getting engaged, a few of us were there: about 6-7 from Mozilla Nasik, while the rest represented the FOSS community. Here is a small picture which commemorates the event.
National Science Day, GMRT 2020
There is and was a lot to share about the event. For example, Akshay had brought an RPI-Zero as well as an RPI-2 (Raspberry Pis) and showed some things. He had also brought a Debian stable live drive with persistence, although the glare from the sun was so strong that we couldn't show it clearly to students. This was also the case with the RPIs, but still we shared what and how much we could. Maybe next year we either ask them to give us double screens or a dark room so we can showcase things much better. We did try playing with contrast and all, but it didn't have much of an effect. Of course, in another stall a few students had used RPIs as part of their projects, so at times we did tell some of the newbies to go to those stalls and see and ask about those projects, so they would have a much wider experience of things. The Mozilla people were pushing VR as well as Mozilla lite, the browser for mobile. We also gossiped quite a bit. I shared about indicatelts, a third-party certificate extension, although I dunno if I should file a WNPP about it or not. We didn't have a good experience the last time, when I had put up an RFP (Request for Package), which was accepted, for an extension with similar functionality, which we later came to know was calling home and sharing both the URL and the IP address of the sites people were using the extension on. Sadly, it didn't leave a good taste in the mouth.

Delhi Riots
One thing I have been disappointed with is the lack of general awareness about things, especially among the youth. We have people who didn't know that, for example, in the Delhi riots which happened recently, law and order (the police) lies with the Home Minister of India, Amit Shah. This is perhaps the only capital in the world which has its own Chief Minister but doesn't have any say on its law and order. And this has been the case for the last 70 years, i.e. since independence. The closest comparison I know of is the UK, but they too changed their tune in 2012. India, and especially Delhi, seems to be in a time capsule which, while being dysfunctional, somehow is made to work. In many ways, it's three bodies, or a body split into three personalities, which often makes governance a messy issue, but that probably is a topic for another day. In fact, Scroll had written a beautiful editorial noting that full statehood for Delhi was not only Arvind Kejriwal's (AAP) call but also something that both the BJP as well as the Congress had asked for in the past. In fact, nothing about the policing is in AAP's power. All salaries, postings and transfers of police personnel, everything is done by the Home Ministry, so if any blame has to be given, it has to be given to the Home Ministry.

American Capitalism and Ventilators
America has had a history of high-cost healthcare, as can be seen in this edition of USA Today from 2017. The Affordable Care Act was signed into law by President Obama in 2010, which Mr. Trump curtailed when he came into power a couple of years back. An estimated 80,000 people died due to seasonal flu in 2018-19. Similarly, anywhere between 24,000 and 63,000 are estimated to have died from last October to February-March this year. So the richest country can't take care of a population which is a third of this country's, while at the same time the United States has thrice the area that India has. I share this because seasonal flu also strikes the elderly as well as young children more than adults, so in one sense the vulnerable groups overlap, although from some of the recent stats, for Covid-19 even those who are 20+ are also vulnerable, but that's another story altogether. If you see the CDC graph of the seasonal flu, it is clear that American health experts knew about it. Another common factor joining both the seasonal flu and covid is that both need ventilators for the most serious cases. So, in 2007 it was decided that the number of ventilators needed to be ramped up; they had approximately 62k ventilators at that point in time all over the U.S. The U.S., in 2010, asked for bids and got a bid from a small Californian company called Newport Medical Instruments. The price of existing ventilators was approximately INR 700,000 at 2010 prices, while Newport said they would be able to mass-produce them at INR 200,000 at 2010 prices. The company got the order and started designing the model, which needed to be certified by the FDA. By 2011, they had the product ready, when a big company called Covidien bought Newport Medical and shut down the project. This was shared in a press release in 2012. The whole story was broken by the New York Times again, just a few days ago, which highlighted how American capitalism rode roughshod over public health and put people's lives unnecessarily in jeopardy. If those new-age ventilators had become a reality, then not just the U.S. but India and many other countries would have bought them, as every country has the same or similar needs but is unable to pay the high cost, which in many cases would be passed on to their citizens either as the price of service, or by raising taxes, or a mixture of both, with the public being none the wiser. Due to the dearth of ventilators, of specialized people to operate them, and of space, there is a possibility that many countries, including India, may have to make tough choices like the Italian doctors had to, as to whom to give a ventilator, and live with the mental and emotional guilt associated with those choices.

Some science coverage about diseases in The Wire and other publications
Since Covid coverage broke out, The Wire has been publishing various reports on India's handling of various epidemics and mysteries, some solved, some still remaining unsolved due to lack of interest or funding or both. The Nipah virus has been amply discussed in the movie Virus (2019), which I shared in the last blog post, and how easily Kerala could have ended up like Italy. Thankfully, only 24 people, including a nurse, succumbed to that outbreak, as shared in the movie. I had shared about Kerala nurses' professionalism when I was in hospital a couple of years back. It's no wonder that their understanding of hygiene and nursing procedures is a cut above the rest, hence they are sought after not just in India but world-over, including the US, the UK and the Middle East. Another study, on respiratory illness, was brought to my attention by my friend Pavithran.

Possibility of extended lockdown in India
There was talk in the media of an extended lockdown, or, better put, an environment is being created so that an extended lockdown can be done. This is probably in part due to a mathematical model and its derivatives shared about a week back by two Indian-origin Cambridge scholars, who predict that a minimum 49-day lockdown may be necessary to flatten the covid curve.
Predictions of the outcome of the current 21-day lockdown (Source: Rajesh Singh, R. Adhikari, Cambridge University)
Alternative lockdown strategies suggested by the Cambridge model (Source: Rajesh Singh, R. Adhikari, Cambridge University)
India caving to US pressure on Hydroxychloroquine
While there has been a lot of speculation in the U.S. about Hydroxychloroquine as the wonder cure, last night Mr. Trump, in response to a reporter, threatened India, saying that if Mr. Modi were to say no to Hydroxychloroquine there may be retaliations.
As shared before, if youtube is not your cup of tea, you can see the same on invidio.us: https://www.invidio.us/watch?v=YP-ewgoJPLw There have been several instances in the past of the U.S. trying to bully India, going all the way back to 1954. In recent memory, there were sanctions on India by the US under the Atal Bihari Vajpayee government (BJP) in 1998, but he didn't buckle under the pressure; now we see our current PM taking down our own notification from a day ago and sharing not just Hydroxychloroquine but also Paracetamol with other countries, so it would look as if India is sharing with other countries. Keep in mind that India and Brazil haven't seen eye to eye on trade agreements of late, and Paracetamol prices have risen in India. The price rise has been because the API (Active Pharmaceutical Ingredient) for it comes from China, where the supply chain will take time to be fixed, and we would also have to open up, although whether we should or not is another question altogether. I talk about supply chains because lean supply chains have been the talk since the late 90s, when the Japanese introduced just-in-time manufacturing, which led to lean supply chains as well as a lot of outsourcing as a consequence. Of course, the companies saved money, but at the cost of flexibility, and how this model was perhaps flawed was shared by a series of articles in the Economist as early as 2004, when there were a lot of shocks to that model, and it has only been exacerbated since then. There have been frequent shocks to these fragile ecosystems, more so since the 2008 financial meltdown, and this would put more companies out of business than ever before. The MSME sector in India had already been severely impacted, first by demonetization and then by the horrendous implementation of GST, whose cries can be heard from all sectors. Also, the frequent changing of GST rates has made markets jumpy and investors unsure. With judgements such as retrospective taxes, AGR (Adjusted Gross Revenue) etc., it has scared not only international investors but also domestic investors. The flight of capital has been noticeable. This I had shared before, when the Indian Government published the LRS report, which it hasn't since then. In fact, Outlook Business had an interesting article about it, where incidentally it talked about localcircles, a community networking platform where you get to know of a lot of things and of which I am also a member. At the very end, I apologize for not sharing the blog post before, but I was feeling down, and I'm not the only one.

6 April 2020

Russ Allbery: Review: Thick

Review: Thick, by Tressie McMillan Cottom
Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247
Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women. I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework. If that's also your reaction, read this book. I regret having waited as long as I did. Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points. The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well. The second essay, "In the Name of Beauty," is perhaps my favorite of the book. 
Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.
What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.
What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.
When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.
I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.
We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.
I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion. Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me. This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended. Rating: 9 out of 10

23 March 2020

Norbert Preining: De-uglify GTk3 tabs of terminals

If you are puzzled by the indistinguishability of the active tab from inactive tabs in any of the GTK3-based terminal emulators (mate-terminal, gnome-terminal, terminator, ...), you are not alone. I have been plagued by that for far too long, and finally found a working solution.
In the screenshot above, the tabs in the upper part are hardly distinguishable, while in the lower part the active tab is clearly indicated with different colors. Searching for the relevant terms brings up a variety of suggestions, most of them not working. I finally found this one and adapted the code therein to fit my needs. The solution is to edit ~/.config/gtk-3.0/gtk.css and add the following at the end:
notebook tab {
    /* background-color: #222; */
    padding: 0.4em;
    border: 0;
    border-color: #444;
    border-style: solid;
    border-width: 1px;
}

notebook tab:checked {
    /* background-color: #000; */
    background-image: none;
    border-color: #76C802;
}

notebook tab:checked label {
    color: #76C802;
    font-weight: 500;
}

notebook tab button {
    padding: 0;
    background-color: transparent;
    color: #ccc;
}

notebook tab button:hover {
  border: 0;
  background-image: none;
  border-color: #444;
  border-style: solid;
  border-width: 1px;
}
Here I didn't change the background color (commented out above), but changed the styling of the tab title and added a frame. The above changes the tab layout for all GTK3 elements that use the notebook widget; if one wants to restrict it to a certain terminal, one needs to prefix each CSS selector with the correct terminal indicator, as shown in the original post. Hope that helps.

19 October 2017

Clint Adams: Scuttlebutt at the precinct

So who's getting patchwork into Debian?
Posted on 2017-10-19
Tags: bamamba

31 August 2017

Chris Lamb: Free software activities in August 2017

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised (a minimal sketch of this check follows the list below). I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Presented a status update at Debconf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • I worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)
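The check referred to above is, at its core, very simple; the following sketch (file paths are placeholders, and real comparisons go through tools such as diffoscope rather than a bare digest) shows two independently built artifacts being compared bit-for-bit.
import hashlib
import sys

def sha256(path):
    # Digest of one build artifact, read in full.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Placeholder paths: the same source package built by two independent parties.
build_a, build_b = sys.argv[1], sys.argv[2]
if sha256(build_a) == sha256(build_b):
    print("bit-for-bit identical: independent builders agree on the result")
else:
    print("digests differ: the build is not reproducible (or was tampered with)")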

I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use name attribute over path to avoid leaking comparison full path in output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
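As an illustration of the class of bug disorderfs is meant to expose (this sketch is not part of disorderfs itself): a build step that hashes or archives files in whatever order the filesystem happens to return them produces a different result when that order changes.
import hashlib

def archive_digest(names_in_readdir_order, contents):
    # Mimic a naive build step that concatenates files in directory-listing
    # order without sorting them first.
    h = hashlib.sha256()
    for name in names_in_readdir_order:
        h.update(name.encode())
        h.update(contents[name])
    return h.hexdigest()

contents = {"a.o": b"object code A", "b.o": b"object code B"}
# The same two files listed in two different orders, which is exactly the
# non-determinism disorderfs injects on purpose:
print(archive_digest(["a.o", "b.o"], contents))
print(archive_digest(["b.o", "a.o"], contents))   # different digest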

  • Add a simple autopkgtest smoke test. (commit)


Debian
Patches contributed
  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)

Uploads
  • python-django 1:1.11.4-1 New upstream release.
  • redis:
    • 4:4.0.1-3 Drop yet more non-deterministic tests.
    • 4:4.0.1-4 Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 Drop even more tests with timing issues.
    • 4:4.0.1-6 Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 If SOURCE_DATE_EPOCH is set, either use that as a source of current dates or the UTC-version of the file's modification time (#836004), don't call update-alternatives --remove in postrm. update debian/watch/Homepage & refresh/tidy the packaging.
  • bfs 1.1.1-1 New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.

Debian bugs filed
  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer fields points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)

FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest. I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files against: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

6 June 2017

Reproducible builds folks: Reproducible Builds: week 110 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 28 and Saturday June 3 2017:
Past and upcoming events
Documentation updates
Toolchain development and fixes
Patches and bugs filed
4 package reviews have been added, 6 have been updated and 25 have been removed this week, adding to our knowledge about identified issues.
Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported.
diffoscope development
tests.reproducible-builds.org
Misc.
This week's edition was written by Chris Lamb, Bernhard M. Wiedemann and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
