Search Results: "tachi"

5 April 2025

Russell Coker: More About the HP ML110 Gen9 and z640

In May 2021 I bought an ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It's now March 2025 and I have replaced it.

Hardware Issues Where I previously left off, the system lacked adequate cooling to allow it to boot and had no PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan, which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn't long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place, which HP annoyingly doesn't ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don't need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that's enough for me; the only 3D game I play is Warzone 2100, which works well at 4K resolution on that card. It would have been really annoying if I had to spend $246.75 just to get the system working, but I had another system in need of a better video card which had a PCIe power cable, so the effective cost was small. I think of it as upgrading 2 systems for $123 each. The operation of the PCIe video card was a little different than on non-server systems. The built-in VGA card displayed the hardware status at the start and then kept displaying it after the system had transitioned to PCIe video. This could be handy in some situations if you know what it's doing, but was confusing initially.

Booting One insidious problem is that when booting in legacy mode the boot process takes an unreasonably long time and often hangs; the UEFI implementation on this system seems much more reliable and also supports booting from NVMe. Even with UEFI the boot process on this system was slow. Also, the early stage of the power-on process involves the fans being off and the power light flickering, which leads you to think that it's not booting and needs to have the power button pressed again, which turns it off. The Dell power-on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering whether it is actually on.

Noise This was always a noisy system. When I upgraded the CPU from an 8 core with an 85W typical TDP to an 18 core with a 145W typical TDP it became even louder. Then over time, as dust accumulated inside the machine, it became louder still, until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power, which the z640s have (all the ones I have seen have it, though I don't know whether every one HP made does) and which the cheaper models in the ML-110 line don't have. Since then I have ordered an Intel Arc card which apparently has a 190W TDP.
There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn't seem viable (that's 335W before counting the fans, RAM, and drives). I replaced it with one of the HP z640 workstations I got in 2023 [5]. The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G; going from 256G to 96G is a significant decrease, but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots, which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I'm using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs. At this time I'm not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits. The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS), so I now have 4 NVMe devices in a single PCIe slot. It is very quiet; the difference is shocking. I initially found it disconcertingly quiet. The biggest problem with the z640 is having only 4 DIMM sockets, and the particular one I'm using has a problem limiting it to 3. Another problem with the z640 when compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400; that's a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion I have no regrets about buying the ML-110. It was the only DDR4 ECC system in the price range I wanted at the time. If I had known that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable; before then I had 8*16G DIMMs to give 128G because I had had some issues of programs running out of memory when I had less.

20 February 2025

Paul Tagliamonte: boot2kier

I can't remember exactly the joke I was making at the time in my work's Slack instance (I'm sure it wasn't particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending, but it worked - no pesky kernel, no messing around with "userland". I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven't seen it, this post will seem perhaps weirder than it actually is. I promise I haven't joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan. As for how to do it, I figured I'd give the uefi crate a shot and see how it is to use, since this is a low-stakes way of trying it out. In general, this isn't the sort of thing I'd usually post about, except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting UEFI. First things first: gotta create a Rust project (I'll leave that part to you depending on your life choices) and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we're interested in:
[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
Unfortunately, I wasn't able to use the image crate, since it won't build against the uefi target. This looks like it's because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn't entirely shocking given we're no_std for a non-hardfloat target. So-called "softening" requires a software floating point implementation that the compiler can use to polyfill (feels weird to use the term polyfill here, but I guess it's spiritually right?) the lack of hardware floating point operations, which Rust hasn't implemented for this target yet. As a result, I changed tactics and figured I'd use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out-of-band pre-processing and hardcoding, and updating the image kinda sucks as a result, but it's entirely manageable.
$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin
This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also important to remember that the size of the kier.full.jpg file may not actually be the requested size, since the resize will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file. The last step with the image is to compile it into our Rust binary, since we don't want to struggle with trying to read this off disk, which is thankfully real easy to do.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
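A quick aside that isn't in the original post: the constants above have to agree with the actual size of kier.bin, so a compile-time check (assuming the same KIER constants and include_bytes! shown above) can catch a mismatched resize before it turns into scrambled pixels at boot:
// Hypothetical sanity check (my addition): the flat RGBA dump must be
// exactly width * height * 4 bytes, or the indexing later on reads the
// wrong pixels. This fails the build instead of failing at boot time.
const _: () = assert!(KIER.len() == KIER_WIDTH * KIER_HEIGHT * KIER_PIXEL_SIZE);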
Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4-byte wide values for each pixel as a result of our conversion step into RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don't entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu's default resolution for me), which means we'll get a semi-annoying black band under the image when we go to run it, but it'll work. Anyway, now that we have our image as bytes, we can get down to work and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We'll just need to hack up a container for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;
    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution, so we need to do some capping to ensure that we don't write more pixels than the display can handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! It's a bit tedious; we could solve some of this by turning KIER into an RgbImage at compile time using some clever Cow and const tricks and by implementing blitting of a sub-image, but this will do for now. This is a joke, after all; let's not go nuts. All that's left with our code is for us to write our main function and try and boot the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.
$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob While I can definitely get my machine to boot these blobs to test, I figured I'd save myself some time by using QEMU to test without a full boot. If you've not done this sort of thing before, we'll need two packages, qemu and ovmf. It's a bit different than most invocations of qemu you may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive off a local directory and attach it to the VM, so let's construct an EFI partition file structure and drop our binary into the conventional location. If you haven't done this before and are only interested in running this in a VM, don't worry too much about it; a lot of it is convention and this layout should work for you.
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi
With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.
$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp
If all goes well, soon you'll be met with the all-knowing gaze of Chosen One, Kier Eagan. The thing that really impressed me about all this is that this program worked first try; it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it's incredibly well done.

Booting a live system Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU'RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL if you use SATA; it may very well be your hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system. Disabling Secure Boot runs counter to the Core Principles, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you've taken the step to enroll a MOK and know how to use it; right about now is when we can use sbsign to sign the UEFI binary we want to boot from, to continue enforcing Secure Boot. The details of how this command should be run specifically are likely something you'll need to work out depending on how you've decided to manage your MOK.
$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi
I figured I'd leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing; it just took a matter of going into my BIOS to add the right boot option, which was no sweat. I'm sure there is a way to do it using efibootmgr, but I wasn't smart enough to do that quickly. I let 'er rip, and it booted up and worked great! It was a bit hard to get a video of my laptop, though, but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual HTTP server to control my Christmas tree), so I grabbed it out of the Christmas bin, wired it up to a video capture card I have sitting around, and figured I'd grab a video of me booting a physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than on the qemu-booted system, which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can't be assed, but should be the easy way out here) or resize the original image (pretty hardware-specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don't think I care to do that much work.
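For what it's worth, here's a rough sketch (mine, not from the original post) of what that blank-then-centre step could look like, reusing the RgbImage type and the uefi crate's GOP calls from above, and assuming the image was already capped to the display size the way praise() does:
/// Hypothetical helper (untested on real hardware): blank the display,
/// then blit the image centred instead of at (0, 0).
fn blit_centred(gop: &mut GraphicsOutput, image: &RgbImage) -> Result {
    let (screen_w, screen_h) = gop.current_mode_info().resolution();
    // Paint the whole framebuffer black so any vendor logo disappears.
    gop.blt(BltOp::VideoFill {
        color: BltPixel::new(0, 0, 0),
        dest: (0, 0),
        dims: (screen_w, screen_h),
    })?;
    // Centre the image; saturating_sub keeps the arithmetic sane if the
    // display somehow ends up smaller than the image.
    let dest = (
        screen_w.saturating_sub(image.size.0) / 2,
        screen_h.saturating_sub(image.size.1) / 2,
    );
    gop.blt(BltOp::BufferToVideo {
        buffer: &image.inner,
        src: BltRegion::Full,
        dest,
        dims: image.size,
    })
}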

But now I must away If I wanted to keep this joke going, I'd likely try and find a copy of the original video when Helly 100%s her file and boot into that, or maybe play a terrible MIDI PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don't have any friends involved with production (yet?), so I reckon all that's out for now. I'll likely stop playing with this; the joke was done, and I'm only writing this post because of how great everything was along the way. All in all, this reminds me so much of building a homebrew kernel to boot a system into, but like, good, though, and it's a nice reminder of both how fun this stuff can be and how far we've come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can't believe how good the uefi crate is specifically. Praise Kier! Kudos to everyone involved in making this so delightful.

28 January 2025

Bits from Debian: New Debian Developers and Maintainers (November and December 2024)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

11 August 2024

Ravi Dwivedi: My Austrian Visa Refusal Story

Vienna - the capital of Austria - is one of the most visited cities in the world, popular for its rich history, gardens, and cafes, along with well-known figures like Beethoven, Mozart, Gödel, and Freud. It has also been consistently ranked as the most livable city in the world. For these reasons, I was elated when my friend Snehal invited me last year to visit Vienna for a few days. We included Christmas and New Year's Eve in my itinerary due to the city's popular Christmas markets and lively events. The festive season also ensured that Snehal had some days off for sightseeing. Indians require a visa to visit Austria. Since the travel dates were near, I rushed to book an appointment online with VFS Global in Delhi, and quickly arranged the required documents. However, at VFS, I found out that I had applied in the wrong appointment category (tourist), which depends on the purpose of the visit, and that my travel dates did not allow enough time for the visa authorities to make a decision. Apparently, even if you plan to stay only for a part of the trip with the host, you need to apply under the category "Visiting Friends and Family". Thus, I had to book another appointment under this category, and took the opportunity to shift my travel dates to allow at least 15 business days for the visa application to be processed, removing Christmas and New Year's Eve from my itinerary. The process went smoothly, and my visa application was submitted by VFS. For reference, here's a list of documents I submitted - The following charges were collected from me.
Service Description Amount (Indian Rupees)
Cash Handling Charge - SAC Code: (SAC:998599) 0
VFS Fee - India - SAC Code: (SAC:998599) 1,820
VISA Fee - India - SAC Code: 7,280
Convenience Fee - SAC Code: (SAC:998599) 182
Courier Service - SAC Code: (SAC:998599) 728
Courier Assurance - SAC Code: (SAC:998599) 182
Total 10,192
I later learned that the courier charges (728 INR) and the courier assurance charges (182 INR) mentioned above were optional. However, VFS didn't ask whether I wanted to include them. When the embassy is done processing your application, it will send your passport back to VFS, from where you can either collect it yourself or get it couriered back home, which requires you to pay courier charges. However, courier assurance charges do not add any value, as VFS cannot assure anything about the courier, and I suggest you get them removed. My visa application was submitted on the 21st of December 2023. A few days later, on the 29th of December 2023, I received an email from the Austrian embassy asking me to submit an additional document -
Subject: AUSTRIAN VISA APPLICATION - AMENDMENT REQUEST: Ravi Dwivedi VIS 4331 Dear Applicant, On 22.12.2023 your application for Visa C was registered at the Embassy. You are requested to kindly send the scanned copies of the following documents via email to the Embassy or submit the documents at the nearest VFS centre, for further processing of your application:
  • Kindly submit Electronic letter of guarantee EVE- Elektronische Verpflichtungserklärung obtained from the Fremdenpolizeibehörde of the sponsor's district in Austria. Once your host company/inviting company has obtained the EVE, please share the reference number (starting from DEL_____) received from the authorities, with the Embassy.
Kindly Note: It is in your own interest to fulfil the requirements as indicated above and submit the missing documents within 14 days of the receipt of this email. Otherwise a decision will be taken based on the documentation available. Sie werden in Ihrem Interesse ersucht, die gekennzeichneten Mängel so schnell wie möglich zu beheben bzw. fehlende Unterlagen umgehend nachzureichen, um die weitere Bearbeitung des Antrages zu ermöglichen. Sollten Sie innerhalb 14 Tagen die gekennzeichneten Mängel nicht beheben bzw. die fehlenden Unterlagen nicht nachreichen, wird über den vorliegenden Antrag ohne diese Unterlagen bzw. Mängelbehebung entschieden. Austrian Embassy New Delhi R.J/ Consular Section +91 11 2419 2700 EP-13, Chandragupta Marg, Chanakyapuri, New Delhi 110 021, India bmeia.gv.at/botschaft/new-delhi facebook.at/AustrianEmbassyNewDelhi twitter.com/MFA_Austria
I misunderstood the required document (the EVE) to be a scanned copy of the letter of guarantee form signed by Snehal, and responded by attaching it. Upon researching, Snehal determined that the document is an electronic letter of guarantee, and is supposed to be obtained at a local police station in Vienna. He visited a police station the next day and had a hard time conversing due to the language barrier (German is the common language in Austria, whereas Snehal speaks English). That day was a weekend, so he took an appointment for Monday, but in the meantime the embassy had finished processing my visa. My visa was denied, and the refusal letter stated:
The Austrian embassy in Delhi examined your application; the visa has been refused. The decision is based on the following reason(s):
  • The information submitted regarding the justification for the purpose and conditions of the intended stay was not reliable.
  • There are reasonable doubts as to your intention to leave the territory of the Member States before the expiry of the visa.
Other remarks: You have been given an amendment request, which you have failed to fulfil, or have only fulfilled inadequately, within the deadline set. You are a first-time traveller. The social and economic roots with the home country are not evident. The return from Schengen territory does therefore not seem to be certain.
I could have reapplied after obtaining the EVE, but I didn t because I found the following line
The social and economic roots with the home country are not evident.
offensive for someone who was born and raised in India, got the impression that the absence of the electronic guarantee letter was not the only reason behind the refusal, had already wasted 12,000 INR on this application, and my friend's stay in Austria was uncertain after January. In fact, my friend soon returned to India. To summarize -
  1. If you are visiting a host, then the category of appointment at VFS must be "Visiting Friends and Family" rather than "Tourist".
  2. VFS charged me for courier assurance, which is an optional service. Make sure to get these removed from your bill.
  3. Neither my travel agent nor the VFS application center mentioned the EVE.
  4. While the required documents list from the VFS website does mention it in point 6, it leads to a dead link.
  5. Snehal informed me that a mere two months ago, his wife's visa was approved without an EVE. This hints at inconsistency in the processing of applications, even those under identical categories.
Such incidents are a waste of time and money for applicants, and an embarrassment to VFS and the Austrian visa authorities. I suggest that the Austrian visa authorities fix that URL, and provide instructions for hosts to obtain the EVE. Credits to Snehal and Contrapunctus for editing, Badri for proofreading.

20 December 2023

Melissa Wen: The Rainbow Treasure Map Talk: Advanced color management on Linux with AMD/Steam Deck.

Last week marked a major milestone for me: the AMD driver-specific color management properties reached the upstream linux-next! And to celebrate, I'm happy to share the slide notes from my 2023 XDC talk, The Rainbow Treasure Map, along with the individual recording that just dropped last week on YouTube; talk about happy coincidences!

Steam Deck Rainbow: Treasure Map & Magic Frogs While I may be bubbly and chatty in everyday life, the stage isn't exactly my comfort zone (hallway talks are more my speed). But the journey of developing the AMD color management properties was so full of discoveries that I simply had to share the experience. Witnessing the fantastic work of Jeremy and Joshua bring it all to life on the Steam Deck OLED was like uncovering magical ingredients and whipping up something truly enchanting. For XDC 2023, we split our Rainbow journey into two talks. My focus, The Rainbow Treasure Map, explored the new color features we added to the Linux kernel driver, diving deep into the hardware capabilities of AMD/Steam Deck. Joshua then followed with The Rainbow Frogs and showed the breathtaking color magic released on Gamescope thanks to the power unlocked by the kernel driver's Steam Deck color properties.

Packing a Rainbow into 15 Minutes I had so much to tell, but a half-slot talk meant crafting a concise presentation. To squeeze everything into 15 minutes (and calm my pre-talk jitters a bit!), I drafted and practiced those slides and notes countless times. So grab your map, and let's embark on the Rainbow journey together!

Slide 1: The Rainbow Treasure Map - Advanced Color Management on Linux with AMD/SteamDeck Intro: Hi, I'm Melissa from Igalia and welcome to the Rainbow Treasure Map, a talk about advanced color management on Linux with AMD/SteamDeck.

Slide 2: List useful links for this technical talk Useful links: First of all, if you are not used to the topic, you may find these links useful.
  1. XDC 2022 - I'm not an AMD expert, but - Melissa Wen
  2. XDC 2022 - Is HDR Harder? - Harry Wentland
  3. XDC 2022 Lightning - HDR Workshop Summary - Harry Wentland
  4. Color management and HDR documentation for FOSS graphics - Pekka Paalanen et al.
  5. Cinematic Color - 2012 SIGGRAPH course notes - Jeremy Selan
  6. AMD Driver-specific Properties for Color Management on Linux (Part 1) - Melissa Wen
Slide 3: Why do we need advanced color management on Linux? Context: When we talk about colors in the graphics chain, we should keep in mind that we have a wide variety of source content colorimetry, a variety of output display devices and also the internal processing. Users expect consistent color reproduction across all these devices. The userspace can use GPU-accelerated color management to get it. But this also requires an interface with display kernel drivers that is currently missing from the DRM/KMS framework.

Slide 4: Describe our work on AMD driver-specific color properties Since April, I've been bothering the DRM community by sending patchsets from the work of me and Joshua to add driver-specific color properties to the AMD display driver. In parallel, discussions on defining a generic color management interface are still ongoing in the community. Moreover, we are still not clear about the diversity of color capabilities among hardware vendors. To bridge this gap, we defined a color pipeline for Gamescope that fits the latest versions of AMD hardware. It delivers advanced color management features for gamut mapping, HDR rendering, SDR on HDR, and HDR on SDR.

Slide 5: Describe the AMD/SteamDeck - our hardware AMD/Steam Deck hardware: AMD frequently releases new GPU and APU generations. Each generation comes with a DCN version with display hardware improvements. Therefore, keep in mind that this work uses the AMD Steam Deck hardware and its kernel driver. The Steam Deck is an APU with a DCN3.01 display driver, a DCN3 family. It's important to have this information since newer AMD DCN drivers inherit implementations from previous families, but also each generation of AMD hardware may introduce new color capabilities. Therefore I recommend you familiarize yourself with the hardware you are working on.

Slide 6: Diagram with the three layers of the AMD display driver on Linux The AMD display driver in the kernel space: It consists of three layers, (1) the DRM/KMS framework, (2) the AMD Display Manager, and (3) the AMD Display Core. We extended the color interface exposed to userspace by leveraging existing DRM resources and connecting them using driver-specific functions for color property management.

Slide 7: Three-layers diagram highlighting AMD Display Manager, DM - the layer that connects DC and DRM Bridging DC color capabilities and the DRM API required significant changes in the color management of AMD Display Manager - the Linux-dependent part that connects the AMD DC interface to the DRM/KMS framework.

Slide 8: Three-layers diagram highlighting AMD Display Core, DC - the shared code The AMD DC is the OS-agnostic layer. Its code is shared between platforms and DCN versions. Examining this part helps us understand the AMD color pipeline and hardware capabilities, since the machinery for hardware settings and resource management is already there.

Slide 9: Diagram of the AMD Display Core Next architecture with main elements and data flow The newest architecture for AMD display hardware is the AMD Display Core Next.

Slide 10: Diagram of the AMD Display Core Next where only DPP and MPC blocks are highlighted In this architecture, two blocks have the capability to manage colors:
  • Display Pipe and Plane (DPP) - for pre-blending adjustments;
  • Multiple Pipe/Plane Combined (MPC) - for post-blending color transformations.
Let's see what we have in the DRM API for pre-blending color management.

Slide 11: Blank slide with no content only a title 'Pre-blending: DRM plane' DRM plane color properties: This is the DRM color management API before blending. Nothing! Except two basic DRM plane properties: color_encoding and color_range for the input colorspace conversion, which is not covered by this work.

Slide 12: Diagram with color capabilities and structures in AMD DC layer without any DRM plane color interface (before blending), only the DRM CRTC color interface for post blending In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. But much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code.

Slide 13: Previous Diagram with a rectangle to highlight the empty space in the DRM plane interface that will be filled by AMD plane properties Still, both the DRM interface and its connection to the shared code were missing. That's when the search begins!

Slide 14: Color Pipeline Diagram with the plane color interface filled by AMD plane properties but without connections to AMD DC resources AMD driver-specific color pipeline: Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that. We had many iterations and discoveries until we reached this pipeline.

Slide 15: Color Pipeline Diagram connecting AMD plane degamma properties, LUT and TF, to AMD DC resources The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light linear values.

Slide 16: Describe plane degamma properties and hardware capabilities We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space. Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6. We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in the DRM/KMS. It's an array of drm_color_lut that goes to the DPP Gamma Correction block.

Slide 17: Color Pipeline Diagram connecting AMD plane CTM property to AMD DC resources We also now have a color transformation matrix (CTM) for color space conversion.

Slide 18: Describe plane CTM property and hardware capabilities It's a 3x4 matrix of fixed-point values that goes to the DPP Gamut Remap Block. Both pre- and post-blending matrices previously went to the same color block. We worked on detaching them to clear both paths. Now each CTM goes its own way.

Slide 19: Color Pipeline Diagram connecting AMD plane HDR multiplier property to AMD DC resources Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness.

Slide 20: Describe plane HDR mult property and hardware capabilities This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ(HDR) transfer functions.

Slide 21: Color Pipeline Diagram connecting AMD plane shaper properties, LUT and TF, to AMD DC resources And we need a 3D LUT.
But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision, which means a non-linear space. To deliver it, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after to linearize content again for blending.

Slide 22: Describe plane shaper properties and hardware capabilities The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build the following shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block.

Slide 23: Color Pipeline Diagram connecting AMD plane 3D LUT property to AMD DC resources Finally, our rockstar, the 3D LUT. The 3D LUT is perfect for complex color transformations and adjustments between color channels.

Slide 24: Describe plane 3D LUT property and hardware capabilities The 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost.

Slide 25: Color Pipeline Diagram connecting AMD plane blend properties, LUT and TF, to AMD DC resources As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT.

Slide 26: Describe plane blend properties and hardware capabilities Similar to the shaper TF, there are no hardcoded curves for the Blend TF. The pre-defined curves are the same as the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block.

Slide 27: Color Pipeline Diagram with all AMD plane color properties connected to AMD DC resources and links showing the conflict between plane and CRTC degamma Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept that both are set at the same time.

Slide 28: Color Pipeline Diagram connecting AMD CRTC gamma TF property to AMD DC resources We also optimized the conversion of the framebuffer to wire encoding by adding support for pre-defined CRTC Gamma TF.

Slide 29: Describe CRTC gamma TF property and hardware capabilities Again, there are no hardcoded curves, and the TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block.

Slide 30: Color Pipeline Diagram with all AMD driver-specific color properties connected to AMD DC resources Finally, we arrived at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing.

Slide 31: SteamDeck/Gamescope Color Pipeline Diagram with rectangles labeling each block of the pipeline with the related AMD color property With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD GPU. We highlight here how we map the Gamescope color pipeline to each AMD color block.

Slide 32: Final slide. Thank you! Future works: The search for the rainbow treasure is not over! The Linux DRM subsystem contains many hidden treasures from different vendors.
We want more complex color transformations and adjustments available on Linux. We also want to expose all GPU color capabilities from all hardware vendors to the Linux userspace. Thanks Joshua and Harry for this joint work and the Linux DRI community for all feedback and reviews. The amazing part of this work comes in the next talk with Joshua and The Rainbow Frogs! Any questions?
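To make the pre-blending math a bit more concrete, here is a small illustration (mine, not from the talk, and not driver code) of what a 3x4 CTM followed by an HDR multiplier does to one linear RGB pixel, using plain floating point rather than the fixed-point format the hardware actually takes:
// Rough illustration only: apply a 3x4 colour transformation matrix and
// an HDR multiplier to a single linear RGB pixel.
fn ctm_then_hdr_mult(ctm: &[[f32; 4]; 3], hdr_mult: f32, rgb: [f32; 3]) -> [f32; 3] {
    let mut out = [0.0f32; 3];
    for (i, row) in ctm.iter().enumerate() {
        // Each output channel is a weighted sum of R, G and B plus an
        // offset (the fourth matrix column).
        out[i] = row[0] * rgb[0] + row[1] * rgb[1] + row[2] * rgb[2] + row[3];
        // The HDR multiplier scales brightness; the result may exceed 1.0,
        // which is why later stages use PQ(HDR) transfer functions.
        out[i] *= hdr_mult;
    }
    out
}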
References:
  1. Slides of the talk The Rainbow Treasure Map.
  2. Youtube video of the talk The Rainbow Treasure Map.
  3. Patch series for AMD driver-specific color management properties (upstream Linux 6.8v).
  4. SteamDeck/Gamescope color management pipeline
  5. XDC 2023 website.
  6. Igalia website.

7 December 2023

Dima Kogan: roslaunch and LD_PRELOAD

This is part 2 of our series entitled "ROS people don't know how to use computers". This is about ROS1. ROS2 is presumably broken in some completely different way, but I don't know. Unlike normal people, the ROS people don't "run" applications. They "launch" "nodes" from "packages" (these are "ROS" packages; obviously). You run
roslaunch PACKAGE THING.launch
Then it tries to find this PACKAGE (using some rules that nobody understands), and tries to find the file THING.launch within this package. The .launch file contains inscrutable xml, which includes other inscrutable xml. And if you dig, you eventually find stuff like
<node pkg="PACKAGE"
      name="NAME"
      type="TYPE"
      args="...."
      ...>
This defines the thing that runs. Unexpectedly, the executable that ends up running is called TYPE. I know that my particular program is broken, and needs an LD_PRELOAD (exciting details described in another rant in the near future). But the above definition doesn't have a clear way to add that. Adding it to the type fails (with a very mysterious error message). Reading the docs tells you about launch-prefix, which sounds exactly like what I want. But when I add LD_PRELOAD=/tmp/whatever.so I get
RLException: Roslaunch got a 'No such file or directory' error while attempting to run:
LD_PRELOAD=/tmp/whatever.so ..../TYPE .....
But this is how you're supposed to be attaching gdb and such! Presumably it looks at the first token, and makes sure it's a file, instead of simply prepending it to the string it passes to the shell. So your options are: I'm expert-enough. You do this:
launch-prefix="/lib64/ld-linux-x86-64.so.2 --preload /tmp/whatever.so"

20 October 2023

Freexian Collaborators: Debian Contributions: Freexian meetup, debusine updates, lpr/lpd in Debian, and more! (by Utkarsh Gupta, Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Freexian Meetup, by Stefano Rivera, Utkarsh Gupta, et al. During DebConf, Freexian organized a meetup for its collaborators and those interested in learning more about Freexian and its services. It was well received and many people interested in Freexian showed up. Some developers who were interested in contributing to LTS came to get more details about joining the project. And some prospective customers came to get to know us and ask questions. Sadly, the tragic loss of Abraham shook DebConf, both individually and structurally. The meetup got rescheduled to a small room without video coverage. With that, we still had a wholesome interaction and here's a quick picture from the meetup taken by Utkarsh (which is also why he's missing!).

Debusine, by Raphaël Hertzog, et al. Freexian has been investing in debusine for a while, but development speed is about to increase dramatically thanks to funding from SovereignTechFund.de. Raphaël laid out the 5 milestones of the funding contract and filed the issues for the first milestone. Together with Enrico and Stefano, they established a workflow for the expanded team. Among the first steps of this milestone, Enrico started to work on a developer-friendly description of debusine that we can use when we reach out to the many Debian contributors that we will have to interact with. And Raphaël started the design work of the autopkgtest and lintian tasks, i.e. what's the interface to schedule such tasks, and what behavior and associated options do we support? At this point you might wonder what debusine is supposed to be; let us try to answer this: Debusine manages scheduling and distribution of Debian-related build and QA tasks to a network of worker machines. It also manages the resulting artifacts and provides the results in an easy-to-consume way. We want to make it easy for Debian contributors to leverage all the great QA tools that Debian provides. We want to build the next generation of Debian's build infrastructure, one that will continue to reliably do what it already does, but that will also enable distribution-wide experiments, custom package repositories and custom workflows with advanced package reviews. If this all sounds interesting to you, don't hesitate to watch the project on salsa.debian.org and to contribute.

lpr/lpd in Debian, by Thorsten Alteholz During Debconf23, Till Kamppeter presented CPDB (Common Print Dialog Backend), a new way to handle print queues. After this talk it was discussed whether the old lpr/lpd based printing system could be abandoned in Debian or whether there is still demand for it. So Thorsten asked on the debian-devel email list whether anybody uses it. Oddly enough, these old packages are still useful:
  • Within a small network it is easier to distribute a printcap file, than to properly configure cups clients.
  • One of the biggest manufacturers of WLAN routers and DSL boxes only supports raw queues when attaching a USB printer to their hardware. Admittedly, CPDB still has problems with such raw queues.
  • The Pharos printing system at MIT is still lpd-based.
As a result, the lpr/lpd stuff is not yet ready to be abandoned, and Thorsten will adopt the relevant packages (or rather move them under the umbrella of the debian-printing team). Though there are no plans to develop new features, those packages should at least have a maintainer. This month Thorsten adopted rlpr, a utility for lpd printing without using /etc/printcap. The next one he is working on is lprng, an lpr/lpd printer spooling system. If you know of any other package that is also needed and still maintained by the QA team, please tell Thorsten.

/usr-merge, by Helmut Grohne Discussion about lifting the file move moratorium has been initiated with the CTTE and the release team. A formal lift is dependent on updating debootstrap in older suites though. A significant number of packages can automatically move their systemd unit files if dh_installsystemd and systemd.pc change their installation targets. Unfortunately, doing so makes some packages FTBFS and therefore patches have been filed. The analysis tool, dumat, has been enhanced to better understand which upgrade scenarios are considered supported to reduce false positive bug filings and gained a mode for local operation on a .changes file meant for inclusion in salsa-ci. The filing of bugs from dumat is still manual to improve the quality of reports. Since September, the moratorium has been lifted.

Miscellaneous contributions
  • Raphaël updated Django's backport in bullseye-backports to match the latest security release that was published in bookworm. Tracker.debian.org is still using that backport.
  • Helmut Grohne sent 13 patches for cross build failures.
  • Helmut Grohne performed a maintenance upload of debvm enabling its use in autopkgtests.
  • Helmut Grohne wrote an API-compatible reimplementation of autopkgtest-build-qemu. It is powered by mmdebstrap, therefore unprivileged, EFI-only and will soon be included in mmdebstrap.
  • Santiago continued the work regarding how to make it easier to (automatically) test reverse dependencies. An example of the ongoing work was presented during the Salsa CI BoF at DebConf 23.
    In fact, omniorb-dfsg test pipelines as the above were used for the omniorb-dfsg 4.3.0 transition, verifying how the reverse dependencies (tango, pytango and omnievents) were built and how their autopkgtest jobs run with the to-be-uploaded omniorb-dfsg new release.
  • Utkarsh and Stefano attended and helped run DebConf 23. Also continued winding up DebConf 22 accounting.
  • Anton Gladky did some science team uploads to fix RC bugs.

30 September 2023

Ian Jackson: DKIM: rotate and publish your keys

If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released dkim-rotate 1.0; dkim-rotate is a tool to do this key rotation and publication. If you are an email user, your email provider ought to be doing this. If this is not done, your emails are "non-repudiable", meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you).

Non-repudiation of emails is undesirable This problem was described at some length in Matthew Green's article Ok Google: please publish your DKIM secret keys. Avoiding non-repudiation sounds a bit like lying. After all, I'm advising creating a situation where some people can't verify that something is true, even though it is. So I'm advocating casting doubt. Crucially, though, it's doubt about facts that ought to be private. When you send an email, that's between you and the recipient. Normally you don't intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it. In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide a verification tool.

Advice for all email users As a user, you probably don't want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.) So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn't :-(.

How to tell by looking at email headers A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.) If your friend sees a DKIM-Signature header then the message is DKIM signed. If they don't, then it wasn't. Most email traversing the public internet is DKIM signed nowadays; so if they don't see the header, probably they're not looking using the right tools, or they're actually on the same email system as you. In messages signed by a system running dkim-rotate, there will also be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate's header looks like this:
DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem
But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won't be such a header.

Testing verification of new and old messages You can also try verifying the signatures. This isn't entirely straightforward, especially if you don't have access to low-level mail tooling. Your friend will need to be able to save emails as raw whole headers and body, un-decoded, un-rendered. If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have instructions for attaching and transferring and obtaining the raw email. (Scroll down to "How to Check DKIM and ARC".)

Checking that recent emails are verifiable Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons. Send your friend a test email now, and have them do this on a Linux system:
    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify <test-email.mbox
You should see output containing something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...
If the output contains verify result: fail (body has been altered) then probably your friend didn't manage to faithfully save the unaltered raw message.

Checking old emails cannot be verified When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps. Hopefully they will see something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)
or maybe
    verify result: invalid (public key: not available)
This indicates that this old email can no longer be verified. That's good: it means that anyone who steals a copy can't verify it either. If it's leaked, the journalist who receives it won't know it's genuine and unmodified; they should then be suspicious. If your friend sees verify result: pass, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you.

For email admins: announcing dkim-rotate 1.0 I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that dkim-rotate is 1.0. If you're a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable, and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting.

Limitation of this approach Even with this key rotation approach, emails remain non-repudiable for a short period after they're sent - typically, a few days. Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn't apply to the vast bulk of your email archive. There are possible email protocol improvements which might help, but they're quite out of scope for this article.
Edited 2023-10-01 00:20 +01:00 to fix some grammar



13 September 2023

Matthew Garrett: Reconstructing an invalid TPM event log

TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.

This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
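As a rough sketch of that replay check (mine, not from the post), assuming a SHA-256 PCR bank and using the Rust sha2 crate for hashing:
// Illustrative only: replay a list of per-event SHA-256 digests and
// compare the running value against the PCR value read from the TPM.
use sha2::{Digest, Sha256};

fn event_log_matches_pcr(event_digests: &[[u8; 32]], pcr_value: &[u8; 32]) -> bool {
    // Most PCRs (including PCR 7) start out as 32 bytes of zeroes.
    let mut pcr = [0u8; 32];
    for digest in event_digests {
        // Extend: new_pcr = SHA-256(old_pcr || event_digest)
        let mut hasher = Sha256::new();
        hasher.update(pcr);
        hasher.update(digest);
        pcr.copy_from_slice(&hasher.finalize());
    }
    &pcr == pcr_value
}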

If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.

PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.

We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).

But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.

Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.

But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing BitLocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.

With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.
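Purely to illustrate the kind of post-processing involved, here is a hedged Python sketch that scans a flat byte dump (as might be exported from a logic analyser) for the 0x00000182 measurement command code mentioned above. The file name and export format are assumptions, not the actual tooling used for this work.

# Hedged sketch: scan bytes captured from the TPM's SPI bus for the
# measurement (PCR extend) command code 0x00000182 described above.
# The dump file name and flat byte export are assumptions about the output.
CC_PCR_EXTEND = bytes.fromhex("00000182")

with open("tpm-spi-dump.bin", "rb") as f:   # hypothetical export
    data = f.read()

offset = data.find(CC_PCR_EXTEND)
while offset != -1:
    # Print surrounding context so the PCR index and digest that follow the
    # command code can be inspected by hand.
    window = data[max(0, offset - 8):offset + 64]
    print(f"possible PCR extend at offset {offset:#x}: {window.hex()}")
    offset = data.find(CC_PCR_EXTEND, offset + 1)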

What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.

[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.


27 April 2023

Arturo Borrero González: Kubecon and CloudNativeCon 2023 Europe summary

This post serves as a report on my attendance at Kubecon and CloudNativeCon 2023 Europe, which took place in Amsterdam in April 2023. It was my second time physically attending this conference; the first one was in Austin, Texas (USA) in 2017. I also attended once in a virtual fashion. The content here is mostly generated for the sake of my own recollection and learnings, and is written from the notes I took during the event. The very first session was the opening keynote, which gathered the whole crowd to bootstrap the event and share the excitement about the days ahead. Some astonishing numbers were announced: there were more than 10,000 people attending, and apparently it could confidently be said that it was the largest open source technology conference taking place in Europe in recent times. It was also communicated that the next couple of iterations of the event will be held in China in September 2023 and Paris in March 2024. More numbers: the CNCF was hosting about 159 projects, involving 1,300 maintainers and about 200,000 contributors. The cloud-native community is ever-increasing, and there seems to be a strong trend in the industry for cloud-native technology adoption and all things related to PaaS and IaaS. The event program had different tracks, and in each one there was an interesting mix of low-level and higher-level talks for a variety of audiences. On many occasions I found that reading the talk title alone was not enough to know in advance if a talk was a 101 kind of thing or for experienced engineers. But unlike in previous editions, I didn't have the feeling that the purpose of the conference was to try selling me anything. Obviously, speakers would make sure to mention, or highlight in a subtle way, the involvement of a given company in a given solution or piece of the ecosystem. But it was non-invasive and fair enough for me. On a different note, I found the breakout rooms to often be small. I think there were only a couple of rooms that could accommodate more than 500 people, which is a fairly small allowance for 10k attendees. I realized with frustration that the more interesting talks were immediately fully booked, with people waiting in line some 45 minutes before the session time. Because of this, I missed a few important sessions that I'll hopefully watch online later. Finally, on the more technical side, I learned many things which, instead of grouping by session, I'll group by topic, given how some subjects were mentioned in several talks. On gitops and CI/CD pipelines Most of the mentions went to FluxCD and ArgoCD. At that point there were no doubts that gitops was a mature approach and both Flux and ArgoCD could do an excellent job. ArgoCD seemed a bit more over-engineered, aiming to be a more general-purpose CD pipeline, while Flux felt a bit more tailored to simpler gitops setups. I discovered that both have nice web user interfaces that I wasn't previously familiar with. However, in two different talks I got the impression that the initial setup of them was simple, but migrating your current workflow to gitops could result in a bumpy ride. That is, the challenge is not deploying flux/argo itself, but moving everything into a state that both humans and flux/argo can understand. I also saw some curious mentions of the config drift that can happen in some cases, even if the goal of gitops is precisely for that never to happen. Such mentions were usually accompanied by some hints on how to handle the situation by hand.
Worth mentioning: I didn't find any practical information about one of the key pieces of this whole gitops story: building container images. Most of the showcased scenarios were using pre-built container images, so in that sense they were simple. Building and pushing to an image registry is one of the two key points we would need to solve in Toolforge Kubernetes if adopting gitops. In general, even if gitops was already on our radar for Toolforge Kubernetes, I think it climbed a few steps in my priority list after the conference. Another learning was this site: https://opengitops.dev/. On etcd, performance and resource management I attended a talk focused on etcd performance tuning that was very encouraging. They were basically talking about the exact same problems we have had in Toolforge Kubernetes, like api-server and etcd failure modes, and how sensitive etcd is to disk latency, IO pressure and network throughput. Even though the Toolforge Kubernetes scale is small compared to other Kubernetes deployments out there, I found it very interesting to see others' approaches to the same set of challenges. I learned how most Kubernetes components and apps can overload the api-server, because even the api-server talks to itself. Simple things like kubectl may have a completely different impact on the API depending on usage, for example when listing the whole set of objects (very expensive) vs a single object. The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (full dumps being, by the way, the default behaviour of bare kubectl get calls). I already knew some of this, and for example the jobs-framework-emailer was already making use of this ResourceVersion functionality. There have been a lot of improvements on the performance side of Kubernetes in recent times, or more specifically, in how resources are managed and used by the system. I saw a review of resource management from the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware scheduling decisions and dynamic resource claims (changing the pod resource claims without re-defining/re-starting the pods). On cluster management, bootstrapping and multi-tenancy I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers themselves. This was of interest to me because as of today we use it for Toolforge. They shared all the latest developments and improvements, and the plans and roadmap for the future, with a special mention of something they called the 'kubeadm operator', apparently capable of auto-upgrading the cluster, auto-renewing certificates and such. I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was the best, from the point of view of being a well established and well-known workflow, plus having a very active contributor base. The kubeadm developers invited the audience to submit feature requests, so I did. The different talks confirmed that the basic unit for multi-tenancy in Kubernetes is the namespace. Any serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression that multicluster / multicloud setups are regarded as hard topics in the general community.
I definitely would like to play with it sometime down the road. On networking I attended a couple of basic sessions that served really well for understanding how Kubernetes instruments the network to achieve its goals. The conference program had sessions covering topics ranging from network debugging recommendations and CNI implementations to IPv6 support. Also, one of the keynote sessions had a reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was that Calico has massive community adoption (in Netfilter mode), which is reassuring, especially considering it is the one we use for Toolforge Kubernetes. On jobs I attended a couple of talks that were related to HPC/grid-like usages of Kubernetes. I was truly impressed by some folks out there who were using Kubernetes Jobs at massive scale, for example to train machine learning models and other fancy AI projects. It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some limitations that are now gone, or at least greatly reduced. Some new functionalities have been added as well. Indexed Jobs, for example, enable each Job pod to have a number (index) and process a chunk of a larger batch of data based on that index. That would allow for full grid-like features like sequential (or again, indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: is that something we would like to enable in the Toolforge Jobs Framework? On policy and security A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice to learn two realities:
  1. Kubernetes is capable of doing pretty much anything security-wise and can create highly secure environments.
  2. It does not do so by default; the defaults are deliberately not security-strict.
It kind of made sense to me: Kubernetes is used for a wide range of use cases, and the developers didn't know beforehand which particular setup the default security levels should accommodate. One session in particular covered the most basic security features that should be enabled for any Kubernetes system that would get exposed to random end users. In my opinion, the Toolforge Kubernetes setup was already doing a good job in that regard. To my joy, some sessions referred to the Pod Security Admission mechanism, which is one of the key security features we're about to adopt (when migrating away from Pod Security Policy). I also learned a bit more about Secret resources, their current implementation and how to leverage a combo of CSI and RBAC for a more secure usage of external secrets. Finally, one of the major takeaways from the conference was learning about kyverno and kubeaudit. I was previously aware of OPA Gatekeeper. From the several demos I saw, it was clear to me that kyverno could help us make Toolforge Kubernetes more sustainable by replacing all of our custom admission controllers with it. I already opened a ticket to track this idea, which I'll be proposing to my team soon. Final notes In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I'm really happy that the cloud-native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. List of sessions I attended on the first day: List of sessions I attended on the second day: List of sessions I attended on the third day: The videos have been published on YouTube.

11 April 2023

Aurelien Jarno: New website, or kind of...

For over 15 years, I've hardly made any updates to my website, and it remains low on my priority list. So I made a radical decision to replace it entirely with my blog. The content of the website has been reduced to just two additional pages. But nothing has been lost: nowadays, Wikipedia is a much better platform for sharing knowledge than random websites. And it happens that they already cover all that was on my website about subaquatic diving in French. They also offer a multitude of resources in electronics, including the topics that were on my website: LCD displays, I²C bus, barcodes, parallel ports, serial ports, and DCF77 reception. Finally, if you're in need of Debian QEMU images for various architectures, I recommend the Debian Quick Image Baker pre-baked images page instead.

16 March 2023

Valhalla's Things: Swiss Embroidery Princess Petticoat

Posted on March 16, 2023
a person wearing a blue sleeveless fitted dress with calf-length skirt; there are small ruffles on the armscyes and the hem, and white lace on the collar and just above the hem ruffle, and small white buttons on a partial placket down the center front. A few years ago a friend told me that her usual fabric shop was closing down and had a sale on all remaining stock. While being sad for yet another brick and mortar shop that was going to be missed (at least it was because the owners were retiring, not because it wasn't sustainable anymore), of course I couldn't miss the opportunity. So we drove a few hundred km, had some nice time with a friend that (because of said few hundred km) we rarely see, and spent a few hours looting the corpse... er, helping the shop owner get rid of stock before their retirement. A surprisingly small pile of fabric; everything is blue or black. Among other things there was a cut of lightweight swiss embroidery cotton in blue which may or may not have been enthusiastically grabbed with plans of victorian underwear. It was too nice to be buried under layers and layers of fabric (and I suspect that the embroidery wouldn't feel great directly on the skin under a corset), so the natural fit was something at the corset cover layer, and the fabric was enough for a combination garment of the kind often worn in the later Victorian age to prevent the accumulation of bulk at the waist. It also has the nice advantage that in this time of corrupted morals it is perfectly suitable as outerwear as a nice summer dress. Then life happened, the fabric remained in my stash for a long while, but finally this year I have a good late victorian block that I can adapt, and with spring coming it was a good time to start working on the summer wardrobe. scan from a vintage book with the pattern for a tight fitting jacket. The block I've used comes from The Cutters' Practical Guide to the Cutting of Ladies' Garments and is for a jacket, rather than a bodice, but the bodice block from the same book had a 4 part back, which was too much for this garment. I reduced the ease around the bust a bit, which I believe worked just fine. The main pattern was easy enough to prepare, I just had to add skirt panels with a straight side towards the front and flaring out towards the back, and I did a quick mockup from an old sheet to check the fit (good) and the swish and volume of the skirt (just right at the first attempt!). The mockup was also used to get an idea of a few possible necklines, and I opted for a relatively deep V, and a front opening with a partial placket down to halfway between the waist and the hips. I also opted for a self-fabric ruffle at the hem and armscyes. same dress, same person, from the side, with one hand in the pocket slit. The only design choice left was the pocket situation: I wanted to wear this garment both as underwear (where pockets aren't needed, and add unwanted bulk) and outerwear (where no pockets is not an option), and the fabric felt too thin to support the weight of the contents of a full pocket. So I decided to add slits into the seams, with just a modesty placket, and wear pockets under the dress as needed.
I decided to put the slits between the side and side back panels for two reasons: one is that this way the pockets can sit towards the back, where the fullness of the skirt is supposed to be, rather than under the flat front, and the other one is to keep the seams around the front panel clean, since they are the first ones to be changed when altering a garment for fit. For the same reason, I didn't trim the excess allowance from that seam: it means that it is a bit more bulky, but the fabric is thin enough that it's not really noticeable, and it gives an additional cm for future alterations. Then, as the garment was getting close to being finished I was measuring and storing some old cotton lace I had received as a gift, and there was a length of relatively small lace, and the finish on the neckline was pretty simple and called for embellishment, and who am I to deny embellishment to victorian inspired clothing? A ruffle pleated into a receiving tuck, each pleat is fixed with a pin, and there are a lot of pins. First I had to finish attaching the ruffles, however, and this is when I cursed myself for not using the ruffler foot I have (it would have meant not having selvedges on all seams of the ruffle), and for pleating the ruffle rather than gathering it (I prefer the look of handsewn gathers, but here I'm sewing everything by machine, and that's faster, right? (it probably wasn't)). A metal box full of straight pins. Also, this is where I started to get low on pins, and I had to use the ones from the vintage1 box I've been keeping as decoration in the sewing room. A few long sessions of pinning later, the ruffle was sewn and I could add the lace; I used white thread so that it would be hidden on the right side, but easily visible inside the garment in case I decide to remove or change it later. A few buttons and buttonholes later, the garment was ready, and the only thing left was to edit the step-by-step pictures and publish the pattern: it's now available as #FreeSoftWear on my patterns website. And of course, I had to do a proper swish test of the finished dress with the ruffle, and I'm happy to announce that it was fully passed. a person spinning on herself, the skirt and the ruffle are swishing out. Something in the pocket worn under the dress is causing a bit of bulge on one side. Except, maybe I shouldn't carry heavy items in my pockets when doing it? Oh, well. I have other plans for the same pattern, but they involve making some crochet lace, so I expect I can aim at making them wearable in summer 2024. Now I just have to wait for the weather to be a bit warmer, and then I can start enjoying this one.

  1. ok, even more vintage, since my usual pins come from a plastic box that was probably bought in the 1980s.

6 January 2023

Jonathan McDowell: Finally making use of bpftrace

I am old enough to remember when BPF meant the traditional Berkeley Packet Filter, and was confined to filtering network packets. It's grown into much, much more as eBPF, and getting familiar with it, so that I can add it to the suite of tips and tricks I can call upon, has been on my to-do list for a while. To this end I was lucky enough to attend a live walk through of bpftrace last year. bpftrace is a high level tool that allows the easy creation and execution of eBPF tracers under Linux. Recently I've been working on updating the RetroArch packages in Debian and as I was doing so I realised there was a need to update the quite outdated retroarch-assets package, which contains various icons and images used for the user interface. I wanted to try and re-generate as many of the artefacts as I could, to ensure the proper source was available. However it wasn't always clear which files were actually needed and which were either source or legacy. So I wanted to trace file opens by retroarch and see when it was failing to find files. Traditionally this is something I'd have used strace for, but it seemed like a great opportunity to try out bpftrace. It turns out bpftrace ships with an example, opensnoop.bt, which hooks the open syscall entry + exit and provides details of all files opened on the system. I only wanted to track opens by the retroarch binary that failed, so I made a couple of modifications:
retro-failed-open-snoop.bt
#!/usr/bin/env bpftrace
/*
 * retro-failed-open-snoop - snoop failed opens by RetroArch
 *
 * Based on:
 * opensnoop	Trace open() syscalls.
 *		For Linux, uses bpftrace and eBPF.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 08-Sep-2018	Brendan Gregg	Created this.
 */
BEGIN
{
	printf("Tracing open syscalls... Hit Ctrl-C to end.\n");
	printf("%-6s %-16s %3s %s\n", "PID", "COMM", "ERR", "PATH");
}

tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
	@filename[tid] = args->filename;
}

tracepoint:syscalls:sys_exit_open,
tracepoint:syscalls:sys_exit_openat
/@filename[tid]/
{
	$ret = args->ret;
	$errno = $ret > 0 ? 0 : - $ret;
	if (($ret <= 0) && (strncmp("retroarch", comm, 9) == 0)) {
		printf("%-6d %-16s %3d %s\n", pid, comm, $errno,
		    str(@filename[tid]));
	}
	delete(@filename[tid]);
}

END
{
	clear(@filename);
}
I had to install bpftrace (apt install bpftrace) and then I ran bpftrace -o retro.log retro-failed-open-snoop.bt as root and fired up retroarch as a normal user.
bpftrace failed open log for retroarch
Attaching 6 probes...
Tracing open syscalls... Hit Ctrl-C to end.
PID    COMM             ERR PATH
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/glibc-hwcaps/x86-64-v2/lib
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/x86_64/libpulse
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/libpulsecommon-16.1.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/x86_64/libpulsecomm
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /etc/gcrypt/hwf.deny
3394   retroarch          2 /lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/libgamemode.so.0
3394   retroarch          2 /usr/lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /lib/libgamemode.so
3394   retroarch          2 /usr/lib/libgamemode.so
3394   retroarch          2 /lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /lib/libdecor-0.so
3394   retroarch          2 /usr/lib/libdecor-0.so
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/dri/tls/iris_dri.so
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/glibc-hwcaps/x86-64-v2/libedit.so.
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/libedit.so.2
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /home/noodles/.Xdefaults-udon
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /home/noodles/.XCompose
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/ozone/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds
This was incredibly useful - the only theme image I was missing was disc.png from XMB Monochrome (which lacks SVG source). I also discovered the runtime optional loading of GameMode. This is available in Debian so it was a simple matter to add libgamemode0 to the binary package's Recommends. So, a very basic example of using bpftrace, but a remarkably useful intro to it from my point of view!

19 June 2022

John Goerzen: Pipes, deadlocks, and strace annoyingly fixing them

This is a complex tale I will attempt to make simple(ish). I've (re)learned more than I cared to about the details of pipes, signals, and certain system calls, and the solution is still elusive. For some time now, I have been using NNCP to back up my files. These backups are sent to my backup system, which effectively does this to process them (each ZFS send is piped to a shell script that winds up running this):
gpg -q -d | zstdcat -T0 | zfs receive -u -o readonly=on "$STORE/$DEST"
This processes tens of thousands of zfs sends per week. Recently, having written Filespooler, I switched to sending the backups using Filespooler over NNCP. Now fspl (the Filespooler executable) opens the file for each stream and then connects it to what amounts to this pipeline:
bash -c 'gpg -q -d 2>/dev/null | zstdcat -T0' | zfs receive -u -o readonly=on "$STORE/$DEST"
Actually, to be more precise, it spins up the bash part of it, reads a few bytes from it, and then connects it to the zfs receive. And this works well almost always. In something like 1/1000 of the cases, it deadlocks, and I still don't know why. But I can talk about the journey of trying to figure it out (and maybe some of you will have some ideas). Filespooler is written in Rust, and uses Rust's Command system. Effectively what happens is this:
  1. The fspl process has a File handle which, after forking but before invoking bash, it dup2()s to stdin (see the sketch just after this list).
  2. The connection between bash and zfs receive is a standard Unix pipe.
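For illustration only, here is a rough Python sketch of the plumbing shape just described. Filespooler itself is Rust and uses its Command API, and it also reads a few bytes from bash before handing the stream over, which this sketch omits; the spool path and dataset name are placeholders.

# Rough sketch of the plumbing: the stream file becomes bash's stdin, and
# bash's stdout feeds zfs receive through an ordinary Unix pipe.
import subprocess

with open("/var/spool/fspl/stream.fspl", "rb") as stream:   # placeholder path
    # Stage 1: decrypt + decompress, with the file connected to bash's stdin
    # (the equivalent of the dup2-to-stdin step above).
    decode = subprocess.Popen(
        ["bash", "-c", "gpg -q -d 2>/dev/null | zstdcat -T0"],
        stdin=stream,
        stdout=subprocess.PIPE,
    )

    # Stage 2: zfs receive reads stage 1's stdout via the pipe.
    receive = subprocess.Popen(
        ["zfs", "receive", "-u", "-o", "readonly=on", "tank/backups/host"],
        stdin=decode.stdout,
    )

    # Close the parent's copy of the read end so zfs sees EOF when bash
    # exits; a similar detail is behind the Rust pipe bug mentioned below.
    decode.stdout.close()
    receive.wait()
    decode.wait()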
I cannot get the problem to duplicate when I run the entire thing under strace -f. So I am left trying to peek at it from the outside. What happens if I try to attach to each component with strace -p? So the plot thickens! Why would connecting to zstdcat and zfs receive cause them to actually change behavior? strace works by using the ptrace system call, and ptrace in a number of cases requires sending SIGSTOP to a process. In a complicated set of circumstances, a system call may return EINTR when a SIGSTOP is received, with the idea that the system call should be retried. I can't see, from either zstdcat or zfs, if this is happening, though. So I thought, how about having Filespooler manually copy data from bash to zfs receive in a read/write loop instead of having them connected directly via a pipe? That is, there would be two pipes going there: one where Filespooler reads from the bash command, and one where it writes to zfs. If nothing else, I could instrument it with debugging. And so I did, and I found that when it deadlocked, it was deadlocking on write but with no discernible pattern as to where or when. So I went back to directly connected. In analyzing straces, I found a Rust bug which I reported, in which it fails to close the read end of a pipe in the parent post-fork. However, having implemented a workaround for this, it doesn't prevent the deadlock, so this is orthogonal to the issue at hand. Among the two strange things here are things returning to normal when I attach strace to zstdcat, and things crashing when I attach strace to zfs. I decided to investigate the latter. It turns out that the ZFS code that is reading from stdin during zfs receive is in the kernel module, not userland. Here is the part that is triggering the incomplete stream error:
                int err = zfs_file_read(fp, (char *)buf + done,
                    len - done, &resid);
                if (resid == len - done) {
                        /*
                         * Note: ECKSUM or ZFS_ERR_STREAM_TRUNCATED indicates
                         * that the receive was interrupted and can
                         * potentially be resumed.
                         */
                        err = SET_ERROR(ZFS_ERR_STREAM_TRUNCATED);
                }
resid is an output parameter with the number of bytes remaining from a short read, so in this case, if the read produced zero bytes, then it sets that error. What's zfs_file_read then? It boils down to a thin wrapper around kernel_read(). This winds up calling __kernel_read(), which calls read_iter on the pipe, which is pipe_read(). That's where I don't have the knowledge to get into the weeds right now. So it seems likely to me that the problem has something to do with zfs receive. But, what, and why does it only not work in this one very specific situation, and only so rarely? And why does attaching strace to zstdcat make it all work again? I'm indeed puzzled! Update 2022-06-20: See the followup post which identifies this as likely a kernel bug and explains why this particular use of Filespooler made it easier to trigger.

20 March 2022

Joerg Jaspert: Another shell script moved to rust

Shell? Rust! Not the first shell script I took and made a Rust version of, but probably my largest yet. This time I took my little tm (tmux helper) tool, which is (well, was) a bit more than 600 lines of shell, and converted it to Rust. I have most of the functionality done now; only one major part is missing.

What's tm? tm started as a tiny shell script to make handling tmux easier. The first commit in git was in July 2013, but I started writing and using it in 2011. It started out as a kind-of wrapper around ssh, opening tmux windows with an ssh session to some other hosts. It quickly gained support to open multiple ssh sessions in one window, telling tmux to synchronize input (send input to all targets at once), which is great when you have a set of machines that ought to get the same commands.

tm vs clusterssh / mussh In spirit it is similar to clusterssh or mussh, allowing you to run the same command on many hosts at the same time. clusterssh sets out to open new terminals (xterm) per host and gives you an input line that it sends everywhere. mussh appears to take your command and then send it to all the hosts. Both have disadvantages in my opinion: clusterssh opens lots of xterm windows and you can not easily switch between multiple sessions; mussh just seems to send things over ssh and be done. tm instead just creates a tmux session, telling it to ssh to the targets, possibly setting the tmux option to send input to all panes. And leaves all the rest of the handling to tmux. So you can
  • detach a session and reattach later easily,
  • use tmux's great builtin support for copy/paste,
  • see all output, modify things even for one machine only,
  • zoom in to one machine that needs just ONE bit different (cssh can do this too),
  • let colleagues also connect to your tmux session, when needed,
  • easily add more machines to the mix, if needed,
  • and all the other extra features tmux brings.

More tm tm also supports just attaching to existing sessions as well as killing sessions, mostly for laziness (less to type than using tmux directly). At some point tm gained support for setting up sessions according to some 'session file'. It knows two formats now; one is simple and mostly a list of hostnames to open synchronized sessions for. This may contain LIST commands, which make tm execute that command, with the expected output being a list of hostnames (or more LIST commands) for the session. That, combined with the replacement part, lets us have one config file that opens a set of VMs our Ganeti runs, based on tags. It is simply a LIST command asking for VMs tagged with the replacement arg and up. Very handy. Or also all VMs on host X. The second format is basically 'free form tmux commands': mostly a collection of command-line tmux calls, just with the leading tmux dropped. Both of them support a crude variable replacement.

Conversion to Rust Some while ago I started playing with Rust and it somehow 'clicked'; I do like it. My local git tells me that I tried starting off with Go in 2017, but that apparently did not work out. Funny, as everything I read says that Rust ought to be harder to learn. So by now I have most of the functionality implemented in the Rust version, even if I am sure that the code isn't a good Rust example. I'm learning, after all, and have already adjusted big parts of it, multiple times, whenever I learn (and understand) something more - and am also sure that this will happen again.

Compatibility with old tm It turns out that my goal of staying compatible with the behaviour of the old shell script does make some things rather complicated. For example, the LIST commands in session config files - in shell I just execute the commands, and the shell deals with variable/parameter expansion; I just set IFS to newline only and read in what I get back. Simple, because the shell is doing a lot of things for me. Now, in Rust, it is a different thing altogether:
  • Properly splitting the line into shell words, taking care of quoting (can't simply split on whitespace) (there is shlex)
  • Expanding specials like ~ and $HOME (there is home_dir).
  • Supporting environment variables in general; tm has some that adjust its behaviour, which the shell can use globally. I used lazy_static for a similar effect - they aren't going to change at runtime ever, anyway.
Properly supporting the commandline arguments also turned out to be a bit more work. Rust apparently has multiple crates supporting this; I settled on clap, but as tm supports getopts-style as well as free-form arguments (subcommands in clap), it takes a bit to get that interpreted right.

Speed Most of the time speed is entirely unimportant in a tool like tm (opening a tmux with one to a few ssh connections to some places is not exactly hard or time consuming), but there are situations where one can notice that it's calling out to tmux over and over again, for every single bit to do, and that just takes time: configurations that open sessions to 20 or more hosts at the same time especially lag in setup time. (My largest setup goes to 443 panes in one window.) The compiled Rust version is so much faster there, it's just great. Nice side effect, that is. And yes, in the end it is also only driving tmux; still, it takes less than half the time to do so.

Code, Fun parts As this is still me learning to write Rust, I am sure the code has lots to improve. Some of it I will surely find on my own, but if you have time, I love PRs (or just mails with hints).

Github Also the first time I used GitHub Actions to see how it goes. Letting it build, test, run clippy and also run a code coverage tool (yay, more than 50% covered!) on it. I am unsure my tests are good, as I am not used to writing tests for code, but hey, coverage!

Up next I do have to implement the last missing feature, which is reading the other config file format. A little scared, as that means somehow translating those lines into correct calls within the tmux_interface I am using, and I'm not sure that is easy. I could be bad and just shell out to tmux on it all the time, but somehow I don't like the thought of doing that. Maybe (ab)using the control mode, but then, why would I use tmux_interface? So I'm trying to handle it with that first. Afterwards I want to gain a new command, to save existing sessions and be able to recreate them easily. Shouldn't be too hard, tmux has a way to get at that info, somewhere.

24 December 2021

Vincent Bernat: Automatic login with startx and systemd

If your workstation is using full-disk encryption, you may want to jump directly to your desktop environment after entering the passphrase to decrypt the disk. Many display managers like GDM and LightDM have an autologin feature. However, only GDM can run Xorg with standard user privileges. Here is an alternative using startx and a systemd service:
[Unit]
Description=X11 session for bernat
After=graphical.target systemd-user-sessions.service
[Service]
User=bernat
WorkingDirectory=~
PAMName=login
Environment=XDG_SESSION_TYPE=x11
TTYPath=/dev/tty8
StandardInput=tty
UnsetEnvironment=TERM
UtmpIdentifier=tty8
UtmpMode=user
StandardOutput=journal
ExecStartPre=/usr/bin/chvt 8
ExecStart=/usr/bin/startx -- vt8 -keeptty -verbose 3 -logfile /dev/null
Restart=no
[Install]
WantedBy=graphical.target
Let me explain each block: Drop this unit in /etc/systemd/system/x11-autologin.service and enable it with systemctl enable x11-autologin.service. Xorg is now running rootless and logging into the journal. After using it for a few months, I didn't notice any regression compared to LightDM with autologin.

  1. For more information on how logind provides access to devices, see this blog post. The method names do not match the current implementation, but the concepts are still correct. Xorg takes control of the session when the TTY is active.
  2. Xorg could change the type of the session itself after taking control of it, but it does not.
  3. There is some code in Xorg to do that, but it is executed too late and fails with: xf86OpenConsole: VT_ACTIVATE failed: Operation not permitted.

6 June 2021

Evgeni Golov: Controlling Somfy roller shutters using an ESP32 and ESPHome

Our house has solar powered, remote controllable roller shutters on the roof windows, built by the German company named HEIM & HAUS. However, when you look closely at the remote control or the shutter motor, you'll see another brand name: SIMU. As the shutters don't have any wiring inside the house, the only way to control them is via the remote interface. So let's go on the Internet and see how one can do that, shall we? ;) First thing we learn is that SIMU remote stuff is just re-branded Somfy. Great, another name! Looking further we find that Somfy uses some obscure protocol to prevent (replay) attacks (spoiler: it doesn't!) and there are tools for RTL-SDR and Arduino available. That's perfect! Always sniff with RTL-SDR first! Given the two re-brandings in the supply chain, I wasn't 100% sure our shutters really use the same protocol. So the first "hack" was to listen and decrypt the communication using RTL-SDR:
$ git clone https://github.com/dimhoff/radio_stuff
$ cd radio_stuff
$ make -C converters am_to_ook
$ make -C decoders decode_somfy
$ rtl_fm -M am -f 433.42M -s 270K | ./am_to_ook -d 10 -t 2000 - | ./decode_somfy
<press some buttons on the remote>
The output contains the buttons I pressed, but also the id of the remote and the command counter (which is supposed to prevent replay attacks). At this point I could just use the id and the counter to send own commands, but if I'd do that too often, the real remote would stop working, as its counter won't increase and the receiver will drop the commands when the counters differ too much. But that's good enough for now. I know I'm looking for the right protocol at the right frequency. As the end result should be an ESP32, let's move on! Acquiring the right hardware Contrary to an RTL-SDR, one usually does not have a spare ESP32 with 433MHz radio at home, so I went shopping: a NodeMCU-32S clone and a CC1101. The CC1101 is important as most 433MHz chips for Arduino/ESP only work at 433.92MHz, but Somfy uses 433.42MHz and using the wrong frequency would result in really bad reception. The CC1101 is essentially an SDR, as you can tune it to a huge spectrum of frequencies. Oh and we need some cables, a bread board, the usual stuff ;) The wiring is rather simple: ESP32 wiring for a CC1101 And the end result isn't too beautiful either, but it works: ESP32 and CC1101 in a simple case Acquiring the right software In my initial research I found an Arduino sketch and was totally prepared to port it to ESP32, but luckily somebody already did that for me! Even better, it's explicitly using the CC1101. Okay, okay, I cheated, I actually ordered the hardware after I found this port and the reference to CC1101. ;) As I am using ESPHome for my ESPs, the idea was to add a "Cover" that's controlling the shutters to it. Writing some C++, how hard can it be? Turns out, not that hard. You can see the code in my GitHub repo. It consists of two (relevant) files: somfy_cover.h and somfy.yaml. somfy_cover.h essentially wraps the communication with the Somfy_Remote_Lib library into an almost boilerplate Custom Cover for ESPHome. There is nothing too fancy in there. The only real difference to the "Custom Cover" example from the documentation is the split into SomfyESPRemote (which inherits from Component) and SomfyESPCover (which inherits from Cover) -- this is taken from the Custom Sensor documentation and allows me to define one "remote" that controls multiple "covers" using the add_cover function. The first two params of the function are the NVS name and key (think database table and row), and the third is the rolling code of the remote (stored in somfy_secrets.h, which is not in Git). In ESPHome a Cover shall define its properties as CoverTraits. Here we call set_is_assumed_state(true), as we don't know the state of the shutters - they could have been controlled using the other (real) remote - and setting this to true allows issuing open/close commands at all times. We also call set_supports_position(false) as we can't tell the shutters to move to a specific position. The one additional feature compared to a normal Cover interface is the program function, which allows to call the "program" command so that the shutters can learn a new remote. somfy.yaml is the ESPHome "configuration", which contains information about the used hardware, WiFi credentials etc. Again, mostly boilerplate. The interesting parts are the loading of the additional libraries and attaching the custom component with multiple covers and the additional PROG switches:
esphome:
  name: somfy
  platform: ESP32
  board: nodemcu-32s
  libraries:
    - SmartRC-CC1101-Driver-Lib@2.5.6
    - Somfy_Remote_Lib@0.4.0
    - EEPROM
  includes:
    - somfy_secrets.h
    - somfy_cover.h
 
cover:
  - platform: custom
    lambda: |-
      auto somfy_remote = new SomfyESPRemote();
      somfy_remote->add_cover("somfy", "badezimmer", SOMFY_REMOTE_BADEZIMMER);
      somfy_remote->add_cover("somfy", "kinderzimmer", SOMFY_REMOTE_KINDERZIMMER);
      App.register_component(somfy_remote);
      return somfy_remote->covers;
    covers:
      - id: "somfy"
        name: "Somfy Cover"
      - id: "somfy2"
        name: "Somfy Cover2"
switch:
  - platform: template
    name: "PROG"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy))->program();
  - platform: template
    name: "PROG2"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy2))->program();
The switch to trigger the program mode took me a bit. As the Cover interface of ESPHome does not offer any additional functions besides movement control, I first wrote code to trigger "program" when "stop" was pressed three times in a row, but that felt really cumbersome and also had the side effect that the remote would send more than needed, sometimes confusing the shutters. I then decided to have a separate button (well, switch) for that, but the compiler yelled at me that I can't call program on a Cover as it does not have such a function. Turns out you need to explicitly cast to SomfyESPCover and then it works, even if the code becomes really readable, NOT. Oh and as the switch does not have any code to actually change/report state, it effectively acts as a button that can be pressed. At this point we can just take an existing remote, press PROG for 5 seconds, see the blinds move shortly up and down a bit, press PROG on our new ESP32 remote, and the shutters will learn the new remote. And thanks to the awesome integration of ESPHome into HomeAssistant, this instantly shows up as a new controllable cover there too. Future Additional Work I started writing this post about a year ago, and the initial implementation had some room for improvement. More than one remote: the initial code only created one remote and one cover element. Sure, we could attach that to all shutters (there are 4 of them), but we really want to be able to control them separately. Thankfully I managed to read enough ESPHome docs, and learned how to operate std::vector to make the code dynamically accept new shutters. Using the ESP32's NVS: the ESP32 has a non-volatile key-value storage, which is much nicer than throwing bits at an emulated EEPROM. The first library I used for that explicitly used EEPROM storage and it would have required quite some hacking to make it work with NVS. Thankfully the library I am using now has a pluggable storage interface, and I could just write the NVS backend myself, and upstream now supports that. Yay open-source! Remaining issues The real state is unknown: as noted above, the ESP does not know the real state of the shutters: a command could have been lost in transmission (the Somfy protocol is send-only, there is no feedback) or the shutters might have been controlled by another remote. At least the second part could be solved by listening all the time and trying to decode commands heard over the air, but I don't think this is worth the time -- worst that can happen is that a closed (opened) shutter receives another close (open) command, and that is harmless as they have integrated endstops and know that they should not move further. Can't program new remotes with the ESP only: to program new remotes, one has to press the "PROG" button for 5 seconds. This was not exposed in the old library, but the new one does support "long press"; I would just need to add another ugly switch to the config, and I currently don't plan to do so, as I do have working remotes for the case where I need to learn a new one.

18 February 2021

Jonathan McDowell: Hacking and Bricking the EE Osprey 2 Mini

I've mentioned in the past my twisted EE network setup from when I moved in to my current house. The 4GEE WiFi Mini (also known as the EE Osprey 2 Mini or the EE40VB, and actually a rebadged Alcatel Y853VB) has been sitting unused since then, so I figured I'd see about trying to get a shell on it. TL;DR: Of course it's running Linux, there's a couple of test points internally which bring out the serial console, but after finding those and logging in I discovered it's running ADB on port 5555, quite happily available without authentication both via wifi and the USB port. So if you have physical or local network access, instant root shell. Well done, folks. And then I bricked it before I could do anything more interesting. There's a lack of information about this device out there - most of the links I can find are around removing the SIM lock - so I thought I'd document the pieces I found just in case anyone else is trying to figure it out. It's based around a Qualcomm MDM9607 SoC, paired with 64M RAM and 256M NAND flash. Wifi is via an RTL8192ES. Kernel is 3.18.20. Busybox is v1.23.1. It's running dnsmasq but I didn't grab the version. Of course there's no source or offer of source provided. Taking it apart is fairly easy. There's a single screw to remove, just beside the SIM slot. The coloured rim can then be carefully pried away from the back, revealing the battery. There are then 4 screws in the corners which need to be removed in order to be able to lift out the actual PCB and gain access to the serial console test points. EE40VB PCB serial console test points My mistake was going poking around trying to figure out where the updates are downloaded from - I know I'm running a slightly older release than what's current, and the device can do an automatic download + update. Top tip: don't run Jrdrecovery. It'll error on not finding /cache/update.zip and wipe the main partition anyway. That'll leave you in a boot loop where the device boots the recovery partition, which tries to install /cache/update.zip, which of course still doesn't exist. So. Where next? First, I need to get the device into a state where I can actually do something other than watch it boot into recovery, fail to flash and reboot. Best guess at present is to try and get it to enter the Qualcomm EDL (Emergency Download) mode. That might be possible with a custom USB cable that grounds D+ on boot. Alternatively I need to probe some of the other test points on the PCB and see if grounding any of those helps enter EDL mode. I then need a suitable firehose OEM-signed programmer image. And then I need to actually get hold of a proper EE40VB firmware image, either via one of the OTA update files or possibly via an Alcatel ADSU image (though I have no idea how to get hold of one, other than by posting to a random GSM device forum and hoping for the kindness of strangers). More updates if/when I make progress.
Qualcomm bootloader log
Format: Log Type - Time(microsec) - Message - Optional Info
Log Type: B - Since Boot(Power On Reset),  D - Delta,  S - Statistic
S - QC_IMAGE_VERSION_STRING=BOOT.BF.3.1.2-00053
S - IMAGE_VARIANT_STRING=LAATANAZA
S - OEM_IMAGE_VERSION_STRING=linux3
S - Boot Config, 0x000002e1
B -    105194 - SBL1, Start
D -     61885 - QSEE Image Loaded, Delta - (451964 Bytes)
D -     30286 - RPM Image Loaded, Delta - (151152 Bytes)
B -    459330 - Roger:boot_jrd_oem_main
B -    461526 - Welcome to key_check_poweron!!!
B -    466436 - REG0x00, rc=47
B -    469120 - REG0x01, rc=1f
B -    472018 - REG0x02, rc=1c
B -    474885 - REG0x03, rc=47
B -    477782 - REG0x04, rc=b2
B -    480558 - REG0x05, rc=
B -    483272 - REG0x06, rc=9e
B -    486139 - REG0x07, rc=
B -    488854 - REG0x08, rc=a4
B -    491721 - REG0x09, rc=80
B -    494130 - bq24295_probe: vflt/vsys/vprechg=0mV/0mV/0mV, tprechg/tfastchg=0Min/0Min, [0C, 0C]
B -    511546 - come to calculate vol and temperature!!
B -    511637 - ##############battery_core_convert_vntc: NTC_voltage=1785690
B -    517280 - battery_core_convert_vntc: <-44C, 1785690uV>, present=0
B -    529358 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    534360 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    539636 - bq24295_enable_charge: setting=0, chg_enable=-1, otg_enable=0
B -    546072 - bq24295_enable_charging: enable_charging=0
B -    552172 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    561566 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    567056 - bq24295_enable_charge: setting=0, chg_enable=0, otg_enable=0
B -    579286 - come to calculate vol and temperature!!
B -    579378 - ##############battery_core_convert_vntc: NTC_voltage=1785777
B -    585539 - battery_core_convert_vntc: <-44C, 1785777uV>, present=0
B -    597617 - charge_main: battery is plugout!!
B -    597678 - Welcome to pca955x_probe!!!
B -    601063 - pca955x_probe: PCA955X probed successfully!
D -     27511 - APPSBL Image Loaded, Delta - (179348 Bytes)
B -    633271 - QSEE Execution, Start
D -       213 - QSEE Execution, Delta
B -    638944 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting JRD RECOVERY BOOT
B -    650107 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting  RECOVERY BOOT
B -    653218 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>read_buf[0] == 0
B -    659044 - SBL1, End
D -    556137 - SBL1, Delta
S - Throughput, 2000 KB/s  (782884 Bytes,  278155 us)
S - DDR Frequency, 240 MHz
littlekernel aboot log
Android Bootloader - UART_DM Initialized!!!
[0] welcome to lk
[0] SCM call: 0x2000601 failed with :fffffffc
[0] Failed to initialize SCM
[10] platform_init()
[10] target_init()
[10] smem ptable found: ver: 4 len: 17
[10] ERROR: No devinfo partition found
[10] Neither 'config' nor 'frp' partition found
[30] voltage of NTC  is 1789872!
[30] voltage of BAT  is 3179553!
[30] usb present is 1!
[30] Loading (boot) image (4171776): start
[530] Loading (boot) image (4171776): done
[540] DTB Total entry: 25, DTB version: 3
[540] Using DTB entry 0x00000129/00010000/0x00000008/0 for device 0x00000129/00010000/0x00010008/0
[560] JRD_CHG_OFF_FEATURE!
[560] come to jrd_target_pause_for_battery_charge!
[570] power_on_status.hard_reset = 0x0
[570] power_on_status.smpl = 0x0
[570] power_on_status.rtc = 0x0
[580] power_on_status.dc_chg = 0x0
[580] power_on_status.usb_chg = 0x0
[580] power_on_status.pon1 = 0x1
[590] power_on_status.cblpwr = 0x0
[590] power_on_status.kpdpwr = 0x0
[590] power_on_status.bugflag = 0x0
[590] cmdline: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=b
[620] Updating device tree: start
[720] Updating device tree: done
[720] booting linux @ 0x80008000, ramdisk @ 0x80008000 (0), tags/device tree @ 0x81e00000
Linux kernel console boot log
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.18.20 (linux3@linux3) (gcc version 4.9.2 (GCC) ) #1 PREEMPT Thu Aug 10 11:57:07 CST 2017
[    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c53c7d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] Machine model: Qualcomm Technologies, Inc. MDM 9607 MTP
[    0.000000] Early serial console at I/O port 0x0 (options '')
[    0.000000] bootconsole [uart0] enabled
[    0.000000] Reserved memory: reserved region for node 'modem_adsp_region@0': base 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: reserved region for node 'external_image_region@0': base 0x87c00000, size 4 MiB
[    0.000000] Removed memory: created DMA memory pool at 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: initialized node modem_adsp_region@0, compatible id removed-dma-pool
[    0.000000] Removed memory: created DMA memory pool at 0x87c00000, size 4 MiB
[    0.000000] Reserved memory: initialized node external_image_region@0, compatible id removed-dma-pool
[    0.000000] cma: Reserved 4 MiB at 0x87800000
[    0.000000] Memory policy: Data cache writeback
[    0.000000] CPU: All CPU(s) started in SVC mode.
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 17152
[    0.000000] Kernel command line: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=bulk_read root=ubi0:rootfs ubi.mtd=16
[    0.000000] PID hash table entries: 512 (order: -1, 2048 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
[    0.000000] Memory: 54792K/69632K available (5830K kernel code, 399K rwdata, 2228K rodata, 276K init, 830K bss, 14840K reserved)
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
[    0.000000]     vmalloc : 0xc8800000 - 0xff000000   ( 872 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xc8000000   ( 128 MB)
[    0.000000]     modules : 0xbf000000 - 0xc0000000   (  16 MB)
[    0.000000]       .text : 0xc0008000 - 0xc07e6c38   (8060 kB)
[    0.000000]       .init : 0xc07e7000 - 0xc082c000   ( 276 kB)
[    0.000000]       .data : 0xc082c000 - 0xc088fdc0   ( 400 kB)
[    0.000000]        .bss : 0xc088fe84 - 0xc095f798   ( 831 kB)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] Preemptible hierarchical RCU implementation.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] mpm_init_irq_domain(): Cannot find irq controller for qcom,gpio-parent
[    0.000000] MPM 1 irq mapping errored -517
[    0.000000] Architected mmio timer(s) running at 19.20MHz (virt).
[    0.000011] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 3579139424256ns
[    0.007975] Switching to timer-based delay loop, resolution 52ns
[    0.013969] Switched to clocksource arch_mem_counter
[    0.019687] Console: colour dummy device 80x30
[    0.023344] Calibrating delay loop (skipped), value calculated using timer frequency.. 38.40 BogoMIPS (lpj=192000)
[    0.033666] pid_max: default: 32768 minimum: 301
[    0.038411] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.044902] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.052445] CPU: Testing write buffer coherency: ok
[    0.057057] Setting up static identity map for 0x8058aac8 - 0x8058ab20
[    0.064242]
[    0.064242] **********************************************************
[    0.071251] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.077817] **                                                      **
[    0.084302] ** trace_printk() being used. Allocating extra memory.  **
[    0.090781] **                                                      **
[    0.097320] ** This means that this is a DEBUG kernel and it is     **
[    0.103802] ** unsafe for produciton use.                           **
[    0.110339] **                                                      **
[    0.116850] ** If you see this message and you are not debugging    **
[    0.123333] ** the kernel, report this immediately to your vendor!  **
[    0.129870] **                                                      **
[    0.136380] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.142865] **********************************************************
[    0.150225] MSM Memory Dump base table set up
[    0.153739] MSM Memory Dump apps data table set up
[    0.168125] VFP support v0.3: implementor 41 architecture 2 part 30 variant 7 rev 5
[    0.176332] pinctrl core: initialized pinctrl subsystem
[    0.180930] regulator-dummy: no parameters
[    0.215338] NET: Registered protocol family 16
[    0.220475] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.284034] cpuidle: using governor ladder
[    0.314026] cpuidle: using governor menu
[    0.344024] cpuidle: using governor qcom
[    0.355452] msm_watchdog b017000.qcom,wdt: wdog absent resource not present
[    0.361656] msm_watchdog b017000.qcom,wdt: MSM Watchdog Initialized
[    0.371373] irq: no irq domain found for /soc/pinctrl@1000000 !
[    0.381268] spmi_pmic_arb 200f000.qcom,spmi: PMIC Arb Version-2 0x20010000
[    0.389733] platform 4080000.qcom,mss: assigned reserved memory node modem_adsp_region@0
[    0.397409] mem_acc_corner: 0 <--> 0 mV
[    0.401937] hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.
[    0.408966] hw-breakpoint: maximum watchpoint size is 8 bytes.
[    0.416287] __of_mpm_init(): MPM driver mapping exists
[    0.420940] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.427235] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.432977] smd_open() before smd_init()
[    0.437544] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.445730] smd_channel_probe_now: allocation table not initialized
[    0.453100] mdm9607_s1: 1050 <--> 1350 mV at 1225 mV normal idle
[    0.458566] spm_regulator_probe: name=mdm9607_s1, range=LV, voltage=1225000 uV, mode=AUTO, step rate=4800 uV/us
[    0.468817] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.475353] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.481345] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.490124] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.497342] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.504979] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.513125] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.518335] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.524478] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.531111] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.536788] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.542886] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.549618] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.555202] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.561374] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.570613] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.583049] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.596926] msm_thermal:msm_thermal_dev_probe Failed reading node=/soc/qcom,msm-thermal, key=qcom,online-hotplug-core. err:-517
[    0.609370] sps:sps is ready.
[    0.613137] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.619020] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.625773] mdm9607_s2: 750 <--> 1275 mV at 750 mV normal idle
[    0.631584] mdm9607_s3_level: 0 <--> 0 mV at 0 mV normal idle
[    0.637085] mdm9607_s3_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.643092] mdm9607_s3_floor_level: 0 <--> 0 mV at 0 mV normal idle
[    0.649512] mdm9607_s3_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.655750] mdm9607_s4: 1800 <--> 1950 mV at 1800 mV normal idle
[    0.661791] mdm9607_l1: 1250 mV normal idle
[    0.666090] mdm9607_l2: 1800 mV normal idle
[    0.670276] mdm9607_l3: 1800 mV normal idle
[    0.674541] mdm9607_l4: 3075 mV normal idle
[    0.678743] mdm9607_l5: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.684904] mdm9607_l6: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.690892] mdm9607_l7: 1700 <--> 1900 mV at 1700 mV normal idle
[    0.697036] mdm9607_l8: 1800 mV normal idle
[    0.701238] mdm9607_l9: 1200 <--> 1250 mV at 1200 mV normal idle
[    0.707367] mdm9607_l10: 1050 mV normal idle
[    0.711662] mdm9607_l11: 1800 mV normal idle
[    0.716089] mdm9607_l12_level: 0 <--> 0 mV at 0 mV normal idle
[    0.721717] mdm9607_l12_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.727946] mdm9607_l12_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.734099] mdm9607_l12_floor_lebel: 0 <--> 0 mV at 0 mV normal idle
[    0.740706] mdm9607_l13: 1800 <--> 2850 mV at 2850 mV normal idle
[    0.746883] mdm9607_l14: 2650 <--> 3000 mV at 2650 mV normal idle
[    0.752515] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.759036] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.765807] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.771809] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.780586] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.787808] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.795443] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.803094] cpr_init_cpr_parameters: apc_corner: up threshold = 2, down threshold = 3
[    0.810752] cpr_init_cpr_parameters: apc_corner: CPR is enabled by default.
[    0.817687] cpr_init_cpr_efuse: apc_corner: [row:65] = 0x15000277277383
[    0.824272] cpr_init_cpr_efuse: apc_corner: CPR disable fuse = 0
[    0.830225] cpr_init_cpr_efuse: apc_corner: Corner[1]: ro_sel = 0, target quot = 631
[    0.837976] cpr_init_cpr_efuse: apc_corner: Corner[2]: ro_sel = 0, target quot = 631
[    0.845703] cpr_init_cpr_efuse: apc_corner: Corner[3]: ro_sel = 0, target quot = 899
[    0.853592] cpr_config: apc_corner: Timer count: 0x17700 (for 5000 us)
[    0.860426] apc_corner: 0 <--> 0 mV
[    0.864044] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.869261] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.875492] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.882225] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.887775] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.893941] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.900719] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.906256] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.912430] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.919472] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.932372] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.946361] msm_thermal:get_kernel_cluster_info CPU0 topology not initialized.
[    0.953824] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.960300] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.968533] msm_thermal:vdd_restriction_reg_init Defer vdd rstr freq init.
[    0.975846] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.982219] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.991378] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.997544] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    1.013642] qcom,gcc-mdm9607 1800000.qcom,gcc: Registered GCC clocks
[    1.019451] clock-a7 b010008.qcom,clock-a7: Speed bin: 4 PVS Version: 0
[    1.025693] a7ssmux: set OPP pair(400000000 Hz: 1 uV) on cpu0
[    1.031314] a7ssmux: set OPP pair(1305600000 Hz: 7 uV) on cpu0
[    1.038805] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    1.043587] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.052935] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.062006] irq: no irq domain found for /soc/wcd9xxx-irq !
[    1.069884] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    1.074814] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.083716] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.093850] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    1.098889] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.107779] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.167871] KPI: Bootloader start count = 24097
[    1.171364] KPI: Bootloader end count = 48481
[    1.175855] KPI: Bootloader display count = 3884474147
[    1.180825] KPI: Bootloader load kernel count = 16420
[    1.185905] KPI: Kernel MPM timestamp = 105728
[    1.190286] KPI: Kernel MPM Clock frequency = 32768
[    1.195209] socinfo_print: v0.10, id=297, ver=1.0, raw_id=72, raw_ver=0, hw_plat=8, hw_plat_ver=65536
[    1.195209]  accessory_chip=0, hw_plat_subtype=0, pmic_model=65539, pmic_die_revision=131074 foundry_id=0 serial_number=2120983948
[    1.216731] sdcard_ext_vreg: no parameters
[    1.220555] rome_vreg: no parameters
[    1.224133] emac_lan_vreg: no parameters
[    1.228177] usbcore: registered new interface driver usbfs
[    1.233156] usbcore: registered new interface driver hub
[    1.238578] usbcore: registered new device driver usb
[    1.244507] cpufreq: driver msm up and running
[    1.248425] ION heap system created
[    1.251895] msm_bus_fabric_init_driver
[    1.262563] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0 Power-on reason: Triggered from PON1 (secondary PMIC) and 'cold' boot
[    1.273747] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0: Power-off reason: Triggered from UVLO (Under Voltage Lock Out)
[    1.285430] input: qpnp_pon as /devices/virtual/input/input0
[    1.291246] PMIC@SID0: PM8019 v2.2 options: 3, 2, 2, 2
[    1.296706] Advanced Linux Sound Architecture Driver Initialized.
[    1.302493] Add group failed
[    1.305291] cfg80211: Calling CRDA to update world regulatory domain
[    1.311216] cfg80211: World regulatory domain updated:
[    1.317109] Switched to clocksource arch_mem_counter
[    1.334091] cfg80211:  DFS Master region: unset
[    1.337418] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[    1.354087] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.361055] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.370545] NET: Registered protocol family 2
[    1.374082] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm), (N/A)
[    1.381851] cfg80211:   (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.389876] cfg80211:   (5250000 KHz - 5330000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.397857] cfg80211:   (5490000 KHz - 5710000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.405841] cfg80211:   (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.413795] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm), (N/A)
[    1.422355] TCP established hash table entries: 1024 (order: 0, 4096 bytes)
[    1.428921] TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
[    1.435192] TCP: Hash tables configured (established 1024 bind 1024)
[    1.441528] TCP: reno registered
[    1.444738] UDP hash table entries: 256 (order: 0, 4096 bytes)
[    1.450521] UDP-Lite hash table entries: 256 (order: 0, 4096 bytes)
[    1.456950] NET: Registered protocol family 1
[    1.462779] futex hash table entries: 256 (order: -1, 3072 bytes)
[    1.474555] msgmni has been set to 115
[    1.478551] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.485041] io scheduler noop registered
[    1.488818] io scheduler deadline registered
[    1.493200] io scheduler cfq registered (default)
[    1.502142] msm_rpm_log_probe: OK
[    1.506717] msm_serial_hs module loaded
[    1.509803] msm_serial_hsl_probe: detected port #0 (ttyHSL0)
[    1.515324] AXI: get_pdata(): Error: Client name not found
[    1.520626] AXI: msm_bus_cl_get_pdata(): client has to provide missing entry for successful registration
[    1.530171] msm_serial_hsl_probe: Bus scaling is disabled
[    1.535696] 78b3000.serial: ttyHSL0 at MMIO 0x78b3000 (irq = 153, base_baud = 460800)
[    1.544155] msm_hsl_console_setup: console setup on port #0
[    1.548727] console [ttyHSL0] enabled
[    1.548727] console [ttyHSL0] enabled
[    1.556014] bootconsole [uart0] disabled
[    1.556014] bootconsole [uart0] disabled
[    1.564212] msm_serial_hsl_init: driver initialized
[    1.578450] brd: module loaded
[    1.582920] loop: module loaded
[    1.589183] sps: BAM device 0x07984000 is not registered yet.
[    1.594234] sps:BAM 0x07984000 is registered.
[    1.598072] msm_nand_bam_init: msm_nand_bam_init: BAM device registered: bam_handle 0xc69f6400
[    1.607103] sps:BAM 0x07984000 (va:0xc89a0000) enabled: ver:0x18, number of pipes:7
[    1.616588] msm_nand_parse_smem_ptable: Parsing partition table info from SMEM
[    1.622805] msm_nand_parse_smem_ptable: SMEM partition table found: ver: 4 len: 17
[    1.630391] msm_nand_version_check: nand_major:1, nand_minor:5, qpic_major:1, qpic_minor:5
[    1.638642] msm_nand_scan: NAND Id: 0x1590aa98 Buswidth: 8Bits Density: 256 MByte
[    1.646069] msm_nand_scan: pagesize: 2048 Erasesize: 131072 oobsize: 128 (in Bytes)
[    1.653676] msm_nand_scan: BCH ECC: 8 Bit
[    1.657710] msm_nand_scan: CFG0: 0x290408c0,           CFG1: 0x0804715c
[    1.657710]             RAWCFG0: 0x2b8400c0,        RAWCFG1: 0x0005055d
[    1.657710]           ECCBUFCFG: 0x00000203,      ECCBCHCFG: 0x42040d10
[    1.657710]           RAWECCCFG: 0x42000d11, BAD BLOCK BYTE: 0x000001c5
[    1.684101] Creating 17 MTD partitions on "7980000.nand":
[    1.689447] 0x000000000000-0x000000140000 : "sbl"
[    1.694867] 0x000000140000-0x000000280000 : "mibib"
[    1.699560] 0x000000280000-0x000000e80000 : "efs2"
[    1.704408] 0x000000e80000-0x000000f40000 : "tz"
[    1.708934] 0x000000f40000-0x000000fa0000 : "rpm"
[    1.713625] 0x000000fa0000-0x000001000000 : "aboot"
[    1.718582] 0x000001000000-0x0000017e0000 : "boot"
[    1.723281] 0x0000017e0000-0x000002820000 : "scrub"
[    1.728174] 0x000002820000-0x000005020000 : "modem"
[    1.732968] 0x000005020000-0x000005420000 : "rfbackup"
[    1.738156] 0x000005420000-0x000005820000 : "oem"
[    1.742770] 0x000005820000-0x000005f00000 : "recovery"
[    1.747972] 0x000005f00000-0x000009100000 : "cache"
[    1.752787] 0x000009100000-0x000009a40000 : "recoveryfs"
[    1.758389] 0x000009a40000-0x00000aa40000 : "cdrom"
[    1.762967] 0x00000aa40000-0x00000ba40000 : "jrdresource"
[    1.768407] 0x00000ba40000-0x000010000000 : "system"
[    1.773239] msm_nand_probe: NANDc phys addr 0x7980000, BAM phys addr 0x7984000, BAM IRQ 164
[    1.781074] msm_nand_probe: Allocated DMA buffer at virt_addr 0xc7840000, phys_addr 0x87840000
[    1.791872] PPP generic driver version 2.4.2
[    1.801126] cnss_sdio 87a00000.qcom,cnss-sdio: CNSS SDIO Driver registered
[    1.807554] msm_otg 78d9000.usb: msm_otg probe
[    1.813333] msm_otg 78d9000.usb: OTG regs = c88f8000
[    1.820702] gbridge_init: gbridge_init successs.
[    1.826344] msm_otg 78d9000.usb: phy_reset: success
[    1.830294] qcom,qpnp-rtc qpnp-rtc-c7307000: rtc core: registered qpnp_rtc as rtc0
[    1.838474] i2c /dev entries driver
[    1.842459] unable to find DT imem DLOAD mode node
[    1.846588] unable to find DT imem EDLOAD mode node
[    1.851332] unable to find DT imem dload-type node
[    1.856921] bq24295-charger 4-006b: bq24295 probe enter
[    1.861161] qcom,iterm-ma = 128
[    1.864476] bq24295_otg_vreg: no parameters
[    1.868502] charger_core_register: Charger Core Version 5.0.0(Built at 20151202-21:36)!
[    1.877007] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0x3 (ok)
[    1.885559] bq24295-charger 4-006b: bq24295_set_bhot_mode 3
[    1.890150] bq24295-charger 4-006b: power_good is 1,vbus_stat is 2
[    1.896588] bq24295-charger 4-006b: bq24295_set_thermal_threshold 100
[    1.902952] bq24295-charger 4-006b: bq24295_set_sys_min 3700
[    1.908639] bq24295-charger 4-006b: bq24295_set_max_target_voltage 4150
[    1.915223] bq24295-charger 4-006b: bq24295_set_recharge_threshold 300
[    1.922119] bq24295-charger 4-006b: bq24295_set_terminal_current_limit iterm_disabled=0, iterm_ma=128
[    1.930917] bq24295-charger 4-006b: bq24295_set_precharge_current_limit bdi->prech_cur=128
[    1.940038] bq24295-charger 4-006b: bq24295_set_safty_timer 0
[    1.945088] bq24295-charger 4-006b: bq24295_set_input_voltage_limit 4520
[    1.972949] sdhci: Secure Digital Host Controller Interface driver
[    1.978151] sdhci: Copyright(c) Pierre Ossman
[    1.982441] sdhci-pltfm: SDHCI platform and OF driver helper
[    1.989092] sdhci_msm 7824900.sdhci: sdhci_msm_probe: ICE device is not enabled
[    1.995473] sdhci_msm 7824900.sdhci: No vreg data found for vdd
[    2.001530] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse_irq: error -22 reading irq cpu
[    2.009809] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for IRQ will be disabled
[    2.018600] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for cpu group will be disabled
[    2.030541] sdhci_msm 7824900.sdhci: sdhci_msm_probe: sdiowakeup_irq = 353
[    2.036867] sdhci_msm 7824900.sdhci: No vmmc regulator found
[    2.042027] sdhci_msm 7824900.sdhci: No vqmmc regulator found
[    2.048266] mmc0: SDHCI controller on 7824900.sdhci [7824900.sdhci] using 32-bit ADMA in legacy mode
[    2.080401] Welcome to pca955x_probe!!
[    2.084362] leds-pca955x 3-0020: leds-pca955x: Using pca9555 16-bit LED driver at slave address 0x20
[    2.095400] sdhci_msm 7824900.sdhci: card claims to support voltages below defined range
[    2.103125] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0x5 (ok)
[    2.114183] msm_otg 78d9000.usb: Avail curr from USB = 1500
[    2.120251] come to USB_SDP_CHARGER!
[    2.123215] Welcome to sn3199_probe!
[    2.126718] leds-sn3199 5-0064: leds-sn3199: Using sn3199 9-bit LED driver at slave address 0x64
[    2.136511] sn3199->led_en_gpio=21
[    2.139143] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0x6 (ok)
[    2.150207] usbcore: registered new interface driver usbhid
[    2.154864] usbhid: USB HID core driver
[    2.159825] sps:BAM 0x078c4000 is registered.
[    2.163573] bimc-bwmon 408000.qcom,cpu-bwmon: BW HWmon governor registered.
[    2.171080] devfreq soc:qcom,cpubw: Couldn't update frequency transition information.
[    2.178513] coresight-fuse a601c.fuse: QPDI fuse not specified
[    2.184242] coresight-fuse a601c.fuse: Fuse initialized
[    2.192407] coresight-csr 6001000.csr: CSR initialized
[    2.197263] coresight-tmc 6026000.tmc: Byte Counter feature enabled
[    2.203204] sps:BAM 0x06084000 is registered.
[    2.207301] coresight-tmc 6026000.tmc: TMC initialized
[    2.212681] coresight-tmc 6025000.tmc: TMC initialized
[    2.220071] nidnt boot config: 0
[    2.224563] mmc0: new ultra high speed SDR50 SDIO card at address 0001
[    2.231120] coresight-tpiu 6020000.tpiu: NIDnT on SDCARD only mode
[    2.236440] coresight-tpiu 6020000.tpiu: TPIU initialized
[    2.242808] coresight-replicator 6024000.replicator: REPLICATOR initialized
[    2.249372] coresight-stm 6002000.stm: STM initialized
[    2.255034] coresight-hwevent 606c000.hwevent: Hardware Event driver initialized
[    2.262312] Netfilter messages via NETLINK v0.30.
[    2.266306] nf_conntrack version 0.5.0 (920 buckets, 3680 max)
[    2.272312] ctnetlink v0.93: registering with nfnetlink.
[    2.277565] ip_set: protocol 6
[    2.280568] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.285723] arp_tables: (C) 2002 David S. Miller
[    2.290146] TCP: cubic registered
[    2.293915] NET: Registered protocol family 10
[    2.298740] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    2.303407] sit: IPv6 over IPv4 tunneling driver
[    2.308481] NET: Registered protocol family 17
[    2.312340] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
[    2.325094] Bridge firewalling registered
[    2.328930] Ebtables v2.0 registered
[    2.333260] NET: Registered protocol family 27
[    2.341362] battery_core_register: Battery Core Version 5.0.0(Built at 20151202-21:36)!
[    2.348466] pmu_battery_probe: vbat_channel=21, tbat_channel=17
[    2.420236] ubi0: attaching mtd16
[    2.723941] ubi0: scanning is finished
[    2.732997] ubi0: attached mtd16 (name "system", size 69 MiB)
[    2.737783] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    2.744601] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    2.751333] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    2.758540] ubi0: good PEBs: 556, bad PEBs: 2, corrupted PEBs: 0
[    2.764305] ubi0: user volume: 3, internal volumes: 1, max. volumes count: 128
[    2.771476] ubi0: max/mean erase counter: 192/64, WL threshold: 4096, image sequence number: 35657280
[    2.780708] ubi0: available PEBs: 0, total reserved PEBs: 556, PEBs reserved for bad PEB handling: 38
[    2.789921] ubi0: background thread "ubi_bgt0d" started, PID 96
[    2.796395] android_bind cdev: 0xC6583E80, name: ci13xxx_msm
[    2.801508] file system registered
[    2.804974] mbim_init: initialize 1 instances
[    2.809228] mbim_init: Initialized 1 ports
[    2.815074] rndis_qc_init: initialize rndis QC instance
[    2.819713] jrd device_desc.bcdDevice: [0x0242]
[    2.823779] android_bind scheduled usb start work: name: ci13xxx_msm
[    2.830230] android_usb gadget: android_usb ready
[    2.834845] msm_hsusb msm_hsusb: [ci13xxx_start] hw_ep_max = 32
[    2.840741] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_RESET_EVENT received
[    2.847433] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_UDC_STARTED_EVENT received
[    2.855851] input: gpio-keys as /devices/soc:gpio_keys/input/input1
[    2.861452] qcom,qpnp-rtc qpnp-rtc-c7307000: setting system clock to 1970-01-01 06:36:41 UTC (23801)
[    2.870315] open file error /usb_conf/usb_config.ini
[    2.876412] jrd_usb_start_work open file erro /usb_conf/usb_config.ini, retry_count:0
[    2.884324] parse_legacy_cluster_params(): Ignoring cluster params
[    2.889468] ------------[ cut here ]------------
[    2.894186] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    2.914366] Modules linked in:
[    2.917339] CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.20 #1
[    2.923171] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    2.931092] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    2.939175] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    2.947895] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    2.956189] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    2.963527] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    2.971380] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    2.980118] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    2.988467] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    2.996626] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.004786] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.012739] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.021459] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.030217] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.038818] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.046888] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.054432] ---[ end trace e9ec50b1ec4c8f73 ]---
[    3.059012] ------------[ cut here ]------------
[    3.063604] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    3.083858] Modules linked in:
[    3.086870] CPU: 0 PID: 1 Comm: swapper Tainted: G        W      3.18.20 #1
[    3.093814] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    3.101575] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    3.109641] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    3.118412] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    3.126745] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    3.134126] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    3.141906] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    3.150702] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    3.159120] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    3.167285] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.175444] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.183398] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.192107] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.200877] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.209475] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.217542] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.225090] ---[ end trace e9ec50b1ec4c8f74 ]---
[    3.229667] /soc/qcom,lpm-levels/qcom,pm-cluster@0: No CPU phandle, assuming single cluster
[    3.239954] qcom,cc-debug-mdm9607 1800000.qcom,debug: Registered Debug Mux successfully
[    3.247619] emac_lan_vreg: disabling
[    3.250507] mem_acc_corner: disabling
[    3.254196] clock_late_init: Removing enables held for handed-off clocks
[    3.262690] ALSA device list:
[    3.264732]   No soundcard
[    3.274083] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" started, PID 102
[    3.305224] UBIFS (ubi0:0): recovery needed
[    3.466156] UBIFS (ubi0:0): recovery completed
[    3.469627] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0, name "rootfs"
[    3.476987] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    3.486876] UBIFS (ubi0:0): FS size: 45838336 bytes (43 MiB, 361 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    3.497417] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[    3.503078] UBIFS (ubi0:0): media format: w4/r0 (latest is w4/r0), UUID 4DBB2F12-34EB-43B6-839B-3BA930765BAE, small LPT model
[    3.515582] VFS: Mounted root (ubifs filesystem) on device 0:12.
[    3.520940] Freeing unused kernel memory: 276K (c07e7000 - c082c000)
INIT: version 2.88 booting

16 December 2020

Jonathan McDowell: DeskPi Pro + 8GB Pi 4

DeskPi Pro Raspberry Pi case

Despite having worked on a number of ARM platforms I've never actually had an ARM based development box at home. I have a Raspberry Pi B Classic (the original 256MB rev 0002 variant) a coworker gave me some years ago, but it's not what you'd choose for a build machine and generally gets used as a self contained TFTP/console server for hooking up to devices under test. Mostly I've been able to do kernel development with the cross compilers already built as part of Debian, and either use pre-built images or Debian directly when I need userland pieces. At a previous job I had a Marvell MACCHIATObin available to me, which works out as a nice platform - quad core A72 @ 2GHz with 16GB RAM, proper SATA and a PCIe slot. However they're still a bit pricey for a casual home machine. I really like the look of the HoneyComb LX2 - 16 A72 cores, up to 64GB RAM - but it's even more expensive.

So when I saw the existence of the 8GB Raspberry Pi 4 I was interested. Firstly, the Pi 4 is a proper 64 bit device (my existing Pi B is ARMv6 which means it needs to run Raspbian instead of native Debian armhf), capable of running an upstream kernel and unmodified Debian userspace. Secondly the Pi 4 has a USB 3 controller sitting on a PCIe bus rather than just the limited SoC USB 2 controller. It's not SATA, but it's still a fairly decent method of attaching some storage that's faster/more reliable than an SD card. Finally 8GB RAM is starting to get to a decent amount - for a headless build box 4GB is probably generally enough, but I wanted some headroom.

The Pi comes as a bare board, so I needed a case. Ideally I wanted something self contained that could take the Pi, provide a USB/SATA adaptor and take the drive too. I came across the pre-order for the DeskPi Pro, decided it was the sort of thing I was after, and ordered one towards the end of September. It finally arrived at the start of December, at which point I got round to ordering a Pi 4 from CPC. Total cost ~ £120 for the case + Pi.

The Bad First, let's get the bad parts out of the way.

Broken USB port (right)

I managed to break a USB port on the DeskPi. It has a pair of forward facing ports; I plugged my wireless keyboard dongle into it and when trying to remove it the solid spacer bit in the socket broke off. I've never had this happen to me before and I've been using USB devices for 20 years, so I'm putting the blame on a shoddy socket.

The first drive I tried was an old Crucial M500 mSATA device. I have an adaptor that makes it look like a normal 2.5" drive so I used that. Unfortunately it resulted in a boot loop; the Pi would boot its initial firmware, try to talk to the drive and then reboot before even loading Linux. The DeskPi Pro comes with an M.2 adaptor and I had a spare M.2 drive, so I tried that and it all worked fine. This might just be power issues, but it was an unfortunate experience, especially after the USB port had broken off. (Given I ended up using an M.2 drive, another case option would have been the Argon ONE M.2, which is a bit more compact.)

The Annoying

DeskPi Pro without rear bezel

The case is a little snug; I was worried I was going to damage things as I slid it in. Additionally the construction process is a little involved. There's a good set of instructions, but there are a lot of pieces and screws involved. This includes a couple of FFC cables to join things up. I think this is because they've attempted to make a compact case rather than allowing a little extra room, and it does have the advantage that once assembled it feels robust without anything loose in it.

DeskPi Pro with rear bezel and USB3 dongle

I hate the need for an external USB3 dongle to bridge from the Pi to the USB/SATA adaptor. All the cases I've seen with an internal drive bay have to do this, because the USB3 isn't brought out internally by the Pi, but it just looks ugly to me. It's hidden at the back, but meh.

Fan control is via a USB/serial device, which is fine, but it attaches to the USB-C power port, which defaults to being a USB peripheral. Raspbian based kernels support device tree overlays, which allows easy reconfiguration to host mode, but for a Debian based system I ended up rolling my own dtb file. I changed
#include "bcm283x-rpi-usb-peripheral.dtsi"
to
#include "bcm283x-rpi-usb-host.dtsi"
in arch/arm/boot/dts/bcm2711-rpi-4-b.dts and then I did:
cpp -nostdinc -I include -I arch -undef -x assembler-with-cpp \
    arch/arm/boot/dts/bcm2711-rpi-4-b.dts > rpi4.preprocessed
dtc -I dts -O dtb rpi4.preprocessed -o bcm2711-rpi-4-b.dtb
and the resulting bcm2711-rpi-4-b.dtb file replaced the one in /boot/firmware. This isn't a necessary step if you don't want to use the cooling fan in the case, or the front USB ports, and it's not really anyone's fault, but it was an annoying extra step to have to figure out. The DeskPi came with a microSD card that was supposed to have RaspiOS already on it. It didn't; it was blank. In my case that was fine, because I wanted to use Debian, but it was a minor niggle.

The Good I used Gunnar's pre-built Pi Debian image and it Just Worked; I dd'd it to the microSD as instructed and the Pi 4 came up with working wifi, video and USB, enabling me to get it configured for my network. I did an apt upgrade and got updated to the Buster 10.7 release, as well as the latest 5.9 backport kernel, and everything came back without effort after a reboot. It's lovely to be able to run Debian on this device without having to futz around with self-compiled kernels.

The DeskPi makes a lot of effort to route things externally. The SD slot is brought out to the front, making it easy to fiddle with the card contents without having to open the case to replace it. All the important ports are brought out to the back either through orientation of the Pi, or extenders in the case. That means the built in Pi USB ports, the HDMI sockets (conveniently converted to full size internally), an audio jack and a USB-C power port. The aforementioned USB3 dongle for the bridge to the drive is the only external thing that's annoying.

Thermally things seem good too. I haven't done a full torture test yet, but with the fan off the system is sitting at about 40°C while fairly idle. Some loops in bash that push load up to above 2 get the temperature up to 46°C or so, and turning the fan on brings it down to 40°C again. It's audible, but quieter than my laptop and not annoying.

I liked the way the case came with everything I needed other than the Pi 4 and a suitable disk drive. There was an included PSU (a proper USB-C PD device, UK plug), the heatsink/fan is there, the USB/SATA converter is there and even an SD card is provided (though that's just because I had a pre-order).

Speaking of the SD, I only needed it for initial setup. Recent Pi 4 bootloaders are capable of booting directly from USB mass storage devices. So I upgraded using the RPi EEPROM Recovery image (which just needs to be extracted to the SD FAT partition, no need for anything complicated - boot with it and the screen goes all green and you know it's ok), then created a FAT partition at the start of the drive for the kernel / bootloader config and a regular EXT4 partition for root. Copied everything over, updated paths, took out the SD and it all just works happily; a rough sketch of those steps follows below.
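That partition/copy step, roughly, for anyone repeating it. This is a sketch under assumptions rather than the exact commands used: it assumes the USB drive shows up as /dev/sda, the system is still running from the SD card, and the firmware partition of the running system is mounted at /boot/firmware as above.

# Partition the drive: small FAT partition for firmware/kernel, rest ext4 for root
parted /dev/sda -- mklabel msdos \
  mkpart primary fat32 1MiB 256MiB \
  mkpart primary ext4 256MiB 100%
mkfs.vfat -F 32 /dev/sda1
mkfs.ext4 /dev/sda2

# Copy the firmware partition and the root filesystem across
mkdir -p /mnt/firmware /mnt/root
mount /dev/sda1 /mnt/firmware
mount /dev/sda2 /mnt/root
cp -a /boot/firmware/. /mnt/firmware/
rsync -aHAXx / /mnt/root/   # -x stays on one filesystem, so /proc, /sys, /boot/firmware are skipped

# Finally update root= in cmdline.txt on the new FAT partition and the
# /boot/firmware entry in /mnt/root/etc/fstab to point at the new partitions.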

Summary My main complaint is the broken USB port, which feels like the result of a cheap connector. For a front facing port expected to see more use than the rear ports I think there's a reasonable expectation of robustness. However I'm an early adopter and maybe future runs will be better. Other than that I'm pretty happy. The case is exactly the sort of thing I wanted; I was looking for something that would turn the Pi into a box that can sit on my desk on the network and that I don't have to worry about knocking wires out of or lots of cables hooking bits up. Everything being included made it very convenient to get up and running. I still haven't poked the Pi that hard, but first impressions are looking good for it being a trouble-free ARM64 dev box in the corner, until I can justify a HoneyComb.

15 September 2020

Russell Coker: More About the PowerEdge R710

I've got the R710 (mentioned in my previous post [1]) online. When testing the R710 at home I noticed that sometimes the VGA monitor I was using would start flickering in some parts of the BIOS setup; it seemed that the horizontal sync wasn't working properly. It didn't seem to be a big deal at the time. When I deployed it, the KVM display that I had planned to use with it mostly didn't display anything, and when the display was working the KVM keyboard wouldn't work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the R710 also wouldn't work with my VGA->HDMI device, so I couldn't get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to change 0 to 9 when entering numbers; that would make it easier to enter 8.8.8.8 for the DNS server. Another thing Dell should do is make the default gateway have a default value according to the IP address and netmask of the server. When I got IDRAC going it was easy to set up a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller (a sketch of that step follows at the end of this post), and then reboot into the server image.

When I transferred the SSDs from the old server to the newer Dell server the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in, so they are hanging in mid-air attached only by the SATA/SAS connectors. Plugging them in took the space from the drive above, so instead of having 2*3.5" disks I have 1*2.5" SSD and need the extra space to get my hand in. The R710 is designed for 6*3.5" disks and I'm going to have trouble if I ever want to have more than 3*2.5" SSDs. Fortunately I don't think I'll need more SSDs.

After booting the system I started getting alerts about a fault in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it's in an M.2 to regular SATA adaptor which might have some problems. The data seems fine though: a BTRFS scrub found no checksum errors. I guess I'll have to buy a replacement SSD soon.

I configured the system to use the nosmt kernel command line option to disable hyper-threading (which won't provide much performance benefit but makes certain types of security attacks much easier). I've configured BOINC to run on 6/8 CPU cores and surprisingly that didn't cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn't need a lot of cooling when run with SSDs.

Update: It's an R710 not a T710. I mostly deal with Dell Tower servers and typed the wrong letter out of habit.
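For reference, the "new initrd with the MegaRAID driver" step can look roughly like the sketch below on a Debian system. This is an illustration under assumptions rather than the exact commands used: it assumes a rescue environment with the server's root filesystem mounted at /mnt and a controller handled by the stock megaraid_sas module.

# From the rescue system, with the server's root filesystem at /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt

# Make sure the RAID controller driver ends up in the initramfs, then rebuild it
echo megaraid_sas >> /etc/initramfs-tools/modules
update-initramfs -u -k all

# The nosmt option mentioned above goes into GRUB_CMDLINE_LINUX in
# /etc/default/grub, followed by a run of update-grub.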

Next.