Search Results: "lars"

29 September 2024

Reproducible Builds: Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler. Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.

Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on? Kees Cook: I'm a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel's protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible; people who understand the ARM architecture, people who understand the memory management subsystem, people who understand how to make the kernel less buggy.
Vagrant: Could you describe the path that led you to working on this sort of thing? Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo's security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian's package building process at the time was sort of a chaotic free-for-all, as there wasn't a centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top-down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work as they were based on ebuild out of Gentoo, and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an upstream-first requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget I tried to get folks paid (or hired) to work on these areas when it wasn't already their job.
Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler? Kees: I've found the term "undefined behavior" to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating "unexpected behavior" or "ambiguous language constructs". At the end of the day ambiguity leads to bugs, and bugs lead to exploitable security flaws. I've been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identifying new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints. None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades, but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest hanging fruit. One clear example in recent years was the elimination of C's implicit fall-through in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn't present. But this is ambiguous: is the code meant to fall through, or did the author just forget a break statement? By defining the [[fallthrough]] statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we actually found that 1 in 10 added [[fallthrough]] statements were actually missing break statements. This was an extraordinarily common bug! So getting rid of that ambiguity is where we have been. Another area I've been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can't say "because we installed locks on the doors, 20% fewer break-ins have happened." Much of our signal is always secondary or retrospective, which is frustrating: "This class of flaw was used X much over the last decade, so if we have eliminated that class of flaw and will never see it again, what is the impact?" Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.
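To make the fall-through discussion concrete, here is a minimal C sketch; it is an editorial illustration, not kernel code. The local "fallthrough" macro is an assumption standing in for the kernel's own macro, which wraps the compiler attribute, and building with -Wimplicit-fallthrough (GCC/Clang) flags the first variant while leaving the annotated one alone.

#include <stdio.h>

/* The Linux kernel wraps the attribute in a `fallthrough` macro;
 * this local definition is an illustrative stand-in. */
#define fallthrough __attribute__((__fallthrough__))

/* Ambiguous: did the author intend to fall through from case 1 to
 * case 2, or just forget a break? With -Wimplicit-fallthrough the
 * compiler warns about this silent fall-through. */
static int classify_implicit(int n)
{
        int score = 0;

        switch (n) {
        case 1:
                score += 1;     /* falls through silently */
        case 2:
                score += 2;
                break;
        default:
                score = -1;
        }
        return score;
}

/* Unambiguous: the fall-through is declared, so the intent is
 * recorded in the code and the warning is silenced. */
static int classify_explicit(int n)
{
        int score = 0;

        switch (n) {
        case 1:
                score += 1;
                fallthrough;
        case 2:
                score += 2;
                break;
        default:
                score = -1;
        }
        return score;
}

int main(void)
{
        printf("implicit: %d, explicit: %d\n",
               classify_implicit(1), classify_explicit(1));
        return 0;
}

Both functions behave identically at run time; the only difference is that the second one states the intent, which is exactly the ambiguity the interview describes removing.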
Vagrant: So it is hard to identify how effective this is; how bad would it be if people just gave up? Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else are doing for their respective software ecosystems has shown that the price of functional exploits on the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution. If those were cheap then obviously we are not doing something right, and it becomes clear that it's trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up, into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can't accept. The safety of my grandparents shouldn't be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices, otherwise why use them at all?
Vagrant: What has been your biggest success in recent years? Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people's work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board, even though there were a lot of fears about forking the C language. The worry was that developers would come to depend on zero-initialized stack variables, but this hasn't been the case because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time but now you can count on the contents of your stack at run-time and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.
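As a rough illustration of what default stack initialization changes in practice, the toy C function below returns a struct field that is never assigned. Built normally, the value is whatever stale data the stack happened to hold; built with -ftrivial-auto-var-init=zero (the Clang/GCC option behind the kernel's CONFIG_INIT_STACK_ALL_ZERO) it is deterministically zero. This is a hedged sketch for readers, not code from the projects discussed.

#include <stdio.h>
#include <string.h>

struct request {
        int  id;
        char label[16];
        int  flags;     /* never initialised below */
};

/* Only id and label are filled in; flags is forgotten. Without
 * automatic variable initialisation its value is stale stack data,
 * which is how memory contents leak or control flow gets steered. */
static int build_request(int id, const char *label)
{
        struct request req;

        req.id = id;
        strncpy(req.label, label, sizeof(req.label) - 1);
        req.label[sizeof(req.label) - 1] = '\0';

        return req.flags;   /* uninitialised read, deliberately */
}

int main(void)
{
        /* With -ftrivial-auto-var-init=zero this reliably prints 0;
         * otherwise the result is whatever was left on the stack. */
        printf("flags = %d\n", build_request(42, "probe"));
        return 0;
}

The compile-time warning the interview mentions still fires here when the compiler can see the missing assignment, so developers are not encouraged to rely on the implicit zero.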
Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work? Kees: One of the questions I frequently get is, "What version of the Linux kernel has feature $foo?" If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc.: all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, just being able to trust that the binaries you are running are going to behave the same for the same build environment is critical for sane testing.
Vagrant: Have you used diffoscope? Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source level changes to satisfy some new compiler requirement but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening, what is intended in the cases where the old ambiguity does actually match the new unambiguous description of what is intended. The binary shouldn't change. We have used diffoscope to compare the before and after binaries to confirm that "yep, there is no change in binary".
Vagrant: You cannot just use checksums for that? Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there's flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!
Vagrant: Where has reproducible builds affected you? Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor and just being able to ask the question "is my build environment running the expected code?" and to be able to compare the output generated from one install that never had a vulnerable XZ and one that did have a vulnerable XZ and compare the results of what you get. That was important for kernel builds because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn't finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.
Vagrant: What do you want to see for the near or distant future in security work? Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding maintain a reproducible kernel build CI. There have been places where there are certain architecture configurations or certain build configurations where we lose reproducibility, and right now we have sort of a standard open source development feedback loop where those things get fixed, but the time between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.
Vagrant: Well, thanks for that! Any last closing thoughts? Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.
Vagrant: Likewise for your work!


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

14 July 2024

Ravi Dwivedi: Kenya Visa Process

Prior to arrival in Kenya, you need to apply for an Electronic Travel Authorization (eTA) on their website by uploading all the required documents. This system has been in place since January 2024, after the country abolished the visa system and implemented the eTA portal. The required documents will depend on the purpose of your visit, which, in my case, was to attend a conference. Here is the list of documents I submitted for my eTA: Reservation means I didn't book the flights and hotels, but rather reserved them. Additionally, optional means that those documents were not mandatory to submit, but I submitted them in the Other Documents section in order to support my application. After submitting the eTA, I had to make a payment of around 35 US Dollars (approximately 3000 Indian Rupees). It took 40 hours for me to receive an email from Kenya stating that my eTA had been approved, along with an attached PDF, making this one of my smoothest experiences of obtaining travel documents to travel to a country :). An eTA is technically not a visa, but I put the word visa in the title due to familiarity with the term.

8 July 2024

Dirk Eddelbuettel: RcppArmadillo 14.0.0-1 on CRAN: New Upstream

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1158 other packages on CRAN, downloaded 35.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 587 times according to Google Scholar. Conrad released a new major upstream version 14.0.0 a couple of days ago. We had been testing this new version extensively over several rounds of reverse-dependency checks across all 1100+ packages. This revealed nine packages requiring truly minor adjustments which eight maintainers made in a matter of days; all this was coordinated in issue #443. Following the upload, CRAN noticed one more issue (see issue #446) but this turned out to be local to the package. There are also renewed deprecation warnings with some Armadillo changes which we will need to address one-by-one. Last but not least with this release we also changed the package versioning scheme to follow upstream Armadillo more closely. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 14.0.0-1 (2024-07-05)
  • Upgraded to Armadillo release 14.0.0 (Stochastic Parrot)
    • C++14 is now the minimum recommended C++ standard
    • Faster handling of compound expressions by as_scalar(), accu(), dot()
    • Faster interactions between sparse and dense matrices
    • Expanded stddev() to handle sparse matrices
    • Expanded relational operators to handle expressions between sparse matrices and scalars
    • Added .as_dense() to obtain dense vector/matrix representation of any sparse matrix expression
    • Updated physical constants to NIST 2022 CODATA values
  • New package version numbering scheme following upstream versions
  • Re-enabling ARMA_IGNORE_DEPRECATED_MARKER for silent CRAN builds

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 June 2024

Matthew Palmer: Checking for Compromised Private Keys has Never Been Easier

As regular readers would know, since I never stop banging on about it, I run Pwnedkeys, a service which finds and collates private keys which have been disclosed or are otherwise compromised. Until now, the only way to check if a key is compromised has been to use the Pwnedkeys API, which is not necessarily trivial for everyone. Starting today, that's changing. The next phase of Pwnedkeys is to start offering more user-friendly tools for checking whether keys being used are compromised. These will typically be web-based or command-line tools intended to answer the question "is the key in this (certificate, CSR, authorized_keys file, TLS connection, email, etc) known to Pwnedkeys to have been compromised?".

Opening the Toolbox Available right now are the first web-based key checking tools in this arsenal. These tools allow you to:
  1. Check the key in a PEM-format X509 data structure (such as a CSR or certificate);
  2. Check the keys in an authorized_keys file you upload; and
  3. Check the SSH keys used by a user at any one of a number of widely-used code-hosting sites.
Further planned tools include live checking of the certificates presented in TLS connections (for HTTPS, etc), SSH host keys, command-line utilities for checking local authorized_keys files, and many other goodies.

If You Are Intrigued By My Ideas and wish to subscribe to my newsletter, now you can! I'm not going to be blogging every little update to Pwnedkeys, because that would probably get a bit tedious for readers who aren't as intrigued by compromised keys as I am. Instead, I'll be posting every little update in the Pwnedkeys newsletter. So, if you want to keep up-to-date with the latest and greatest news and information, subscribe to the newsletter.

Supporting Pwnedkeys All this work I'm doing on my own time, and I'm paying for the infrastructure from my own pocket. If you've got a few dollars to spare, I'd really appreciate it if you bought me a refreshing beverage. It helps keep the lights on here at Pwnedkeys Global HQ.

29 May 2024

Antoine Beaupré: Playing with fonts again

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish"; it feels visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code). So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and Emacs configuration (thankfully that's public!). One gotcha is I realized I didn't actually have a global font configuration in Emacs, as some Faces define their own font family, which overrides the frame defaults. This is what it looks like, before:
[Screenshot: a dark terminal showing the test sheet in Fira Mono]
After:
[Screenshot: a dark terminal showing the test sheet in Commit Mono]
(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display, I suspect this is something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.) They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable so that's something that's relatively easy to fix, if it's really a problem. Its weight is also a little heavier and wider than Fira which I find a little distracting right now, but maybe I'll get used to it. All characters seem properly distinguishable, although, if I'd really want to nitpick I'd say two of the signs are too different, with the latter (REGISTERED SIGN, ®) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all. I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something that really annoys me in Fira Mono now that I've noticed it (it's not aligned!).

A UTF-8 test file Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.
US keyboard coverage:
abcdefghijklmnopqrstuvwxyz 1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+ :"<>?
latin1 coverage: [accented Latin-1 characters lost in transcription]
EURO SIGN, TRADE MARK SIGN: € ™
ambiguity test:
e coC0ODQ iI71lL!
b6G&0B83  []() /\.
zs$S52Z%   '"
all characters in a sentence, uppercase:
the quick fox jumps over the lazy dog
THE QUICK FOX JUMPS OVER THE LAZY DOG
same, in french:
Portez ce vieux whisky au juge blond qui fume.
dès noël, où un zéphyr haï me vêt de glaçons würmiens, je dîne
d'exquis rôtis de bœuf au kir, à l'aÿ d'âge mûr, &cætera.
DÈS NOËL, OÙ UN ZÉPHYR HAÏ ME VÊT DE GLAÇONS WÜRMIENS, JE DÎNE
D'EXQUIS RÔTIS DE BŒUF AU KIR, À L'AŸ D'ÂGE MÛR, &CÆTERA.
Ligatures test:
-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: ::: __
<~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <  < >  > <. <.> .> +* =* =: :>
(* *) /* */ [   ]     ++ +++ \/ /\  - -  <!-- <!---
Box drawing alignment tests:
[box-drawing characters lost in transcription; the alignment grid could not be reproduced here]
Dashes alignment test:
HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________
Update: here is another such sample sheet, it's pretty good and has support for more languages while being still relatively small. So there you have it, got completely nerd-sniped by typography again. Now I can go back to writing a too-long proposal again. Sources and inspiration for the above:
  • the unicode(1) command, to lookup individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol)
  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page, amazingly the /usr/share/unicode database doesn't have any one file like this
  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters
  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes
  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"
  • UTF-8 sampler - unused, similar

Other fonts In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments. My requirements are:
  • no ligatures: yes, in the previous post, I wanted ligatures but I have changed my mind. after testing this, I find them distracting, confusing, and they often break the monospace nature of the display
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when lacking italics which is hard to read
  • free-ish, ultimately should be packaged in Debian
Here is the list of alternatives I have considered in the past and why I'm not using them:
  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave
  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code
  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except for forks, interestingly, Fira Code succeeds the alignment test but Fira Mono fails to show the X signs properly! packaged as fonts-firacode
  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack
  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit, somehow part of cool-retro-term
  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex
  • Inconsolata: no ligatures, maybe italics? more compressed than others, feels a little out of balance because of that, packaged in Debian as fonts-inconsolata
  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian
  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian
  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono
  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid
  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki
  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian
  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen
  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian
So, if I get tired of Commit Mono, I would probably try, in order:
  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono
Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing as the EM DASH metrics are just completely wrong (much too wide). This was tested using the Programming fonts site which has all the above fonts, which cannot be said of Font Squirrel or Google Fonts, amazingly. Other such tools:

26 April 2024

Russell Coker: Humane AI Pin

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I've ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn't work nearly as well as I think it should have worked, so I sent an email to the developers saying sorry, this product failed to work well so I can't say anything good about it, and didn't publish a review. One of the first things that caught my attention in the review is the note that the AI Pin doesn't connect to your phone. I think that everything should connect to everything else as a usability feature. For security we don't want so much connecting and it's quite reasonable to turn off various connections at appropriate times for security; the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad. The next noteworthy thing is the external battery which also acts as a magnetic attachment from inside your shirt. So I guess it's using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone, you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up, and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life, and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life which seems to be partly due to wireless charging from the detachable battery and partly due to being physically small. It seems the phablet form factor is the smallest viable personal computer at this time. The review glosses over what could be regarded as the 2 worst issues of the device. It does everything via the cloud (where the cloud means "a computer owned by someone I probably shouldn't trust") and it records everything. Strange that it's not getting the hate the Google Glass got. The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I'd rather have a Bluetooth attached tablet or something for operations that can't be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video, maybe technology isn't yet adequate to implement this properly. The first criticism of the device in the review part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. His question "who designed the Washington Monument" took 8 seconds to start answering in his demonstration. I asked the Alpaca LLM the same question running on 4 cores of a E5-2696 and it took 10 seconds to start answering and then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn't be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of a E5-2696v3. How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015?
Passmark says that Intel CPU can do 48GFlop with all 18 cores so 4 cores can presumably do about 10GFlop, which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM. The Alpaca model I'm testing with takes 4.2G of RAM to run which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for a LLM and still have as much RAM for other purposes as my main laptop as of a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running a LLM at that speed then I think it would be great for a first version; we can always rely on newer and faster hardware becoming available. Marques notes that the cause of some of the problems is likely due to a desire to make it a separate powerful product in the future and that if they gave it phone connectivity in the start they would have to remove that later on. I think that the real problem is that the profit motive is incompatible with good design. They want to have a product that's stand-alone and justifies the purchase price plus subscription, and that means not making it a "phone accessory". While I think that the best thing for the user is to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro which has the same issue of trying to be a stand-alone computer but not being properly capable of it. One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it's not convenient to write it down, there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone as a general purpose computing device with microphone can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time or when some other condition is met. Another possibility is to have a phone have a hotkey function that varies by what you are doing, EG if bushwalking have the hotkey be to take a photo or if on a flight have it be taking a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:
  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people
One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people, funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about the potential uses of the AI Pin while driving (as Marques stated an aim to do), but if it replaces the use of regular phones while driving it will make things less bad. Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise, the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort. Another interesting review of the pin is by Mrwhosetheboss [5], one of his examples is of asking the pin for advice about a chair but without him knowing the pin selected a different chair in the room. He compares this to using Google's apps on a phone and seeing which item the app has selected. He also said that he doesn't want to make an order based on speech, he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say "if you need to save $20 on a flight then a $24/month subscription computing service isn't for you", I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhosetheboss makes is regarding secret SMS, you probably wouldn't want to speak an SMS you are sending to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (IE not having a separate phone number and separate data store) is a desired feature. The most insightful point Mrwhosetheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that's developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software, there's lots of different ways that things could be done but we need to meet the expectations of users if we want our software to be used by many people. I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I'm now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I've written a blog post about Convergence vs Transferrence [7].

11 April 2024

Jonathan McDowell: Sorting out backup internet #1: recursive DNS

I work from home these days, and my nearest office is over 100 miles away, 3 hours door to door if I travel by train (and, to be honest, probably not a lot faster given rush hour traffic if I drive). So I'm reliant on a functional internet connection in order to be able to work. I'm lucky to have access to Openreach FTTP, provided by Aquiss, but I worry about what happens if there's a cable cut somewhere or some other long lasting problem. Worst case I could tether to my work phone, or try to find some local coworking space to use while things get sorted, but I felt like arranging a backup option was a wise move. Step 1 turned out to be sorting out recursive DNS. It's been many moons since I had to deal with running DNS in a production setting, and I've mostly done my best to avoid doing it at home too. dnsmasq has done a decent job at providing for my needs over the years, covering DHCP, DNS (+ tftp for my test device network). However I just let it forward to my ISP's nameservers, which means if that link goes down it'll no longer be able to resolve anything outside the house. One option would have been to point to a different recursive DNS server (Cloudflare's 1.1.1.1 or Google's Public DNS being the common choices), but I've no desire to share my lookup information with them. As another approach I could have done some sort of failover of resolv.conf when the primary network went down, but then I would have to get into moving files around based on networking status and that felt a bit clunky. So I decided to finally setup a proper local recursive DNS server, which is something I've kinda meant to do for a while but never had sufficient reason to look into. Last time I did this I did it with BIND 9 but there are more options these days, and I decided to go with unbound, which is primarily focused on recursive DNS. One extra wrinkle, pointed out by Lars, is that having dynamic name information from DHCP hosts is exceptionally convenient. I've kept dnsmasq as the local DHCP server, so I wanted to be able to forward local queries there. I'm doing all of this on my RB5009, running Debian. Installing unbound was a simple matter of apt install unbound. I needed 2 pieces of configuration over the default, one to enable recursive serving for the house networks, and one to enable forwarding of queries for the local domain to dnsmasq. I originally had specified the wildcard address for listening, but this caused problems with the fact my router has many interfaces and would sometimes respond from a different address than the request had come in on.
/etc/unbound/unbound.conf.d/network-resolver.conf
server:
  interface: 192.0.2.1
  interface: 2001:db8:f00d::1
  access-control: 192.0.2.0/24 allow
  access-control: 2001:db8:f00d::/56 allow

/etc/unbound/unbound.conf.d/local-to-dnsmasq.conf
server:
  domain-insecure: "example.org"
  do-not-query-localhost: no
forward-zone:
  name: "example.org"
  forward-addr: 127.0.0.1@5353

I then had to configure dnsmasq to not listen on port 53 (so unbound could), respond to requests on the loopback interface (I have dnsmasq restricted to only explicitly listed interfaces), and to hand out unbound as the appropriate nameserver in DHCP requests - once dnsmasq is not listening on port 53 it no longer does this by default.
/etc/dnsmasq.d/behind-unbound
interface=lo
port=5353
dhcp-option=option6:dns-server,[2001:db8:f00d::1]
dhcp-option=option:dns-server,192.0.2.1

With these minor changes in place I now have local recursive DNS being handled by unbound, without losing dynamic local DNS for DHCP hosts. As an added bonus I now get 10/10 on Test IPv6 - previously I was getting dinged on the ability for my DNS server to resolve purely IPv6 reachable addresses. Next step, actually sorting out a backup link.

12 February 2024

Andrew Cater: Lessons from (and for) colleagues - and, implicitly, how NOT to get on

I have had excellent colleagues both at my day job and, especially, in Debian over the last thirty-odd years. Several have attempted to give me good advice - others have been exemplars. People retire: sadly, people die. What impression do you want to leave behind when you leave here? Belatedly, I've come to realise that obduracy, sheer bloody mindedness, force of will and obstinacy will only get you so far. The following began very much as a tongue in cheek private memo to myself a good few years ago. I showed it to a colleague who suggested at the time that I should share it to a wider audience.

SOME ADVICE YOU MAY BENEFIT FROM

Personal conduct
  • Never argue with someone you believe to be arguing idiotically - a dispassionate bystander may have difficulty telling who's who.
  • You can't make yourself seem reasonable by behaving unreasonably
  • It does not matter how correct your point of view is if you get people's backs up
  • They may all be #####, @@@@@, %%%% and ******* - saying so out loud doesn't help improve matters and may make you seem intemperate.

Working with others
  • Be the change you want to be and behave the way you want others to behave in order to achieve the desired outcome.
  • You can demolish someone's argument constructively and add weight to good points rather than tearing down their ideas and hard work and being ultra-critical and negative - no-one likes to be told "You know - you've got a REALLY ugly baby there"
  • It's easier to work with someone than to work against them and have to apologise repeatedly.
  • Even when you're outstanding and superlative, even you had to learn it all once. Be generous to help others learn: you shouldn't have to teach too many times if you teach correctly once and take time in doing so.

Getting the message across
  • Stop: think: write: review: (peer review if necessary): publish.
  • Clarity is all: just because you understand it doesn't mean anyone else will.
  • It does not matter how correct your point of view is if you put it across badly.
  • If you're giving advice: make sure it is:
  • Considered
  • Constructive
  • Correct as far as you can (and)
  • Refers to other people who may be able to help
  • Say thank you promptly if someone helps you and be prepared to give full credit where credit's due.

Work is like that
  • You may not know all the answers or even have the whole picture - consult, take advice - LISTEN TO THE ADVICE
  • Sometimes the right answer is not the immediately correct answer
  • Corollary: Sometimes the right answer for the business is not your suggested/preferred outcome
  • Corollary: Just because you can do it like that in the real world doesn't mean that you can do it that way inside the business. [This realisation is INTENSELY frustrating but you have to learn to deal with it]
  • DON'T ALWAYS DO IT YOURSELF - Attempt to fix the system, sometimes allow the corporate monster to fail - then do it yourself and fix it. It is always easier and tempting to work round the system and Just Flaming Do It but it doesn't solve problems in the longer term and may create more problems and ill-feeling than it solves.

    [Worked out for Andy Cater for himself after many years of fighting the system as a misguided missile - though he will freely admit that he doesn't always follow them as often as he should :) ]


7 February 2024

Reproducible Builds: Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.

How we executed a critical supply chain attack on PyTorch John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform used by titans like Google, Meta, Boeing, and Lockheed Martin:
Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies; the list goes on. In short, it was bad. Quite bad.
The attack pivoted on PyTorch's use of self-hosted runners as well as submitting a pull request to address a trivial typo in the project's README file to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.

New Arch Linux forensic filesystem tool On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images. Called archlinux-userland-fs-cmp, the tool is supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt. Crucially, however, at no point is any file from the mounted filesystem eval'd or otherwise executed. Parsers are written in a memory safe language. More information about the tool can be found on their announcement message, as well as on the tool's homepage. A GIF of the tool in action is also available.

Issues with our SOURCE_DATE_EPOCH code? Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:
I'm not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now. Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.
Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute with further info.
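For readers who have not seen the snippet in question, the underlying task is parsing the SOURCE_DATE_EPOCH environment variable into a timestamp without tripping over empty values, trailing garbage or overflow. The following minimal C sketch shows that style of defensive parsing; it is an illustrative approximation, not the project's canonical snippet, and the exact checks the specification recommends may differ.

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Parse SOURCE_DATE_EPOCH defensively: reject empty values, trailing
 * garbage and values above LLONG_MAX (a coarse overflow guard). Falls
 * back to the current time when the variable is unset or invalid. */
static time_t source_date_epoch_or_now(void)
{
        const char *env = getenv("SOURCE_DATE_EPOCH");
        char *end = NULL;
        unsigned long long val;

        if (env == NULL || *env == '\0')
                return time(NULL);

        errno = 0;
        val = strtoull(env, &end, 10);
        if (errno != 0 || end == env || *end != '\0') {
                fprintf(stderr, "SOURCE_DATE_EPOCH is not a valid number\n");
                return time(NULL);
        }
        if (val > (unsigned long long)LLONG_MAX) {
                fprintf(stderr, "SOURCE_DATE_EPOCH is out of range\n");
                return time(NULL);
        }
        return (time_t)val;
}

int main(void)
{
        printf("build timestamp: %lld\n",
               (long long)source_date_epoch_or_now());
        return 0;
}

The subtle cases flagged in the thread (time_t width, C type promotion, behaviour of the various parsing functions) are exactly where simple versions of this kind of code tend to differ, which is why the request for expert review matters.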

Distribution updates In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours). Additionally 7 of the 8 bookworm images from the official download link build reproducibly at any later time. In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month, adding to our knowledge about identified issues. Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Community updates A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term "nondeterministic" [ ] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds. [ ]
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 254 and 255 to Debian but focusing on triaging and/or merging code from other contributors. This included adding support for comparing eXtensible ARchive (.XAR/.PKG) files courtesy of Seth Michael Larson [ ][ ], as well as considerable work from Vekhir in order to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [ ][ ][ ][ ]. Thanks!

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Reduce the number of arm64 architecture workers from 24 to 16. [ ]
    • Use diffoscope from the Debian release being tested again. [ ]
    • Improve the handling when killing unwanted processes [ ][ ][ ] and be more verbose about it, too [ ].
    • Don't mark a job as failed if a process marked as to-be-killed is already gone. [ ]
    • Display the architecture of builds that have been running for more than 48 hours. [ ]
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. [ ]
  • Package rescheduling changes:
    • Reduce IRC notifications to 1 when rescheduling due to package status changes. [ ]
    • Correctly set SUDO_USER when rescheduling packages. [ ]
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). [ ]
  • OpenWrt-related changes:
    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [ ][ ]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. [ ]
  • Misc:
    • As it's January, set the real_year variable to 2024 [ ] and bump various copyright years as well [ ].
    • Fix a large (!) number of spelling mistakes in various scripts. [ ][ ][ ]
    • Prevent Squid and Systemd processes from being killed by the kernel's OOM killer. [ ]
    • Install the iptables tool everywhere, else our custom rc.local script fails. [ ]
    • Cleanup the /srv/workspace/pbuilder directory on boot. [ ]
    • Automatically restart Squid if it fails. [ ]
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [ ][ ]
Significant amounts of node maintenance was performed by Holger Levsen (eg. [ ][ ][ ][ ][ ][ ][ ] etc.) and Vagrant Cascadian (eg. [ ][ ][ ][ ][ ][ ][ ][ ]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. [ ] Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. [ ]

Upstream patches The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian; this was quickly fixed, however. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

2 February 2024

Ian Jackson: UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot. The only thing that got their attention was my draft Particulars of Claim for intended legal action. Surprisingly, I got them to admit in writing that the disbursement fee they charge recipients alongside the actual VAT, is just something they made up with no legal basis. What happened Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS. UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier. But this post isn't mostly about that. This post is about VAT. You see, import VAT was due, because of fucking Brexit. UPS made a complete hash of collecting that VAT. Their computers can't issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input. The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it). This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs. Having tried asking nicely, and sending stiff letters, I too threatened litigation. I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the "Part 8 procedure", which involves sending all of the written evidence you're going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn't engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I'd send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me. Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I've redacted the reference numbers). Outcome The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I've received the £50. VAT calculation They also finally included an actual explanation of the VAT calculation. It's absurd, but it's not UPS's absurd:
The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84. HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure. The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment. The correct calculation for the import charges is therefore as follows:
Goods value divided by exchange rate: 2,489.53 EUR / 1.1683 = 2,130.89 GBP
Duty: Goods value plus freight (%): 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate: x 0% = 0 GBP
VAT: Goods value plus freight (100%): 2,130.89 GBP + 0 = 2,130.89 GBP. That total plus duty and VAT adjustment: 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,348.08 GBP. That total times 20% VAT = 427.67 GBP
As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.
This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It's completely ridiculous that it took four months and near-litigation to obtain it.

Disbursement fee

One more thing. UPS billed me a £12 "disbursement fee". When you import something, there's often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt. It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn't any legal basis for this. In my own offer of settlement I proposed that UPS should:
State under what principle of English law (such as, what enactment or principle of Common Law), you levy the disbursement fee (or refund it).
To my surprise they actually responded to this in their own settlement letter. (They didn't, for example, mention the harassment at all.) They said (emphasis mine):
A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited. Regarding UPS's disbursement fee, this is an administrative charge levied for the use of UPS's deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.
On UPS's analysis, they can just make up whatever fee they like. That is clearly not right (and I don't even need to refer to consumer protection law, which would also make it obviously unlawful). And the fact that everyone does it doesn't make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse. Next time this comes up I might have a go at getting the fee back. (Obviously I'll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn't apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: Schemes made under section 89 of the Postal Services Act 2000. This is obviously ridiculous, but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).


22 January 2024

Russell Coker: Storage Trends 2024

It has been less than a year since my last post about storage trends [1] and enough has changed to make it worth writing again. My previous analysis was that for <2TB only SSD made sense, for 4TB SSD made sense for business use while hard drives were still a good option for home use, and for 8TB+ hard drives were clearly the best choice for most uses. I will start by looking at MSY prices; they aren't the cheapest (you can get cheaper online) but they are competitive and they make it easy to compare the different options. I'll also compare the cheapest options in each size; there are more expensive options, but usually if you want to pay more then the performance benefits of SSD (both SATA and NVMe) are even more appealing. All prices are in Australian dollars and are for parts that are readily available in Australia, but the relative prices of the parts are probably similar in most countries. The main issue here is when to use SSD and when to use hard disks, and then if SSD is chosen which variety to use.

Small Storage

For my last post the cheapest storage devices from MSY were $19 for a 128G SSD, now it's $24 for a 128G SSD or NVMe device. I don't think the Australian dollar has dropped much against foreign currencies, so I guess this is partly companies wanting more profits and partly due to the demand for more storage. Items that can't sell in quantity need higher profit margins if shops are to keep them in stock. 500G SSDs are around $33 and 500G NVMe devices around $36, so for most use cases it wouldn't make sense to buy anything smaller than 500G. The cheapest hard drive is $45 for a 1TB disk. A 1TB SATA SSD costs $61 and a 1TB NVMe costs $79. So 1TB disks aren't a good option for any use case. A 2TB hard drive is $89. A 2TB SATA SSD is $118 and a 2TB NVMe is $145. I don't think the small savings you can get from using hard drives make them worth using for 2TB. For most people, if you have a system that's important to you then $145 on storage isn't a lot to spend. It seems hardly worth buying less than 2TB of storage, even for a laptop. Even if you don't use all the space, larger storage devices tend to support more writes before wearing out, so you still gain from it. A 2TB NVMe device you buy for a laptop now could be used in every replacement laptop for the next 10 years. I only have 512G of storage in my laptop because I have a collection of SSD/NVMe devices that have been replaced in larger systems, so the 512G is essentially free for my laptop as I bought a larger device for a server. For small business use it doesn't make sense to buy anything smaller than 2TB for any system other than a router. If you buy smaller devices then you will sometimes have to pay people to install bigger ones, and when the price is $145 it's best to just pay that up front and be done with it.

Medium Storage

A 4TB hard drive is $135. A 4TB SATA SSD is $319 and a 4TB NVMe is $299. The prices haven't changed a lot since last year, but a small increase in hard drive prices and a small decrease in SSD prices makes SSD more appealing for this market segment. A common size range for home servers and small business servers is 4TB or 8TB of storage. To do that on SSD means about $600 for 4TB of RAID-1 or $900 for 8TB of RAID-5/RAID-Z. That's quite affordable for that use. For 8TB of less important storage an 8TB hard drive costs $239 and an 8TB SATA SSD costs $899, so a hard drive clearly wins for the specific case of non-RAID single device storage.
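To make the comparison easier to eyeball, here is a small Python sketch that computes price per TB from the prices quoted in this post (AUD; the $600 figure for the 7.68TB U.2 device is the approximate price given in the Serious Storage section below).

    # Price per TB for the cheapest options quoted in this post (AUD).
    options = [
        ("1TB hard drive",   1.00,  45),
        ("1TB SATA SSD",     1.00,  61),
        ("1TB NVMe",         1.00,  79),
        ("2TB hard drive",   2.00,  89),
        ("2TB SATA SSD",     2.00, 118),
        ("2TB NVMe",         2.00, 145),
        ("4TB hard drive",   4.00, 135),
        ("4TB NVMe",         4.00, 299),
        ("4TB SATA SSD",     4.00, 319),
        ("8TB hard drive",   8.00, 239),
        ("8TB SATA SSD",     8.00, 899),
        ("7.68TB U.2",       7.68, 600),   # approximate price, see below
        ("16TB hard drive", 16.00, 559),
    ]

    # Print cheapest per-TB options first.
    for name, tb, price in sorted(options, key=lambda o: o[2] / o[1]):
        print(f"{name:>16}  ${price:4}  ${price / tb:7.2f}/TB")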
Note that the U.2 devices are more competitive for 8TB than SATA, but I included them in the next section because they are more difficult to install.

Serious Storage

With 8TB being an uncommon and expensive option for consumer SSDs, the cheapest approach is multiple 4TB devices. To have multiple NVMe devices in one PCIe slot you need PCIe bifurcation (treating the PCIe slot as multiple slots). Most of the machines I use don't support bifurcation and most affordable systems with ECC RAM don't have it. For cheap NVMe-type storage there are U.2 devices (the enterprise form of NVMe). Until recently they were too expensive to use for desktop systems, but now there are PCIe cards for internal U.2 devices: $14 for a card that takes a single U.2 is a common price on AliExpress, and prices below $600 for a 7.68TB U.2 device are common. That's cheaper on a per-TB basis than SATA SSD and NVMe! There are PCIe cards that take up to 4*U.2 devices (which probably require bifurcation), which means you could have 8+ U.2 devices in one not particularly high end PC for 56TB of RAID-Z NVMe storage. Admittedly $4200 for 56TB is moderately expensive, but it's in the price range for a small business server or a high end home server. A more common configuration might be 2*7.68TB U.2 on a single PCIe card (or 2 cards if you don't have bifurcation) for 7.68TB of RAID-1 storage. For SATA SSD, AliExpress has a 6*2.5" hot-swap enclosure that fits in a 5.25" bay for $63, so if you have 2*5.25" bays you could have 12*4TB SSDs for 44TB of RAID-Z storage. That wouldn't be much cheaper than 8*7.68TB U.2 devices and would be slower and have less space. But it would be a good option if PCIe bifurcation isn't possible. 16TB SATA hard drives cost $559, which is almost exactly half the price per TB of U.2 storage. That doesn't seem like a good deal. If you want 16TB of RAID storage then 3*7.68TB U.2 devices only cost about 50% more than 2*16TB SATA disks. In most cases paying 50% more to get NVMe instead of hard disks is a good option. As sizes go above 16TB prices go up in a more than linear manner; I guess they don't sell much volume of larger drives. 15.36TB U.2 devices are on sale for about $1300, slightly more than twice the price of a 16TB disk. That's within the price range of small businesses and serious home users. Also it should be noted that the U.2 devices are designed for enterprise levels of reliability, and the hard disk prices I'm comparing to are the cheapest available. If NAS hard disks were compared then the price benefit of hard disks would be smaller. Probably the biggest problem with U.2 for most people is that it's an uncommon technology that few people have much experience with or spare parts for testing. Also you can't buy U.2 gear at your local computer store, which might mean that you want to have spare parts on hand, which is an extra expense. For enterprise use I've recently been involved in discussions with a vendor that sells multiple-petabyte arrays of NVMe. Apparently NVMe is cheap enough that there's no need to use anything else if you want a well performing file server.

Do Hard Disks Make Sense?

There are specific cases, like comparing an 8TB hard disk to an 8TB SATA SSD or a 16TB hard disk to a 15.36TB U.2 device, where hard disks have an apparent advantage. But when comparing RAID storage and counting the performance benefits of SSD, the savings of using hard disks don't seem to be that great. Is now the time that hard disks are going to die in the market?
If they can't get volume sales then prices will go up due to lack of economy of scale in manufacture and increased stock time for retailers. 8TB hard drives are now more expensive than they were 9 months ago when I wrote my previous post; has a hard drive price death spiral already started? SSDs are cheaper than hard disks at the smallest sizes, faster (apart from some corner cases with contiguous IO), take less space in a computer, and make less noise. At worst they are a bit over twice the cost per TB. But the most common requirements for storage are small enough and cheap enough that being twice as expensive as hard drives isn't a problem for most people. I predict that hard disks will become less popular in future and offer less of a price advantage. The vendors are talking about 50TB hard disks being available in future, but right now you can fit more than 50TB of NVMe or U.2 devices in a volume less than that of a 3.5" hard disk, so for storage density SSD can clearly win. Maybe in future hard disks will be used in arrays of 100TB devices for large scale enterprise storage. But for home users and small businesses the current sizes of SSD cover most uses. At the moment it seems that the one case where hard disks really compare well is backup devices. For backups you want large storage, good contiguous write speeds, and low prices so you can buy plenty of them.

Further Issues

The prices I've compared for SATA SSD and NVMe devices are all based on the cheapest devices available. I think it's a bit of a market for lemons [2], as devices often don't perform as well as expected and the incidence of fake products purporting to be from reputable companies is high on the cheaper sites. So you might as well buy the cheaper devices. An advantage of the U.2 devices is that you know that they will be reliable and perform well. One thing that concerns me about SSDs is the lack of knowledge of their failure cases. Filesystems like ZFS were specifically designed to cope with common failure cases of hard disks, and I don't think we have that much knowledge about how SSDs fail. But with 3 copies of metadata BTRFS or ZFS should survive unexpected SSD failure modes. I still have some hard drives in my home server; they keep working well enough and the prices on SSDs keep dropping. But if I was buying new storage for such a server now I'd get U.2. I wonder if tape will make a comeback for backup. Does anyone know of other good storage options that I missed?
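As a footnote to the RAID configurations discussed in the Serious Storage section above, here is a tiny sketch of the usable-capacity arithmetic, assuming single-parity RAID-Z (one device's worth of capacity lost to parity) and simple two-way RAID-1 mirroring; real numbers will vary slightly with filesystem overhead.

    def raidz1_usable_tb(devices: int, size_tb: float) -> float:
        # Single-parity RAID-Z: one device's worth of space goes to parity.
        return (devices - 1) * size_tb

    def raid1_usable_tb(devices: int, size_tb: float) -> float:
        # Two-way mirror: half the raw capacity is usable.
        return devices / 2 * size_tb

    print(raid1_usable_tb(2, 7.68))     # 2 x 7.68TB U.2 mirrored      ->  7.68
    print(raidz1_usable_tb(3, 7.68))    # 3 x 7.68TB U.2 in RAID-Z     -> 15.36
    print(raidz1_usable_tb(12, 4.0))    # 12 x 4TB SATA SSD in RAID-Z  -> 44.0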

19 January 2024

Reproducible Builds (diffoscope): diffoscope 254 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 254. This version includes the following changes:
[ Chris Lamb ]
* Reflow some code according to black.
[ Seth Michael Larson ]
* Add support for comparing the 'eXtensible ARchive' (.XAR/.PKG) file format.
[ Vagrant Cascadian ]
* Add external tool on GNU Guix for 7z.
You can find out more by visiting the project homepage.
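For anyone wanting to try the new XAR/PKG support, something like the following sketch should work with the version announced above (254) or later; the two .pkg filenames are placeholders, and note that diffoscope deliberately exits non-zero when it finds differences, so that alone is not an error.

    # Compare two XAR-based .pkg archives (placeholder filenames) and write
    # an HTML report of the differences.
    import subprocess

    result = subprocess.run(
        ["diffoscope", "--html", "report.html", "old.pkg", "new.pkg"],
        check=False,
    )
    # diffoscope exits 0 when the inputs are identical and 1 when it found
    # differences, so only treat other codes as real errors.
    if result.returncode == 0:
        print("inputs are identical")
    elif result.returncode == 1:
        print("differences found, see report.html")
    else:
        print(f"diffoscope failed with exit code {result.returncode}")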

31 December 2023

Chris Lamb: Favourites of 2023

This post should have marked the beginning of my yearly roundups of the favourite books and movies I read and watched in 2023. However, due to coming down with a nasty bout of flu recently and other sundry commitments, I wasn't able to undertake writing the necessary four or five blog posts. In lieu of this, however, I will simply present my (unordered and unadorned) highlights for now. Do get in touch if this (or any of my previous posts) has spurred you into picking something up yourself.

Books

Peter Watts: Blindsight (2006)
Reyner Banham: Los Angeles: The Architecture of Four Ecologies (2006)
Joanne McNeil: Lurking: How a Person Became a User (2020)
J. L. Carr: A Month in the Country (1980)
Hilary Mantel: A Memoir of My Former Self: A Life in Writing (2023)
Adam Higginbotham: Midnight in Chernobyl (2019)
Tony Judt: Postwar: A History of Europe Since 1945 (2005)
Tony Judt: Reappraisals: Reflections on the Forgotten Twentieth Century (2008)
Peter Apps: Show Me the Bodies: How We Let Grenfell Happen (2021)
Joan Didion: Slouching Towards Bethlehem (1968)
Erik Larson: The Devil in the White City (2003)

Films

Recent releases

Unenjoyable experiences included Alejandro Gómez Monteverde's Sound of Freedom (2023), Alex Garland's Men (2022) and Steven Spielberg's The Fabelmans (2022).
Older releases

(Films released before 2022, and not including rewatches from previous years.) Distinctly unenjoyable watches included Ocean's Eleven (1960), El Topo (1970), Léolo (1992), Hotel Mumbai (2018), Bulworth (1998) and The Big Red One (1980).

27 December 2023

Russ Allbery: Review: A Study in Scarlet

Review: A Study in Scarlet, by Arthur Conan Doyle
Series: Sherlock Holmes #1
Publisher: AmazonClassics
Copyright: 1887
Printing: February 2018
ISBN: 1-5039-5525-7
Format: Kindle
Pages: 159
A Study in Scarlet is the short mystery novel (probably a novella, although I didn't count words) that introduced the world to Sherlock Holmes. I'm going to invoke the 100-year-rule and discuss the plot of this book rather freely on the grounds that even someone who (like me prior to a few days ago) has not yet read it is probably not that invested in avoiding all spoilers. If you do want to remain entirely unspoiled, exercise caution before reading on. I had somehow managed to avoid ever reading anything by Arthur Conan Doyle, not even a short story. I therefore couldn't be sure that some of the assertions I was making in my review of A Study in Honor were correct. Since A Study in Scarlet would be quick to read, I decided on a whim to do a bit of research and grab a free copy of the first Holmes novel. Holmes is such a part of English-speaking culture that I thought I had a pretty good idea of what to expect. This was largely true, but cultural osmosis had somehow not prepared me for the surprise Mormons. A Study in Scarlet establishes the basic parameters of a Holmes story: Dr. James Watson as narrator, the apartment he shares with Holmes at 221B Baker Street, the Baker Street Irregulars, Holmes's competition with police detectives, and his penchant for making leaps of logical deduction from subtle clues. The story opens with Watson meeting Holmes, agreeing to split the rent of a flat, and being baffled by the apparent randomness of Holmes's fields of study before Holmes reveals he's a consulting detective. The first case is a murder: a man is found dead in an abandoned house, without a mark on him although there are blood splatters on the walls and the word "RACHE" written in blood. Since my only prior exposure to Holmes was from cultural references and a few TV adaptations, there were a few things that surprised me. One is that Holmes is voluble and animated rather than aloof. Doyle is clearly going for passionate eccentric rather than calculating mastermind. Another is that he is intentionally and unabashedly ignorant on any topic not related to solving mysteries.
My surprise reached a climax, however, when I found incidentally that he was ignorant of the Copernican Theory and of the composition of the Solar System. That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to be to me such an extraordinary fact that I could hardly realize it. "You appear to be astonished," he said, smiling at my expression of surprise. "Now that I do know it I shall do my best to forget it." "To forget it!" "You see," he explained, "I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you chose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones."
This is directly contrary to my expectation that the best way to make leaps of deduction is to know something about a huge range of topics so that one can draw unexpected connections, particularly given the puzzle-box construction and odd details so beloved in classic mysteries. I'm now curious if Doyle stuck with this conception, and if there were any later mysteries that involved astronomy. Speaking of classic mysteries, A Study in Scarlet isn't quite one, although one can see the shape of the genre to come. Doyle does not "play fair" by the rules that have not yet been invented. Holmes at most points knows considerably more than the reader, including bits of evidence that are not described until Holmes describes them and research that Holmes does off-camera and only reveals when he wants to be dramatic. This is not the sort of story where the reader is encouraged to try to figure out the mystery before the detective. Rather, what Doyle seems to be aiming for, and what Watson attempts (unsuccessfully) as the reader surrogate, is slightly different: once Holmes makes one of his grand assertions, the reader is encouraged to guess what Holmes might have done to arrive at that conclusion. Doyle seems to want the reader to guess technique rather than outcome, while providing only vague clues in general descriptions of Holmes's behavior at a crime scene. The structure of this story is quite odd. The first part is roughly what you would expect: first-person narration from Watson, supposedly taken from his journals but not at all in the style of a journal and explicitly written for an audience. Part one concludes with Holmes capturing and dramatically announcing the name of the killer, who the reader has never heard of before. Part two then opens with... a western?
In the central portion of the great North American Continent there lies an arid and repulsive desert, which for many a long year served as a barrier against the advance of civilization. From the Sierra Nevada to Nebraska, and from the Yellowstone River in the north to the Colorado upon the south, is a region of desolation and silence. Nor is Nature always in one mood throughout the grim district. It comprises snow-capped and lofty mountains, and dark and gloomy valleys. There are swift-flowing rivers which dash through jagged cañons; and there are enormous plains, which in winter are white with snow, and in summer are grey with the saline alkali dust. They all preserve, however, the common characteristics of barrenness, inhospitality, and misery.
First, I have issues with the geography. That region contains some of the most beautiful areas on earth, and while a lot of that region is arid, describing it primarily as a repulsive desert is a bit much. Doyle's boundaries and distances are also confusing: the Yellowstone is a northeast-flowing river with its source in Wyoming, so the area between it and the Colorado does not extend to the Sierra Nevadas (or even to Utah), and it's not entirely clear to me that he realizes Nevada exists. This is probably what it's like for people who live anywhere else in the world when US authors write about their country. But second, there's no Holmes, no Watson, and not even the pretense of a transition from the detective novel that we were just reading. Doyle just launches into a random western with an omniscient narrator. It features a lean, grizzled man and an adorable child that he adopts and raises into a beautiful free spirit, who then falls in love with a wild gold-rush adventurer. This was written about 15 years before the first critically recognized western novel, so I can't blame Doyle for all the cliches here, but to a modern reader all of these characters are straight from central casting. Well, except for the villains, who are the Mormons. By that, I don't mean that the villains are Mormon. I mean Brigham Young is the on-page villain, plotting against the hero to force his adopted daughter into a Mormon harem (to use the word that Doyle uses repeatedly) and ruling Salt Lake City with an iron hand, border guards with passwords (?!), and secret police. This part of the book was wild. I was laughing out-loud at the sheer malevolent absurdity of the thirty-day countdown to marriage, which I doubt was the intended effect. We do eventually learn that this is the backstory of the murder, but we don't return to Watson and Holmes for multiple chapters. Which leads me to the other thing that surprised me: Doyle lays out this backstory, but then never has his characters comment directly on the morality of it, only the spectacle. Holmes cares only for the intellectual challenge (and for who gets credit), and Doyle sets things up so that the reader need not concern themselves with aftermath, punishment, or anything of that sort. I probably shouldn't have been surprised this does fit with the Holmes stereotype but I'm used to modern fiction where there is usually at least some effort to pass judgment on the events of the story. Doyle draws very clear villains, but is utterly silent on whether the murder is justified. Given its status in the history of literature, I'm not sorry to have read this book, but I didn't particularly enjoy it. It is very much of its time: everyone's moral character is linked directly to their physical appearance, and Doyle uses the occasional racial stereotype without a second thought. Prevailing writing styles have changed, so the prose feels long-winded and breathless. The rivalry between Holmes and the police detectives is tedious and annoying. I also find it hard to read novels from before the general absorption of techniques of emotional realism and interiority into all genres. The characters in A Study in Scarlet felt more like cartoon characters than fully-realized human beings. I have no strong opinion about the objective merits of this book in the context of its time other than to note that the sudden inserted western felt very weird. 
My understanding is that this is not considered one of the better Holmes stories, and Holmes gets some deeper characterization later on. Maybe I'll try another of Doyle's works someday, but for now my curiosity has been sated. Followed by The Sign of the Four. Rating: 4 out of 10

11 November 2023

Reproducible Builds: Reproducible Builds in October 2023

Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Reproducible Builds Summit 2023

Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different. During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere. Amazingly, the agenda and all notes from all sessions are already online. The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Reflections on Reflections on Trusting Trust

Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson's award-winning paper, Reflections on Trusting Trust:
[ ] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?
Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code which Russ requests and subsequently dissects in great but accessible detail.

Ecosystem factors of reproducible builds

Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen's University in Ontario, Canada have published a paper on the Time to fix, causes and correlation with external ecosystem factors of unreproducible builds. The authors compare various response times within the Debian and Arch Linux distributions including, for example:
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.

NixOS installation image reproducible

On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS have created an independent, bit-for-bit identical rebuilding of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.
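Bit-for-bit identical here means exactly that: a locally rebuilt image hashes to the same value as the published one. A minimal sketch of that check (the file paths are placeholders):

    # Check that a locally rebuilt image is bit-for-bit identical to the
    # published one (file paths are placeholders).
    import hashlib

    def sha256sum(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    official = sha256sum("nixos-minimal-official.iso")
    rebuilt = sha256sum("nixos-minimal-rebuilt.iso")
    print("identical" if official == rebuilt else "differs")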

CPython source tarballs now reproducible

Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible, which was merged by Łukasz Langa.
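The general recipe for reproducible tarballs is to normalise everything that is not actual file content: entry order, ownership, and timestamps (typically clamped to SOURCE_DATE_EPOCH), plus the timestamp that gzip itself embeds. The sketch below illustrates the idea in Python; it is not the actual CPython fix, just a minimal example of the normalisations involved, and the paths at the end are examples only.

    # Build a normalised .tar.gz: fixed entry order, no ownership, and
    # timestamps clamped to SOURCE_DATE_EPOCH, so identical inputs give
    # byte-identical output. Illustrative only, not the CPython change.
    import gzip
    import os
    import tarfile

    SOURCE_DATE_EPOCH = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))

    def normalise(member: tarfile.TarInfo) -> tarfile.TarInfo:
        member.mtime = min(member.mtime, SOURCE_DATE_EPOCH)
        member.uid = member.gid = 0
        member.uname = member.gname = ""
        return member

    def make_tarball(src_dir: str, out_path: str) -> None:
        # mtime=0 stops gzip from embedding the current time in its header.
        with gzip.GzipFile(out_path, mode="wb", mtime=0) as gz:
            with tarfile.open(fileobj=gz, mode="w") as tar:
                for root, dirs, files in os.walk(src_dir):
                    dirs.sort()  # make the walk order deterministic
                    for name in sorted(dirs + files):
                        path = os.path.join(root, name)
                        tar.add(path, arcname=os.path.relpath(path, src_dir),
                                recursive=False, filter=normalise)

    make_tarball("Python-3.12.0", "Python-3.12.0.tgz")  # example paths only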

New arm64 hardware from Codethink

Long-time sponsor of the project, Codethink, have generously replaced our old "Moonshot-Slides", which they have generously hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.

Community updates

On our mailing list during October 2023 there were a number of threads, including:
  • Vagrant Cascadian continued a thread about the implementation details of a snapshot archive server required for reproducing previous builds. [ ]
  • Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to an interesting talk they recently gave at DockerCon titled Reproducible builds with BuildKit for software supply-chain security.
  • Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. [ ]
Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links [ ][ ], adding a NixOS Flake file [ ] and sorting our publications page by date [ ]. Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.

Distribution work

distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues. Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
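The SOURCE_DATE_EPOCH convention mentioned above is simple: if the variable is set, a tool uses it instead of the current time, so that rebuilding later produces the same output. A minimal sketch of the pattern (not distro-info's actual code) looks like this:

    # Use SOURCE_DATE_EPOCH (a Unix timestamp) as "now" when it is set,
    # falling back to the real clock otherwise.
    import datetime
    import os

    def reference_date() -> datetime.date:
        epoch = os.environ.get("SOURCE_DATE_EPOCH")
        if epoch is not None:
            return datetime.datetime.fromtimestamp(
                int(epoch), tz=datetime.timezone.utc
            ).date()
        return datetime.date.today()

    print(reference_date())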

Software development

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches. In addition, Chris Lamb fixed an issue in diffoscope: if the equivalent of file -i returns text/plain, it now falls back to comparing the inputs as text files. This was originally filed as Debian bug #1053668 by Niels Thykier. [ ] This was then uploaded to Debian (and elsewhere) as version 251.

Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. [ ][ ][ ]
    • Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
    • Detect more cases of failures to delete schroots. [ ]
    • Document various bugs in bookworm which are (currently) being manually worked around. [ ]
  • Node-related changes:
    • Integrate the new arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]
    • Improve various node cleanup routines. [ ][ ][ ][ ]
    • General node maintenance. [ ][ ][ ][ ]
  • Monitoring-related changes:
    • Remove unused Munin monitoring plugins. [ ]
    • Complain less visibly about too many installed kernels. [ ]
  • Misc:
    • Enhance the firewall handling on Jenkins nodes. [ ][ ][ ][ ]
    • Install the fish shell everywhere. [ ]
In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

25 October 2023

Russ Allbery: Review: Going Infinite

Review: Going Infinite, by Michael Lewis
Publisher: W.W. Norton & Company
Copyright: 2023
ISBN: 1-324-07434-5
Format: Kindle
Pages: 255
My first reaction when I heard that Michael Lewis had been embedded with Sam Bankman-Fried working on a book when Bankman-Fried's cryptocurrency exchange FTX collapsed into bankruptcy after losing billions of dollars of customer deposits was "holy shit, why would you talk to Michael Lewis about your dodgy cryptocurrency company?" Followed immediately by "I have to read this book." This is that book. I wasn't sure how Lewis would approach this topic. His normal (although not exclusive) area of interest is financial systems and crises, and there is lots of room for multiple books about cryptocurrency fiascoes using someone like Bankman-Fried as a pivot. But Going Infinite is not like The Big Short or Lewis's other financial industry books. It's a nearly straight biography of Sam Bankman-Fried, with just enough context for the reader to follow his life. To understand what you're getting in Going Infinite, I think it's important to understand what sort of book Lewis likes to write. Lewis is not exactly a reporter, although he does explain complicated things for a mass audience. He's primarily a storyteller who collects people he finds fascinating. This book was therefore never going to be like, say, Carreyrou's Bad Blood or Isaac's Super Pumped. Lewis's interest is not in a forensic account of how FTX or Alameda Research were structured. His interest is in what makes Sam Bankman-Fried tick, what's going on inside his head. That's not a question Lewis directly answers, though. Instead, he shows you Bankman-Fried as Lewis saw him and was able to reconstruct from interviews and sources and lets you draw your own conclusions. Boy did I ever draw a lot of conclusions, most of which were highly unflattering. However, one conclusion I didn't draw, and had been dubious about even before reading this book, was that Sam Bankman-Fried was some sort of criminal mastermind who intentionally plotted to steal customer money. Lewis clearly doesn't believe this is the case, and with the caveat that my study of the evidence outside of this book has been spotty and intermittent, I think Lewis has the better of the argument. I am utterly fascinated by this, and I'm afraid this review is going to turn into a long summary of my take on the argument, so here's the capsule review before you get bored and wander off: This is a highly entertaining book written by an excellent storyteller. I am also inclined to believe most of it is true, but given that I'm not on the jury, I'm not that invested in whether Lewis is too credulous towards the explanations of the people involved. What I do know is that it's a fantastic yarn with characters who are too wild to put in fiction, and I thoroughly enjoyed it. There are a few things that everyone involved appears to agree on, and therefore I think we can take as settled. One is that Bankman-Fried, and most of the rest of FTX and Alameda Research, never clearly distinguished between customer money and all of the other money. It's not obvious that their home-grown accounting software (written entirely by one person! who never spoke to other people! in code that no one else could understand!) was even capable of clearly delineating between their piles of money. Another is that FTX and Alameda Research were thoroughly intermingled. There was no official reporting structure and possibly not even a coherent list of employees. The environment was so chaotic that lots of people, including Bankman-Fried, could have stolen millions of dollars without anyone noticing. 
But it was also so chaotic that they could, and did, literally misplace millions of dollars by accident, or because Bankman-Fried had problems with object permanence. Something that was previously less obvious from news coverage but that comes through very clearly in this book is that Bankman-Fried seriously struggled with normal interpersonal and societal interactions. We know from multiple sources that he was diagnosed with ADHD and depression (Lewis describes it specifically as anhedonia, the inability to feel pleasure). The ADHD in Lewis's account is quite severe and does not sound controlled, despite medication; for example, Bankman-Fried routinely played timed video games while he was having important meetings, forgot things the moment he stopped dealing with them, was constantly on his phone or seeking out some other distraction, and often stimmed (by bouncing his leg) to a degree that other people found it distracting. Perhaps more tellingly, Bankman-Fried repeatedly describes himself in diary entries and correspondence to other people (particularly Caroline Ellison, his employee and on-and-off secret girlfriend) as being devoid of empathy and unable to access his own emotions, which Lewis supports with stories from former co-workers. I'm very hesitant to diagnose someone via a book, but, at least in Lewis's account, Bankman-Fried nearly walks down the symptom list of antisocial personality disorder in his own description of himself to other people. (The one exception is around physical violence; there is nothing in this book or in any of the other reporting that I've seen to indicate that Bankman-Fried was violent or physically abusive.) One of the recurrent themes of this book is that Bankman-Fried never saw the point in following rules that didn't make sense to him or worrying about things he thought weren't important, and therefore simply didn't. By about a third of the way into this book, before FTX is even properly started, very little about its eventual downfall will seem that surprising. There was no way that Sam Bankman-Fried was going to be able to run a successful business over time. He was extremely good at probabilistic trading and spotting exploitable market inefficiencies, and extremely bad at essentially every other aspect of living in a society with other people, other than a hit-or-miss ability to charm that worked much better with large audiences than one-on-one. The real question was why anyone would ever entrust this man with millions of dollars or decide to work for him for longer than two weeks. The answer to those questions changes over the course of this story. Later on, it was timing. Sam Bankman-Fried took the techniques of high frequency trading he learned at Jane Street Capital and applied them to exploiting cryptocurrency markets at precisely the right time in the cryptocurrency bubble. There was far more money than sense, the most ruthless financial players were still too leery to get involved, and a rising tide was lifting all boats, even the ones that were piles of driftwood. When cryptocurrency inevitably collapsed, so did his businesses. In retrospect, that seems inevitable. The early answer, though, was effective altruism. A full discussion of effective altruism is beyond the scope of this review, although Lewis offers a decent introduction in the book. 
The short version is that a sensible and defensible desire to use stronger standards of evidence in evaluating charitable giving turned into a bizarre navel-gazing exercise in making up statistical risks to hypothetical future people and treating those made-up numbers as if they should be the bedrock of one's personal ethics. One of the people most responsible for this turn is an Oxford philosopher named Will MacAskill. Sam Bankman-Fried was already obsessed with utilitarianism, in part due to his parents' philosophical beliefs, and it was a presentation by Will MacAskill that converted him to the effective altruism variant of extreme utilitarianism. In Lewis's presentation, this was like joining a cult. The impression I came away with feels like something out of a science fiction novel: Bankman-Fried knew there was some serious gap in his thought processes where most people had empathy, was deeply troubled by this, and latched on to effective altruism as the ethical framework to plug into that hole. So much of effective altruism sounds like a con game that it's easy to think the participants are lying, but Lewis clearly believes Bankman-Fried is a true believer. He appeared to be sincerely trying to make money in order to use it to solve existential threats to society, he does not appear to be motivated by money apart from that goal, and he was following through (in bizarre and mostly ineffective ways). I find this particularly believable because effective altruism as a belief system seems designed to fit Bankman-Fried's personality and justify the things he wanted to do anyway. Effective altruism says that empathy is meaningless, emotion is meaningless, and ethical decisions should be made solely on the basis of expected value: how much return (usually in safety) does society get for your investment. Effective altruism says that all the things that Sam Bankman-Fried was bad at were useless and unimportant, so he could stop feeling bad about his apparent lack of normal human morality. The only thing that mattered was the thing that he was exceptionally good at: probabilistic reasoning under uncertainty. And, critically to the foundation of his business career, effective altruism gave him access to investors and a recruiting pool of employees, things he was entirely unsuited to acquiring the normal way. There's a ton more of this book that I haven't touched on, but this review is already quite long, so I'll leave you with one more point. I don't know how true Lewis's portrayal is in all the details. He took the approach of getting very close to most of the major players in this drama and largely believing what they said happened, supplemented by startling access to sources like Bankman-Fried's personal diary and Caroline Ellis's personal diary. (He also seems to have gotten extensive information from the personal psychiatrist of most of the people involved; I'm not sure if there's some reasonable explanation for this, but based solely on the material in this book, it seems to be a shocking breach of medical ethics.) But Lewis is a storyteller more than he's a reporter, and his bias is for telling a great story. It's entirely possible that the events related here are not entirely true, or are skewed in favor of making a better story. It's certainly true that they're not the complete story. But, that said, I think a book like this is a useful counterweight to the human tendency to believe in moral villains. 
This is, frustratingly, a counterweight extended almost exclusively to higher-class white people like Bankman-Fried. This is infuriating, but that doesn't make it wrong. It means we should extend that analysis to more people. Once FTX collapsed, a lot of people became very invested in the idea that Bankman-Fried was a straightforward embezzler. Either he intended from the start to steal everyone's money or, more likely, he started losing money, panicked, and stole customer money to cover the hole. Lots of people in history have done exactly that, and lots of people involved in cryptocurrency have tenuous attachments to ethics, so this is a believable story. But people are complicated, and there's also truth in the maxim that every villain is the hero of their own story. Lewis is after a less boring story than "the crook stole everyone's money," and that leads to some bias. But sometimes the less boring story is also true. Here's the thing: even if Sam Bankman-Fried never intended to take any money, he clearly did intend to mix customer money with Alameda Research funds. In Lewis's account, he never truly believed in them as separate things. He didn't care about following accounting or reporting rules; he thought they were boring nonsense that got in his way. There is obvious criminal intent here in any reading of the story, so I don't think Lewis's more complex story would let him escape prosecution. He refused to follow the rules, and as a result a lot of people lost a lot of money. I think it's a useful exercise to leave mental space for the possibility that he had far less obvious reasons for those actions than that he was a simple thief, while still enforcing the laws that he quite obviously violated. This book was great. If you like Lewis's style, this was some of the best entertainment I've read in a while. Highly recommended; if you are at all interested in this saga, I think this is a must-read. Rating: 9 out of 10

27 August 2023

Steve McIntyre: We're back!

It's August Bank Holiday Weekend and we're in Cambridge. It must be the Debian UK OMGWTFBBQ! We're about halfway through, and we've already polished off lots and lots of good food and beer. Lars is making pancakes as I write this. :-) We had an awesome game of Mao last night. People are having fun! Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

Gunnar Wolf: Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of unofficial, experimental Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael's work, I managed to eventually do so. By December, I started pushing some updates. Not only that: I didn't think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but by early 2019 I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net. In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish). Anyway, this year I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon. We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them. So, this is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself): we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel on OFTC IRC. Don't be afraid, and come ask. I hope giving this project in adoption will breathe new life into it!

4 August 2023

Shirish Agarwal: License Raj 2.0, 2023

About a week back, Jio launched a laptop called the JioBook that will be manufactured in China.
The most interesting thing is that the whole thing will be produced in Hunan, China. Then, 3 days later, India mandated a licensing requirement for Apple, Dell and other laptop/tablet manufacturers. And all of this in the guise of "Make in India". It is similar to how India has exempted Adani and the Tatas from buying as many solar cells as are needed and then selling the same in India. Reliance will basically be monopolizing the laptop business. And if people think that projects like Raspberry Pi, Arduino etc. will be exempted, they have another think coming.

History of License Raj

After India became free, in the 1980s the Congress wanted to open its markets to the world just like China did. But at that time the BJP, though small, via the Jan Sangh made the argument that we were not ready for the world; the Indian businessman needed a bit more time. And hence a compromise was made. The compromise was simple: Indian industry and people who wanted to get anything from the West needed a license. This was very much in line with how the Russian economy was evolving. All three nations, India, China and Russia, were on similar paths. China broke away when it opened up limited markets for competition and gave state support to its firms. Russia and Japan, on the other hand, kept their markets relatively closed. The same thing happened in India as happened in Russia and elsewhere: the businessman got what he wanted, he just corrupted the system. Reliance, the conglomerate, abused the same system as much as it could. Its defence was to be seen as the small guy. I won't go into that, as it would be a big story in itself. Whatever was sold in India was sold with huge commissions, and just like in Russia, scarcity became the order of the day. Monopolies flourished and competition was nowhere. This remained the case till 1991, when Prime Minister Manmohan Singh was forced to liberalize and open up the markets. Even at that time, the RSS through its Swadeshi Jagran Manch was sharing end-of-the-world prophecies for the Indian businessman.

2014 - Current Regime

In 2010, in the U.K., the Conservative party came to power under the leadership of David Cameron, who was influenced by the policies of Margaret Thatcher, who arguably ditched manufacturing in the UK. David Cameron and his party did the same from 2010 onwards, but for public services, under the name of austerity. India has been doing the same. Inequality has gone up while people's purchasing power has gone drastically down. CMIE figures are much more drastic, and education is a joke.
Add to that, since 2016 funding for scientists has gone to the dogs, and now they are even playing with doctors' careers. I do not have to remind people that a woman scientist took almost a quarter century to find a drug delivery system that others said was impossible. And she did it using public finance. Science is hard. I have already shared in a previous blog post how it took the Chinese 20 years to reach where they are, and somehow we think we will be able to match both China and Japan. Of the 160-odd countries on planet Earth, only a handful have both the means and the knowledge to use and expand on that. While I was not part of the Taiwan DebConf, I later came to know that even Taiwan in many ways is similar to Japan, in the sense that a majority of its population is stuck in low-paid jobs (apart from those employed at TSMC), which is similar to the Keiretsu or Chaebol from Japan or South Korea. In all these cases, only a small percentage of the economy is going forward while the rest is stagnating or even going backwards. Similar is the case in India as well. Unlike the Americans, who chose the path of more competition, we have chosen the path of more monopolies. So even though I very much liked Louis's project, sooner or later finding the devices itself would be hard. While the recent notification is for laptops, what stops them from doing the same with mobiles or even desktop systems? As it is, both smartphones and desktop systems have been contracting since last year as food inflation has gone up. Add to that, availability of products has been made scarce (whether by design or not, unknown). The end result: the latest processor launched overseas becomes the new thing here 3-4 years later. And that was before this notification. This will only decrease competition and make the Ambanis rich at the cost of everyone else. So much for "ease of doing business". Also, the backlash has been pretty much tepid. So what I shared will probably happen again sooner or later. The only interesting thing is that it's based on Android, probably in part due to the issues people are seeing in Windows 10, 11 and whatnot. Till later. Update: The Print tried a decluttering but instead cluttered the topic. While all that he shared was true, and certainly it is a step backwards, he didn't need to show how most Indians had to go to the RBI for the same. I remember my Mamaji doing the same and sharing afterwards that all he had was $100 for a day, which while being a big sum was paltry if you were staying in a hotel and were there for company business. He survived on bananas and whatever cheap veg he could find then. This is almost 35-40 odd years ago. As shared, the Govt. has been making missteps for quite some time now. The Print does try to take a balanced take so it doesn't run counter to the Government, but even it knows that this is a bad take. The whole thing about security is just laughable; did they wake up after 9 years? And now in its own wisdom it apparently has shifted the ban from now to 3 months later. Of course, most people on the right are just applauding without understanding the complexities and implications of the same. Vendors like Samsung and Apple, who have set up assembly operations, would think twice and shift to Taiwan, Vietnam, Mexico, anywhere. Global money follows global trends. And such missteps do not help.

Implications for A.I. products

One of the things that has not been thought about is how companies that are making A.I. products in India, or even MNCs, will suffer. Most of them right now are in stealth mode, but are built for Intel or AMD or ARM depending upon how it works for them. There is nothing to tell whether the companies made their plea and whether it was heard or unheard. If the Government doesn't revert it, then sooner or later they would either have to go abroad or cash out/sell to somebody else. Some people on the right also know this, but for whatever reason have chosen to remain silent. Till later

26 July 2023

Shirish Agarwal: Manipur Violence, Drugs, Binging on Northshore, Alaska Daily, Doogie Kamealoha and EU Digital Resilience Act.

Manipur Videos Warning: The text might be mature and will have references to violence so if there are kids or you are sensitive, please excuse. Few days back, saw the videos and I cannot share the rage, shame and many conflicting emotions that were going through me. I almost didn t want to share but couldn t stop myself. The woman in the video were being palmed, fingered, nude, later reportedly raped and murdered. And there have been more than a few cases. The next day saw another video that showed beheaded heads, and Kukis being killed just next to their houses. I couldn t imagine what those people must be feeling as the CM has been making partisan statements against them. One of the husbands of the Kuki women who had been paraded, fondled is an Army Officer in the Indian Army. The Meiteis even tried to burn his home but the Army intervened and didn t let it get burnt. The CM s own statement as shared before tells his inability to bring the situation out of crisis. In fact, his statement was dumb stating that the Internet shutdown was because there were more than 100 such cases. And it s spreading to the nearby Northeast regions. Now Mizoram, the nearest neighbor is going through similar things where the Meitis are not dominant. The Mizos have told the Meitis to get out. To date, the PM has chosen not to visit Manipur. He just made a small 1 minute statement about it saying how the women have shamed India, an approximation of what he said.While it s actually not the women but the men who have shamed India. The Wire has been talking to both the Meitis, the Kukis, the Nagas. A Kuki women sort of bared all. She is right on many counts. The GOI while wanting to paint the Kukis in a negative light have forgotten what has been happening in its own state, especially its own youth as well as in other states while also ignoring the larger geopolitics and business around it. Taliban has been cracking as even they couldn t see young boys, women becoming drug users. I had read somewhere that 1 in 4 or 1 in 5 young person in Afghanistan is now in its grip. So no wonder,the Taliban is trying to eradicate and shutdown drug use among it s own youth. Circling back to Manipur, I was under the wrong impression that the Internet shutdown is now over. After those videos became viral as well as the others I mentioned, again the orders have been given and there is shutdown. It is not fully shut but now only Govt. offices have it. so nobody can share a video that goes against any State or Central Govt. narrative  A real sad state of affairs  Update: There is conditional reopening whatever that means  When I saw the videos, the first thing is I felt was being powerless, powerless to do anything about it. The second was if I do not write about it, amplify it and don t let others know about it then what s the use of being able to blog

Mental Health, Binging on various Webseries Both the videos shocked me and I couldn't sleep that night or the night after. Even after getting through the day's work, they would come back unobtrusively in my nightmares. While I felt a bit foolish, I thought it would be nice to binge on some webseries. Little was I to know that both Northshore and Alaska Daily would have stories similar to what is happening here. While the story in Alaska Daily is fictional, it very closely resembles a real newspaper, the Anchorage Daily News. Even there the story centres on Inuit women, one of the marginalized communities in Alaska. The only difference I can see between the GOI and the Alaskan Government is that the Alaskan Government was much more subtle in doing the same things. There are some other differences though. First, the State there is and was responsive to the local press, and apart from one close call for one of its reporters, most reporters do not have to fear for their lives. Here, the press cannot protect either their livelihood or their life. It was a juvenile who actually shot the Manipur video, uploaded it and made it viral. One only needs to remember the case of Siddique Kappan: just for sharing the news and the video he was arrested. Bail was denied to him time and time again on the grounds that the Police were 'investigating'. Only after 2 years and 3 months did he get bail, and that too because the Police were unable to show prima facie evidence for any of their charges. One of the better interviews, though, was of Vrinda Grover. For those who don't know her, her Wikipedia page tells a bit about her, although it is woefully incomplete. For example, most recently she relentlessly pursued the unconstitutional Internet shutdown that lasted 5 months in Kashmir. Just like in Manipur, the shutdown was there to bury crimes either committed or facilitated by the State. For the issue of livelihood, one can take the cases of Bipin Yadav and Rashid Hussain. Both were fired by their employer Dainik Bhaskar because they asked the BJP MP Smriti Irani what she had done for the state. The problem for Dainik Bhaskar, or for any other mainstream media, is that most of them rely on Government advertisements. Private investment in India has fallen to record lows, mostly due to the policies made by the Centre. If any entity or sector grows a bit, then either Adani or Ambani will one way or the other take it over. So for most first- and second-generation entrepreneurs it doesn't make sense to grow and then finally sell to one of these corporates at a loss, with the GOI on Adani and Ambani's side of any deal. The MSME sector, which is and used to be the second-highest employer, hasn't been able to recover from the shocks of demonetization, GST and then the pandemic, each resulting in more and more closures and shutdowns. Joblessness has gone up tremendously, especially in North India, which the Government tries to deny. The most telling point in all the above examples is that within a month or less, whatever the media reports gets scrubbed. Even the firing of the journos, which was covered by some of the mainstream media, isn't there anymore; I have to use secondary sources instead of primary sources. One can imagine the chilling effect on reportage because of the above. The sad fact is that even with all the money in the world, the PM is unable to come to Parliament to face questions.
Why is the PM not answering in Parliament, even Rahul Gandhi is not there - Surya Pratap Singh, prev. IAS Officer.
The above poster/question is by Surya Pratap Singh, a retired IAS officer. He asks why the PM is unable to answer in either of the houses. As shared before, the Govt. wants very limited discussion. Even yesterday, Lok Sabha TV showed only the BJP MPs making statements, while the mic was off or silent during whatever questions or statements the opposition made. If this isn't a mockery of Indian democracy then I don't know what is. Even the media landscape has been altered substantially within the last few years. Adani and Ambani have divided the media pie between themselves. One of the last bastions of the free press, NDTV, was bought by Adani in a hostile takeover. Both Ambani and Adani are close to this Government; in fact, there is no sector in which one or the other is not present. Media houses like Newsclick, The Wire etc., which are a fraction of the size of the mainstream press, are where most of the youth have been going to get their news, as they are not partisan. Although even there, the GOI has time and again interfered. The Wire has had too many 504 Gateway Timeouts in recent months, and they have been forced to move most of their journalism from text to video, mostly on YouTube, in order to escape both the censoring and the timeouts shared above. How both organizations somehow manage to survive in such a hostile environment is a miracle. Most local reportage is also going to YouTube, as that's the best way for them to stay out of the way of Govt. censors. Not an ideal situation, but that's the way it is. The difference between Indian and Israeli media can be seen through this:
The above is a screenshot showing how the Israeli media has reacted to the Israeli Government's judicial overhaul in the Knesset. Here, by contrast, the press erodes its own standing by giving in to the Government day and night.

Binging on Webseries Saw Northshore, Three Pines, Alaska Daily and Doogie Kamealoha M.D., which is based on Doogie Howser M.D. Of the four, I enjoyed Doogie Kamealoha M.D. the most, but then that might be because it's a copy of Doogie Howser, just updated for the new millennium, and there are some good childhood memories associated with that series. The others are also good. I tried not to watch European stuff, as most of it is twisted and I didn't want to be in that headspace.

EU Digital Operational Resilience Act and impact on FOSS A few days ago, the EU apparently shared the above Act; one can read more about it here. This could have quite an impact on FOSS, as most development of various FOSS distributions happens in the EU; a fair bit of Debian's own development happens in Germany and France. There have been calls to make things clearer, especially for FOSS, given that most developers do FOSS development either on the side or as a hobby, while their day job is and would be something different. The part about consumer electronics and FOSS is a tricky one, as updates can screw up your systems. Microsoft has a huge history of devices not working after an update or upgrade, and this is not limited to Windows, as they would like to believe. Even Apple seems to have its share of issues time and time again. One would have hoped that these companies, which make billions of dollars from their hardware and software sales, would be doing more testing and QA and be more aware of security issues. FOSS, on the other hand, while being more responsive, doesn't make as much money as its competitors. Let's take the most concrete example. The most successful maker of a FOSS mobile phone is Purism, but its phone has priced itself out of the market. A huge part of that is down to economies of scale and to trying to build infrastructure and skills in the States where little or none exists. Compare that to, say, the PinePhone Pro, which is manufactured in Hong Kong and priced at a third of that. For most people it is simply not affordable in these times. Add to that the complexity of modern cellphones, which makes it harder, not easier, for most people to be vigilant and keep their phone updated at all times. Maybe we need more dumbphones, such as the Light Phone and Punkt, but whether those can be remotely hacked or not, there doesn't seem to be any answer; I haven't even seen anybody ask those questions. They may have their own chicken-and-egg issues. For people like me who have lost hearing, while I can navigate smartphones for now, as I grow older I don't see anything that would help me. For much of the elderly population, both hearing and sight are the first to fade. There don't seem to be any solutions targeted at them, even though they are 5-10% of any population at the very least, probably more so in Europe and the U.S. as well as Japan and China. All of them are clearly under-served markets, but I don't know of a solution for them. At least to me, that's an open question.
