Search Results: "cyb"

6 June 2022

Reproducible Builds: Reproducible Builds in May 2022

Welcome to the May 2022 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

RepFix paper Zhilei Ren, Shiwei Sun, Jifeng Xuan, Xiaochen Li, Zhide Zhou and He Jiang have published an academic paper titled Automated Patching for Unreproducible Builds:
[..] fixing unreproducible build issues poses a set of challenges [..], among which we consider the localization granularity and the historical knowledge utilization as the most significant ones. To tackle these challenges, we propose a novel approach [called] RepFix that combines tracing-based fine-grained localization with history-based patch generation mechanisms.
The paper (PDF, 3.5MB) uses the Debian mylvmbackup package as an example to show how RepFix can automatically generate patches to make software build reproducibly. As it happens, Reiner Herrmann submitted a patch for the mylvmbackup package which has remained unapplied by the Debian package maintainer for over seven years, thus this paper inadvertently underscores that achieving reproducible builds will require both technical and social solutions.

Python variables Johannes Schauer discovered a fascinating bug where simply naming your Python variable _m led to unreproducible .pyc files. In particular, the types module in Python 3.10 requires the following patch to make it reproducible:
--- a/Lib/types.py
+++ b/Lib/types.py
@@ -37,8 +37,8 @@ _ag = _ag()
 AsyncGeneratorType = type(_ag)
 
 class _C:
-    def _m(self): pass
-MethodType = type(_C()._m)
+    def _b(self): pass
+MethodType = type(_C()._b)
Simply renaming the dummy method from _m to _b was enough to work around the problem. Johannes' bug report first led to a number of improvements in diffoscope to aid in dissecting .pyc files, but upstream identified the root cause as an issue surrounding interned strings, which is being tracked in CPython bug #78274.
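
A minimal sketch of how such a problem can be detected (this is illustrative, not the tooling used in the bug report, and the file names are assumptions): compile the same source file twice in fresh interpreters and compare the resulting .pyc files byte-for-byte.

import hashlib
import subprocess
import sys

def compile_pyc(source, dest):
    # Use a fresh interpreter per compilation so in-process state
    # (e.g. interned strings) cannot carry over between the two runs.
    # CHECKED_HASH embeds a source hash instead of a timestamp, removing
    # one common source of .pyc differences.
    subprocess.run(
        [sys.executable, "-c",
         "import py_compile, sys; "
         "py_compile.compile(sys.argv[1], sys.argv[2], "
         "invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH)",
         source, dest],
        check=True)
    with open(dest, "rb") as f:
        return f.read()

a = compile_pyc("types_patched.py", "a.pyc")
b = compile_pyc("types_patched.py", "b.pyc")
if a == b:
    print("bit-for-bit identical")
else:
    print("differs:", hashlib.sha256(a).hexdigest(), "vs", hashlib.sha256(b).hexdigest())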

New SPDX team to incorporate build metadata in Software Bill of Materials SPDX, the open standard for Software Bill of Materials (SBOM), is continuously developed by a number of teams and committees. SPDX has now welcomed a new addition: a team dedicated to enhancing metadata about software builds, complementing reproducible builds in creating a more secure software supply chain. The SPDX Builds Team has been working throughout May to define the universal primitives shared by all build systems, including the who, what, where and how of builds:
  • Who: the identity of the person or organisation that controls the build infrastructure.
  • What: the inputs and outputs of a given build, combining metadata about the build's configuration with an SBOM describing source code and dependencies.
  • Where: the software packages making up the build system, from build orchestration tools such as Woodpecker CI and Tekton to language-specific tools.
  • How: the invocation of a build, linking metadata of a build to the identity of the person or automation tool that initiated it.
The SPDX Builds Team expects to have a usable data model by September, ready for inclusion in the SPDX 3.0 standard. The team welcomes new contributors, inviting those interested in joining to introduce themselves on the SPDX-Tech mailing list.
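
To make the four primitives concrete, here is a purely hypothetical sketch of such a record as a plain Python dict; the field names are invented for illustration and are not the actual SPDX 3.0 Builds profile, which was still being defined at the time:

build_record = {
    # Who: the identity controlling the build infrastructure
    "who": {"build_operator": "builds@example.org"},
    # What: inputs/outputs, the build configuration, and an SBOM
    "what": {"inputs": ["foo-1.0.tar.gz"],
             "outputs": ["foo_1.0_amd64.deb"],
             "sbom": "foo-1.0.spdx.json"},
    # Where: the software packages making up the build system itself
    "where": {"orchestrator": "Tekton", "toolchain": ["gcc-12", "make"]},
    # How: the invocation, and who or what initiated it
    "how": {"invocation": "make release", "initiated_by": "ci-robot"},
}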

Talks at Debian Reunion Hamburg Some of the Reproducible Builds team (Holger Levsen, Mattia Rizzolo, Roland Clobus, Philip Rinn, etc.) met in real life at the Debian Reunion Hamburg (official homepage). There were several informal discussions amongst them, as well as two talks related to reproducible builds. First, Holger Levsen gave a talk on the status of Reproducible Builds for bullseye and bookworm and beyond (WebM, 210MB). Secondly, Roland Clobus gave a talk called Reproducible builds as applied to non-compiler output (WebM, 115MB).

Supply-chain security attacks This was another bumper month for supply-chain attacks in package repositories. Early in the month, Lance R. Vick noticed that the maintainer of the NPM foreach package had let their personal email domain expire. Vick bought the domain and, through it, now controls foreach on NPM and the 36,826 projects that depend on it. Shortly afterwards, Drew DeVault published a related blog post titled When will we learn? that offers a brief timeline of major incidents in this area and, not uncontroversially, suggests that the correct way to ship packages is "with your distribution's package manager".

Bootstrapping Bootstrapping is a process for building software tools progressively from a primitive compiler tool and source language up to a full Linux development environment with GCC, etc. This is important given the amount of trust we put in existing compiler binaries. This month, a bootstrappable mini-kernel was announced. Called boot2now, it comprises a series of compilers in the form of bootable machine images.

Google's new Assured Open Source Software service Google Cloud (the division responsible for the Google Compute Engine) announced a new Assured Open Source Software service. Noting a considerable 650% year-over-year increase in cyberattacks aimed at open source suppliers, the new service claims to enable enterprise and public sector users of open source software "to easily incorporate the same OSS packages that Google uses into their own developer workflows". The announcement goes on to enumerate that packages curated by the new service would be:
  • Regularly scanned, analyzed, and fuzz-tested for vulnerabilities.
  • Enriched with corresponding metadata incorporating Container/Artifact Analysis data.
  • Built with Cloud Build, including evidence of verifiable SLSA compliance.
  • Verifiably signed by Google.
  • Distributed from an Artifact Registry secured and protected by Google.
(Full announcement)

A retrospective on the Rust programming language Andrew "bunnie" Huang published a long blog post this month promising a critical retrospective on the Rust programming language. Amongst many acute observations about the evolution of the language's syntax (etc.), the post goes on to critique the language's approach to supply chain security ("Rust Has A Limited View of Supply Chain Security") and reproducibility ("You Can't Reproduce Someone Else's Rust Build"):
There's some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say "almost anyone else" because this fix would be OS-dependent, so we'd be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.
(Full post)
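
As an aside, one mechanism often used for this class of path problem (an assumption on our part, not something the post itself discusses) is rustc's --remap-path-prefix flag, which rewrites the embedded source paths at compile time instead of normalising them with a chroot or virtual machine. A minimal sketch:

import os
import subprocess

# Rewrite the absolute checkout path to a fixed "/build" prefix in the
# emitted binary, so builds from different directories can still match.
env = dict(os.environ)
env["RUSTFLAGS"] = "--remap-path-prefix=" + os.getcwd() + "=/build"
subprocess.run(["cargo", "build", "--release"], env=env, check=True)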

Reproducible Builds IRC meeting The minutes and logs from our May 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 28th June at 15:00 UTC on #reproducible-builds on the OFTC network.

A new tool to improve supply-chain security in Arch Linux kpcyrd published yet another interesting tool related to reproducibility. Writing about the tool in a recent blog post, kpcyrd mentions that although many PKGBUILDs provide authentication in the context of signed Git tags (i.e. "the ability to verify the Git tag was signed by one of the two trusted keys"), they do not support pinning, i.e. "upstream could create a new signed Git tag with an identical name, and arbitrarily change the source code without the [maintainer] noticing". Conversely, other PKGBUILDs support pinning but not authentication. The new tool, auth-tarball-from-git, fixes both problems, as neatly outlined in kpcyrd's original blog post.
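
To make the distinction concrete, the two properties can be sketched with plain git commands (this is not auth-tarball-from-git's actual interface; the tag name and pinned hash are placeholders):

import subprocess

TAG = "v1.2.3"                                        # placeholder tag name
PINNED = "0123456789abcdef0123456789abcdef01234567"   # hash recorded at packaging time

# Authentication: the tag carries a valid signature from a trusted key.
subprocess.run(["git", "verify-tag", TAG], check=True)

# Pinning: the tag still points at the exact commit recorded earlier, so
# upstream cannot silently re-create the tag with different contents.
commit = subprocess.run(["git", "rev-parse", TAG + "^{commit}"],
                        capture_output=True, text=True, check=True).stdout.strip()
if commit != PINNED:
    raise SystemExit("tag has been moved since it was pinned!")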

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 212, 213 and 214 to Debian unstable. Chris also made the following changes:
  • New features:
    • Add support for extracting vmlinuz Linux kernel images. [ ]
    • Support both python-argcomplete 1.x and 2.x. [ ]
    • Strip "sticky" etc. from "x.deb: sticky Debian binary package […]". [ ]
    • Integrate test coverage with GitLab's concept of artifacts. [ ][ ][ ]
  • Bug fixes:
    • Don't mask differences in .zip or .jar central directory extra fields. [ ]
    • Don't show a binary comparison of .zip or .jar files if we have observed at least one nested difference. [ ]
  • Codebase improvements:
    • Substantially update comment for our calls to zipinfo and zipinfo -v. [ ]
    • Use assert_diff in test_zip over calling get_data with a separate assert. [ ]
    • Don't call re.compile and then call .sub on the result; just call re.sub directly (see the short example after this list). [ ]
    • Clarify the comment around the difference between --usage and --help. [ ]
  • Testsuite improvements:
    • Test --help and --usage. [ ]
    • Test that --help includes the file formats. [ ]
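
On the re.compile change above: for a pattern that is used only once, the two forms below are equivalent, and the second is simpler (an illustrative example, not diffoscope's actual code):

import re

text = "a   b \t c"
# Before: compile a pattern object only to use it a single time.
collapsed = re.compile(r"\s+").sub(" ", text)
# After: let re.sub compile (and cache) the pattern internally.
collapsed = re.sub(r"\s+", " ", text)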
Vagrant Cascadian added an external tool reference xb-tool for GNU Guix [ ] as well as updated the diffoscope package in GNU Guix itself [ ][ ][ ].

Distribution work In Debian, 41 reviews of Debian packages were added, 85 were updated and 13 were removed this month, adding to our knowledge about identified issues. A number of issue types have been updated, including the addition of a new nondeterministic_ordering_in_deprecated_items_collected_by_doxygen toolchain issue [ ] as well as ones for mono_mastersummary_xml_files_inherit_filesystem_ordering [ ], extended_attributes_in_jar_file_created_without_manifest [ ] and apxs_captures_build_path [ ]. Vagrant Cascadian performed a rough check of the reproducibility of core package sets in GNU Guix, and in openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducible builds website Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but also prepared and published an interview with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix. [ ][ ][ ][ ] In addition, Tim Jones added a link to the Talos Linux project [ ] and billchenchina fixed a dead link [ ].

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Add support for detecting running kernels that require attention. [ ]
    • Temporarily configure a host to support performing Debian builds for packages that lack .buildinfo files. [ ]
    • Update generated webpages to clarify wishes for feedback. [ ]
    • Update copyright years on various scripts. [ ]
  • Mattia Rizzolo:
    • Provide a facility so that Debian Live image generation can copy a file remotely. [ ][ ][ ][ ]
  • Roland Clobus:
    • Add initial support for testing generated images with OpenQA. [ ]
And finally, as usual, node maintenance was also performed by Holger Levsen [ ][ ].

Misc news On our mailing list this month:

Contact If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:

22 May 2022

Ulrike Uhlig: How do kids conceive the internet? - part 3

I received some feedback on the first part of interviews about the internet with children that I'd like to share publicly here. Thank you! Your thoughts and experiences are important to me! In the first interview round there was this French girl.
Asked what she would change if she could, the 9-year-old girl advocated for a global usage limit on the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones, and people should rather spend more time with their children.
To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it. Another person reacted to the same bit of text:
That's just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them. Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it's frightening.
Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:
It reminds me of one time about a dozen years ago, when i was still working closely with one of the city high schools, where i'd just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school's ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again. That afternoon, i'd been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they'd just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!) The kids all knew me from around the school, but the teacher introduced my appearance in class as one of the most Internet-connected people, and they wanted to ask me about whether i really thought the internet would do this kind of thing to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher's lesson plans, i think, but we had a good discussion. At one point, one of the kids asked me if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone? all of my labor, even that grueling week, was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn't occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn't occur to him that this shit all works to the extent that it does because people make it go. I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else, that our contributions to it, and our labor to perpetuate it (or not), might actually help shape the future that the network helps us slide into. but he didn't even see that human decisions or labor played a role in it at all, let alone a potentially directive role. We were really starting at square zero, which wasn't his fault. Or the fault of his classmates, for that matter. but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more. but even the teacher clearly thought of the internet as a thing being done to us, not as something we might actually drive one way or another. And she's not even wrong: most people don't have much control, just like most people can't control the weather, even as our weather changes based on aggregate human activity.
I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:
that whole story happened before we started talking about "the cloud", but the cloud really reinforces this idea, i think. not that anyone actually thinks that the cloud is a literal cloud, but language shapes minds in subtle ways.
(Bold emphasis in the texts is mine.) Thanks :) I'm happy and touched that these interviews prompted your wonderful reactions, and I hope that there'll be more to come on this topic. I'm working on it!

31 March 2022

Russell Coker: AMT/MEBX on Debian

I've just been playing with Intel's Active Management Technology (AMT) [1], which is also known as Management Engine BIOS Extension (MEBX). Firstly a disclaimer: using this sort of technology gives remote access to your system at a level that allows, in some ways, overriding the OS. If this gets broken then you have big problems. Also, all the code that matters is non-free. Please don't comment on this post saying that AMT is bad; take it as known that it has issues and that people are forced to use it anyway. I tested this out on an HP Z420 workstation. The first thing is to enable AMT via "Intel MEBX"; the default password is "admin". On first use you are compelled to set a new password, which must be 8+ characters containing upper and lower case, number, and punctuation characters. The Debian package amtterm (which needs the package libsoap-lite-perl) has basic utilities for AMT. The amttool program connects to TCP port 16992 and the amtterm program connects to TCP port 16994. Note that these programs seem a little rough; you can get Perl errors (as opposed to deliberate help messages) if you enter bad command-line parameters. They basically work but could do with some improvement. If you use DHCP for the IP address, the DHCP hostname will be DESKTOP-$AssetID, and you can find the IP address by requesting an alert be sent to the sysadmin. Here are some examples of amttool usage:
# get AMT info
AMT_PASSWORD="$PASS" amttool $IP
# reset the system and redirect BIOS messages to serial over lan
AMT_PASSWORD="$PASS" amttool reset bios
# access serial over lan console
amtterm -p "$PASS" $IP
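
A quick way to confirm which AMT services a machine exposes is to probe the two TCP ports directly. A minimal sketch in Python (the host address is a placeholder):

import socket

HOST = "192.0.2.1"  # placeholder: your AMT host
for port, service in ((16992, "SOAP, used by amttool"),
                      (16994, "serial-over-LAN, used by amtterm")):
    with socket.socket() as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed"
        print("port %d (%s): %s" % (port, service, state))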
The following APT configuration enables installing the Ubuntu package wsmancli, which had some features not in any Debian package last time I checked.
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
This Cyberciti article has information on accessing KVM over AMT [2]; I haven't tried to do that yet.

Russell Coker: Links March 2022

Anarcat wrote a great blog post about switching from OpenNTPD to Chrony, which gives a good overview of how NTP works and how accurate the different versions are [1].

Bleeping Computer has an amusing article about criminals who copied a lot of data from NVidia servers, including specs of their latest products [2]; they are threatening to release all the data if NVidia doesn't stop crippling their GPUs to make them unsuitable for cryptocurrency mining. I don't support these criminals, but I think NVidia should allow people who buy hardware to use their property as they choose. If cryptocurrency miners buy all the NVidia products then NVidia still makes the sales; they could even auction them to make more money.

NPR has a disturbing article about the way execution by lethal injection works in the US [3]. It seems that most people die in an extremely unpleasant way. It makes the North Korean execution by anti-aircraft gun seem civilised.

The DirtyPipe vulnerability is the latest serious security issue in the Linux kernel [4]. The report of how it was discovered is very interesting and should be read by all sysadmins. SE Linux will not save you from this, as the vulnerability allows writing to read-only files like /etc/passwd.

Politico has an insightful analysis of Putin, and it's not good news: he wants to conquer all territory that had ever been part of a Russian empire at any time in history [5].

The Guardian has an informative article about the EU's attempts to debunk Russian propaganda about Covid19 [6]. Fortunately the sanctions are reducing Russia's ability to do such things now.

The Guardian has an interesting article about a project to use literary analysis to predict wars [7]. It was funded by the German military, but funding was cut after it was proven to work.

The Fact Act is a proposal by David Brin for political changes in the US to involve scientists and statisticians in an official advisory role in the legislative process [8]; it's an idea with a lot of potential.

Technology Review has an interesting interview with the leader of the NSA's Research Directorate [9].

In 2008 the EFF posted a long and informative article about the RIAA's war against music fans [10]. I had followed a lot of the news about this when it was happening, but I still learnt some things from this article that I hadn't known at the time. Also, considering past legal battles in the context of the current situation is useful. As an aside, all the music I want to listen to is now on YouTube, and youtube-dl works really well for me.

The 1952 edition of Psychiatry: Journal of Interpersonal Relations has an interesting article "On Cooling the Mark Out" [11], which starts with how criminal gangs engaged in fraud try to make their victims come to terms with the loss in a way that doesn't involve the police, but goes on to cover ways of dealing with loss of status in general. The layout is hacky, with words broken by hyphens in the middle of lines, as it appears to have been scanned from paper, converted to MS-Word, and from there to PDF. But it's worth it.

The Internet Heist by Cory Doctorow is an insightful series of 3 articles about the MPAA (MAFIAA) attempts to take over all TV distribution in the US [12].

Wired has an interesting excerpt from the book "Spies, Lies, and Algorithms: The History and Future of American Intelligence" by Amy B. Zegart [13]. It's an interesting summary of open source intelligence systems (which have nothing to do with open source as in free software). But it would be interesting to have an open source intelligence organisation along similar lines to open source software. The guy who tracks billionaires' private jets is an example of this.

3 January 2022

Thorsten Alteholz: My Debian Activities in December 2021

FTP master This month I accepted 412 and rejected 44 packages. The overall number of packages that got accepted was 423.

Debian LTS This was my ninetieth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 40h. During that time I did LTS and normal security uploads of: I also started to work on libarchive. Further, I worked on packages in NEW on security-master. In order to process such packages faster, I added a notification for when work arrives there. Last but not least I did some days of frontdesk duties.

Debian ELTS This month was the forty-second ELTS month. During my allocated time I uploaded: Last but not least I did some days of frontdesk duties.

Debian Astro Related to my previous article about fun with telescopes, I uploaded new versions or did source uploads for: Besides the indi-stuff I also uploaded

Other stuff I celebrated Christmas :-).

29 December 2021

Thorsten Alteholz: Fun with Telescopes

Recently I purchased a small telescope to look at solar spots. When choosing a mount, I checked whether it could be controlled with OSS. In this context INDI is mentioned everywhere, and my desired mount was supported. indi and kstars are already part of Debian, so apt install, select my mount, ... oh wait, the menu shows it, but I cannot select it. That was when I learned about the difference between indi and indi-3rdparty. The indi package just contains a few drivers; others are available from a different repository. This split has been done either due to different release cycles of the drivers, a different OSS license, or just due to binary blobs without source being part of some drivers. Fine, there are already packages of the 3rdparty-repository available from an Ubuntu PPA, so it should be no problem to add them to Debian as well. Some manufacturers freely distribute at least the specification of their API so that others are able to write the corresponding software. Some manufacturers even write their own driver. Examples are: A minor part actually does not have binary blobs but distributes the sources of their software. Unfortunately they have licenses that are not compatible with the DFSG, and those packages still need to go to non-free. Examples are: But there also seem to exist lots of manufacturers of astronomical accessories, especially cameras, that just distribute some binary blobs to talk to their hardware. This is sad, but at the moment it is just the way it is, and such packages need to go to non-free. Luckily their blobs are accompanied by corresponding licenses. At least those manufacturers understand how software licenses work, and packaging their SDK is straightforward. Examples are: However, when looking at the license information of some Ubuntu packages, several of them were distributed under a CC license. This is not a common license for software, so I wanted to get confirmation whether this information is correct. Unfortunately most of such manufacturers don't want to disclose their licenses. For whatever reason they distribute their tarballs without any hint, and emails to their support channels are just ignored. Examples of such bad behaviour are: However, the best answer comes from the Levenhuk support. My question about the license of their SDK was answered with:
I am afraid we cannot disclose any further information except the software file that is available on our website.
So strictly speaking nobody is allowed to use their software. I wonder whether such competence also becomes visible in their products. I will never really know, as there are more than enough OSS-friendly manufacturers available. Anyway, most of the indi-3rdparty drivers are now available, and I got lots of suggestions about hardware I need to buy in the future :-).

23 October 2021

Antoine Beaupré: The Neo-Colonial Internet

I grew up with the Internet, and its ethics and politics have always been important in my life. But I have also been involved at other levels: against police brutality, for Food Not Bombs, worker autonomy, software freedom, etc. For a long time, that all seemed coherent. But the more I look at the modern Internet -- and the mega-corporations that control it -- the less confidence I have in my original political analysis of the liberating potential of technology. I have come to believe that most of our technological development is harmful to the large majority of the population of the planet, and of course the rest of the biosphere. And now I feel this is not a new problem. This is because the Internet is a neo-colonial device, and has been from the start. Let me explain.

What is Neo-Colonialism? The term "neo-colonialism" was coined by Kwame Nkrumah, first president of Ghana. In Neo-Colonialism, the Last Stage of Imperialism (1965), he wrote:
In place of colonialism, as the main instrument of imperialism, we have today neo-colonialism ... [which] like colonialism, is an attempt to export the social conflicts of the capitalist countries. ... The result of neo-colonialism is that foreign capital is used for the exploitation rather than for the development of the less developed parts of the world. Investment, under neo-colonialism, increases, rather than decreases, the gap between the rich and the poor countries of the world.
So basically, if colonialism is Europeans bringing genocide, war, and their religion to Africa, Asia, and the Americas, neo-colonialism is the Americans (note the "n") bringing capitalism to the world. Before we see how this applies to the Internet, we must therefore make a detour into US history. This matters, because anyone would be hard-pressed to decouple neo-colonialism from the empire under which it evolves, and here we can only name the United States of America.

US Declaration of Independence Let's start with the United States declaration of independence (1776). Many Americans may roll their eyes at this, possibly because that declaration is not actually part of the US constitution and therefore may have questionable legal standing. Still, it was obviously a driving philosophical force in the founding of the nation. As its author, Thomas Jefferson, stated:
it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion
In that aging document, we find the following pearl:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
As a founding document, the Declaration still has an impact in the sense that the above quote has been called an:
"immortal declaration", and "perhaps [the] single phrase" of the American Revolutionary period with the greatest "continuing importance." (Wikipedia)
Let's read that "immortal declaration" again: "all men are created equal". "Men", in that context, is limited to a certain number of people, namely "property-owning or tax-paying white males, or about 6% of the population". Back when this was written, women didn't have the right to vote, and slavery was legal. Jefferson himself owned hundreds of slaves. The declaration was aimed at the King and was a list of grievances. A concern of the colonists was that the King:
has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.
This is a clear mark of the frontier myth which paved the way for the US to exterminate and colonize the territory some now call the United States of America. The declaration of independence is obviously a colonial document, having been written by colonists. None of this is particularly surprising, historically, but I figured it serves as a good reminder of where the Internet is coming from, since it was born in the US.

A Declaration of the Independence of Cyberspace Two hundred and twenty years later, in 1996, John Perry Barlow wrote a declaration of independence of cyberspace. At this point, (almost) everyone has a right to vote (including women), and slavery has been abolished (although some argue it still exists in the form of the prison system); the US has made tremendous progress. Surely this text will have aged better than the previous declaration it is obviously derived from. Let's see how it reads today and how it maps to how the Internet is actually built now.

Borders of Independence One of the key ideas that Barlow brings up is that "cyberspace does not lie within your borders". In that sense, cyberspace is the final frontier: having failed to colonize the moon, Americans turn inwards, deeper into technology, but still within the frontier ideology. And indeed, Barlow is one of the co-founders of the Electronic Frontier Foundation (the beloved EFF), founded six years prior. But there are other problems with this idea. As Wikipedia notes:
The declaration has been criticized for internal inconsistencies.[9] The declaration's assertion that 'cyberspace' is a place removed from the physical world has also been challenged by people who point to the fact that the Internet is always linked to its underlying geography.[10]
And indeed, the Internet is definitely a physical object. First controlled and severely restricted by "telcos" like AT&T, it was somewhat "liberated" from that monopoly in 1982 when an anti-trust lawsuit broke up the company, a key historical event that, one could argue, made the Internet possible. (From there on, "backbone" providers could start competing and emerge, and eventually coalesce into new monopolies: Google has a monopoly on search and advertisement, Facebook on communications for a few generations, Amazon on storage and computing, Microsoft on hardware, etc. Even AT&T is now pretty much as consolidated as it was before.) The point is: all those companies have gigantic data centers and intercontinental cables. And those are definitely prioritizing the western world, the heart of the empire. Take for example Google's latest 3,900-mile undersea cable: it does not connect Argentina to South Africa or New Zealand; it connects the US to the UK and Spain. Hardly a revolutionary prospect.

Private Internet But back to the Declaration:
Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.
In Barlow's mind, the "public" is bad, and private is good, natural. Or, in other words, a "public construction project" is unnatural. And indeed, the modern "nature" of development is private: most of the Internet is now privately owned and operated. I must admit that, as an anarchist, I loved that sentence when I read it. I was rooting for "us", the underdogs, the revolutionaries. And, in a way, I still do: I am on the board of Koumbit and work for a non-profit that has pivoted towards censorship and surveillance evasion. Yet I cannot help but think that, as a whole, we have failed to establish that independence and put too much trust in private companies. It is obvious in retrospect, but it was not 30 years ago. Now, the infrastructure of the Internet has zero accountability to traditional political entities supposedly representing the people, or even to its users. The situation is actually worse than when the US was founded (e.g. "6% of the population can vote"), because the owners of the tech giants are only a handful of people who can override any decision. There's only one Amazon CEO, he's called Jeff Bezos, and he has total control. (Update: Bezos actually ceded the CEO role to Andy Jassy, AWS and Amazon music founder, while remaining executive chairman. I would argue that, as the founder and the richest man on earth, he still has strong control over Amazon.)

Social Contract Here's another claim of the Declaration:
We are forming our own Social Contract.
I remember the early days, back when "netiquette" was a word and it did feel like we had some sort of a contract. Not written in standards of course -- or barely (see RFC1855) -- but as a tacit agreement. How wrong we were. One just needs to look at Facebook to see how problematic that idea is on a global network. Facebook is the quintessential "hacker" ideology put in practice. Mark Zuckerberg explicitly refused to be an "arbiter of truth", which implicitly means he will let lies take over its platforms. He also sees Facebook as a place where everyone is equal, something that echoes the Declaration:
We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
(We note, in passing, the omission of gender in that list, also mirroring the infamous "All men are created equal" claim of the US declaration.) As the Wall Street Journal's (WSJ) Facebook Files later showed, both of those "contracts" have serious limitations inside Facebook. There are VIPs who systematically bypass moderation systems, including fascists and rapists. Drug cartels and human traffickers thrive on the platform. Even when Zuckerberg himself tried to tame the platform -- to get people vaccinated or to make it healthier -- he failed: "vaxxer" conspiracies multiplied and Facebook got angrier. This is because the "social contract" behind Facebook and those large companies is a lie: their concern is profit, and that means advertising and "engagement" with the platform, which causes increased anxiety and depression in teens, for example. Facebook's response to this is that they are working really hard on moderation. But the truth is that even that system is severely skewed. The WSJ showed that Facebook has translators for only 50 languages. It is surprisingly hard to count human languages, but estimates of the number of distinct languages range between 2,500 and 7,000. So while 50 languages seems big at first, it's actually a tiny fraction of the human population using Facebook. Taking the first 50 of the Wikipedia list of languages by native speakers, we omit languages like Dutch (52), Greek (74), and Hungarian (78), and those are just a few random picks from Europe. As an example, Facebook has trouble moderating even a major language like Arabic. It censored content from legitimate Arab news sources when they mentioned the word al-Aqsa, because Facebook associates it with the al-Aqsa Martyrs' Brigades, when they were actually talking about the Al-Aqsa Mosque... This bias against Arabs also shows how Facebook reproduces American colonizer politics. The WSJ also pointed out that Facebook spends only 13% of its moderation efforts outside of the US, even though that represents 90% of its users. Facebook spends three times more on moderating for "brand safety", which shows that its priority is not the safety of its users, but of the advertisers.

Military Internet Sergey Brin and Larry Page are the Lewis and Clark of our generation. Just like the latter were sent by Jefferson (the same) to declare sovereignty over the entire US west coast, Google declared sovereignty over all human knowledge, with its mission statement "to organize the world's information and make it universally accessible and useful". (It should be noted that Page somewhat questioned that mission, but only because it was not ambitious enough: Google had "outgrown" it.) The Lewis and Clark expedition, just like Google, had a scientific pretext, because that is what you do to colonize a world, presumably. Yet both men were military and had to receive scientific training before they left. The Corps of Discovery was made up of a few dozen enlisted men and a dozen civilians, including York, an African American slave owned by Clark and sold after the expedition, with his final fate lost in history. And just like Lewis and Clark, Google has a strong military component. For example, Google Earth was not originally built at Google but came from the acquisition of a company called Keyhole, which had ties with the CIA. Those ties were brought inside Google during the acquisition. Google's increasing investment in the military-industrial complex eventually led to workers organizing a revolt, although it is currently unclear to me how much Google is involved in the military apparatus. Other companies, obviously, do not have such reservations, with Microsoft, Amazon, and plenty of others happily bidding on military contracts all the time.

Spreading the Internet I am obviously not the first to identify colonial structures in the Internet. In an article titled The Internet as an Extension of Colonialism, Heather McDonald correctly identifies fundamental problems with the "development" of new "markets" of Internet "consumers", primarily arguing that it creates a digital divide which creates a "lack of agency and individual freedom":
Many African people have gained access to these technologies but not the freedom to develop content such as web pages or social media platforms in their own way. Digital natives have much more power and therefore use this to create their own space with their own norms, shaping their online world according to their own outlook.
But the digital divide is certainly not the worst problem we have to deal with on the Internet today. Going back to the Declaration, we originally believed we were creating an entirely new world:
This governance will arise according to the conditions of our world, not yours. Our world is different.
How I dearly wished that was true. Unfortunately, the Internet is not that different from the offline world. Or, to be more accurate, the values we have embedded in the Internet, particularly of free speech absolutism, sexism, corporatism, and exploitation, are now exploding outside of the Internet, into the "real" world. The Internet was built with free software which, fundamentally, was based on quasi-volunteer labour of an elite force of white men with obviously too much time on their hands (and also: no children). The mythical writing of GCC and Emacs by Richard Stallman is a good example of this, but the entirety of the Internet now seems to be running on random bits and pieces built by hit-and-run programmers working on their copious free time. Whenever any of those fails, it can compromise or bring down entire systems. (Heck, I wrote this article on my day off...) This model of what is fundamentally "cheap labour" is spreading out from the Internet. Delivery workers are being exploited to the bone by apps like Uber -- although it should be noted that workers organise and fight back. Amazon workers are similarly exploited beyond belief, forbidden to take breaks until they pee in bottles, with ambulances nearby to carry out the bodies. During peak of the pandemic, workers were being dangerously exposed to the virus in warehouses. All this while Amazon is basically taking over the entire economy. The Declaration culminates with this prophecy:
We will spread ourselves across the Planet so that no one can arrest our thoughts.
This prediction, which first felt revolutionary, is now chilling.

Colonial Internet The Internet is, if not neo-colonial, plainly colonial. The US colonies had cotton fields and slaves; we have disposable cell phones and Foxconn workers. Canada has its cultural genocide; Facebook has its own genocides in Ethiopia, Myanmar, and mob violence in India. Apple is at least implicitly accepting the Uyghur genocide. And just like the slaves of the colony, those atrocities are what makes the empire run. The Declaration actually ends like this, a quote which I have in my fortune cookies file:
We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.
That is still inspiring to me. But if we want to make "cyberspace" more humane, we need to decolonize it. Work on cyberpeace instead of cyberwar. Establish clear codes of conduct, discuss ethics, and question your own privileges, biases, and culture. For me, the first step in decolonizing my own mind is writing this article. Breaking up tech monopolies might be an important step, but it won't be enough: we have to do a culture shift as well, and that's the hard part.

Appendix: an apology to Barlow I kind of feel bad going through Barlow's declaration like this, point by point. It is somewhat unfair, especially since Barlow passed away a few years ago and cannot mount a response (even humbly assuming that he might read this). But then again, he himself recognized he was a bit too "optimistic" in 2009, saying: "we all get older and smarter":
I'm an optimist. In order to be libertarian, you have to be an optimist. You have to have a benign view of human nature, to believe that human beings left to their own devices are basically good. But I'm not so sure about human institutions, and I think the real point of argument here is whether or not large corporations are human institutions or some other entity we need to be thinking about curtailing. Most libertarians are worried about government but not worried about business. I think we need to be worrying about business in exactly the same way we are worrying about government.
And, in a sense, it was a little naive to expect Barlow to not be a colonist. Barlow is, among many things, a cattle rancher who grew up on a colonial ranch in Wyoming. The ranch was founded in 1907 by his great uncle, 17 years after the state joined the Union, and only a generation or two after the Powder River War (1866-1868) and Black Hills War (1876-1877) during which the US took over lands occupied by Lakota, Cheyenne, Arapaho, and other native American nations, in some of the last major First Nations Wars.

Appendix: further reading There is another article that almost has the same title as this one: Facebook and the New Colonialism. (Interestingly, the <title> tag on the article is actually "Facebook the Colonial Empire" which I also find appropriate.) The article is worth reading in full, but I loved this quote so much that I couldn't resist reproducing it here:
Representations of colonialism have long been present in digital landscapes. ("Even Super Mario Brothers," the video game designer Steven Fox told me last year. "You run through the landscape, stomp on everything, and raise your flag at the end.") But web-based colonialism is not an abstraction. The online forces that shape a new kind of imperialism go beyond Facebook.
It goes on:
Consider, for example, digitization projects that focus primarily on English-language literature. If the web is meant to be humanity's new Library of Alexandria, a living repository for all of humanity's knowledge, this is a problem. So is the fact that the vast majority of Wikipedia pages are about a relatively tiny square of the planet. For instance, 14 percent of the world's population lives in Africa, but less than 3 percent of the world's geotagged Wikipedia articles originate there, according to a 2014 Oxford Internet Institute report.
And they introduce another definition of Neo-colonialism, while warning about abusing the word like I am sort of doing here:
"I'm loath to toss around words like colonialism but it's hard to ignore the family resemblances and recognizable DNA, to wit," said Deepika Bahri, an English professor at Emory University who focuses on postcolonial studies. In an email, Bahri summed up those similarities in list form:
  1. ride in like the savior
  2. bandy about words like equality, democracy, basic rights
  3. mask the long-term profit motive (see 2 above)
  4. justify the logic of partial dissemination as better than nothing
  5. partner with local elites and vested interests
  6. accuse the critics of ingratitude
In the end, she told me, "if it isn't a duck, it shouldn't quack like a duck."
Another good read is the classic Code and other laws of cyberspace (1999, free PDF) which is also critical of Barlow's Declaration. In "Code is law", Lawrence Lessig argues that:
computer code (or "West Coast Code", referring to Silicon Valley) regulates conduct in much the same way that legal code (or "East Coast Code", referring to Washington, D.C.) does (Wikipedia)
And now it feels like the west coast has won over the east coast, or maybe it recolonized it. In any case, Internet now christens emperors.

5 September 2021

Reproducible Builds: Reproducible Builds in August 2021

Welcome to the latest report from the Reproducible Builds project. In this post, we round up the important things that happened in the world of reproducible builds in August 2021. As always, if you are interested in contributing to the project, please visit the Contribute page on our website.
There were a large number of talks related to reproducible builds at DebConf21 this year, the 21st annual conference of the Debian Linux distribution (full schedule):
PackagingCon (@PackagingCon) is a new conference for developers of package management software, as well as their related communities and stakeholders. The virtual event, which is scheduled to take place on the 9th and 10th November 2021, has a mission "to bring different ecosystems together: from Python's pip to Rust's cargo to Julia's Pkg, from Debian apt over Nix to conda and mamba, and from vcpkg to Spack we hope to have many different approaches to package management at the conference". A number of people from the reproducible builds community are planning on attending this new conference, and some may even present. Tickets start at $20 USD.
As reported in our May report, the president of the United States signed an executive order outlining policies aimed at improving cybersecurity in the US. The executive order came after a number of highly-publicised security problems, such as a ransomware attack that affected an oil pipeline between Texas and New York and the SolarWinds hack that affected a large number of US federal agencies. As a followup this month, a detailed fact sheet was released announcing a number of large-scale initiatives that will undoubtedly be related to software supply-chain security and, as a result, reproducible builds.
Lastly, we ran another productive meeting on IRC in August (original announcement), which ran for just short of two hours. A full set of notes from the meeting is available.

Software development kpcyrd announced an interesting new project this month called "I probably didn't backdoor this", which is an attempt to be:
a practical attempt at shipping a program and having reasonably solid evidence there's probably no backdoor. All source code is annotated and there are instructions explaining how to use reproducible builds to rebuild the artifacts distributed in this repository from source. The idea is shifting the burden of proof from "you need to prove there's a backdoor" to "we need to prove there's probably no backdoor". This repository is less about code (we're going to try to keep code at a minimum actually) and instead contains technical writing that explains why these controls are effective and how to verify them. You are very welcome to adopt the techniques used here in your projects. ( )
As the project's README goes on to mention: "the techniques used to rebuild the binary artifacts are only possible because the builds for this project are reproducible". This was also announced on our mailing list this month in a thread titled i-probably-didnt-backdoor-this: Reproducible Builds for upstreams. kpcyrd also wrote a detailed blog post about the problems surrounding Linux distributions (such as Alpine and Arch Linux) that distribute compiled Python bytecode in the form of .pyc files generated during the build process.
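
The core verification step that the README describes can be reduced to a short sketch (an illustration under assumed paths and a placeholder build command, not the project's actual tooling): rebuild the artifact from source and compare it against what was distributed.

import hashlib
import subprocess

# Rebuild the artifact using the documented, reproducible build steps.
subprocess.run(["make", "build"], check=True)  # placeholder build command

with open("dist/program.bin", "rb") as f:
    rebuilt = hashlib.sha256(f.read()).hexdigest()
with open("dist/program.bin.sha256") as f:
    published = f.read().split()[0]

print("verified" if rebuilt == published else "MISMATCH: do not trust this artifact")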

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made a number of changes, including releasing versions 180, 181 and 182, as well as the following changes:
  • New features:
    • Add support for extracting the signing block from Android APKs. [ ]
    • If we specify a suffix for a temporary file or directory within the code, ensure it starts with an underscore (i.e. "_") to make the generated filenames more human-readable. [ ]
    • Don't include short GCC lines that differ on a single prefix byte either. These are distracting, not very useful and are simply the strings(1) command's idea of the build ID, which is displayed elsewhere in the diff. [ ][ ]
    • Don't include specific .debug-like lines in the ELF-related output, as it is invariably a duplicate of the debug ID that exists better in the readelf(1) differences for this file. [ ]
  • Bug fixes:
    • Add a special case to SquashFS image extraction to not fail if we aren't the superuser. [ ]
    • Only use java -jar /path/to/apksigner.jar if we have an apksigner.jar as newer versions of apksigner in Debian use a shell wrapper script which will be rejected if passed directly to the JVM. [ ]
    • Reduce the maximum line length for calculating Wagner-Fischer, improving the speed of output generation a lot (see the sketch after this list). [ ]
    • Don't require apksigner in order to compare .apk files using apktool. [ ]
    • Update calls (and tests) for the new version of odt2txt. [ ]
  • Output improvements:
    • Mention in the output if the apksigner tool is missing. [ ]
    • Profile diffoscope.diff.linediff and specialize. [ ][ ]
  • Logging improvements:
    • Format debug-level messages related to ELF sections using the diffoscope.utils.format_class. [ ]
    • Print the size of generated reports in the logs (if possible). [ ]
    • Include profiling information in --debug output if --profile is not set. [ ]
  • Codebase improvements:
    • Clarify a comment about the HUGE_TOOLS Python dictionary. [ ]
    • We can pass -f to apktool to avoid creating a strangely-named subdirectory. [ ]
    • Drop an unused File import. [ ]
    • Update the supported & minimum version of Black. [ ]
    • We don't use the logging variable in a specific place, so alias it to an underscore (i.e. "_") instead. [ ]
    • Update various copyright years. [ ]
    • Clarify a comment. [ ]
  • Test improvements:
    • Update a test to check specific contents of SquashFS listings, otherwise it fails depending on the test system's user-ID-to-username passwd(5) mapping. [ ]
    • Assign seen and expected values to local variables to improve contextual information in failed tests. [ ]
    • Don't print an orphan newline when the source code formatting test passes. [ ]
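
For background on the Wagner-Fischer change mentioned in the list above: the algorithm computes edit distance by dynamic programming in O(len(a) * len(b)) time and memory, which is why capping the maximum line length bounds the work done per pair of lines. A minimal sketch (not diffoscope's actual implementation):

def edit_distance(a, b):
    # Classic Wagner-Fischer, keeping only one previous DP row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # prints 3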

In addition, Santiago Torres Arias added support for Squashfs version 4.5 [ ] and Felix C. Stegerman suggested a number of small improvements to the output of the new APK signing block [ ]. Lastly, Chris Lamb uploaded python-libarchive-c version 3.1-1 to Debian experimental for the new 3.x branch; python-libarchive-c is used by diffoscope.

Distribution work In Debian, 68 reviews of packages were added, 33 were updated and 10 were removed this month, adding to our knowledge about identified issues. Two new issue types have been identified too: nondeterministic_ordering_in_todo_items_collected_by_doxygen and kodi_package_captures_build_path_in_source_filename_hash. kpcyrd published another monthly report on their work on reproducible builds within the Alpine and Arch Linux distributions, specifically mentioning rebuilderd, one of the components powering reproducible.archlinux.org. The report also touches on binary transparency, an important component for supply chain security. The @GuixHPC account on Twitter posted an infographic on what fraction of GNU Guix packages are bit-for-bit reproducible. Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report for openSUSE.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: Elsewhere, it was discovered that when supporting various new language features and APIs for Android apps, the resulting APK files that are generated now vary wildly from build to build (example diffoscope output). Happily, it appears that a patch has been committed to the relevant source tree. This was also discussed on our mailing list this month in a thread titled Android desugaring and reproducible builds started by Marcus Hoffmann.

Website and documentation There were quite a few changes to the Reproducible Builds website and documentation this month, including:
  • Felix C. Stegerman:
    • Update the website self-build process to not use the buster-backports suite now that Debian Bullseye is the stable release. [ ]
  • Holger Levsen:
    • Add a new page documenting various package rebuilder solutions. [ ]
    • Add some historical talks and slides from DebConf20. [ ][ ]
    • Various improvements to the history page. [ ][ ][ ]
    • Rename the "Comparison protocol" documentation category to "Verification". [ ]
    • Update links to F-Droid documentation. [ ]
  • Ian Muchina:
    • Increase the font size of titles and de-emphasize event details on the talk page. [ ]
    • Rename the README file to README.md to improve the user experience when browsing the Git repository in a web browser. [ ]
  • Mattia Rizzolo:
    • Drop a position:fixed CSS statement that was interacting badly with some width settings. [ ]
    • Fix the sizing of the elements inside the side navigation bar. [ ]
    • Show gold level sponsors and above in the sidebar. [ ]
    • Update the documentation within reprotest to mention how ldconfig conflicts with the kernel variation. [ ]
  • Roland Clobus:
    • Add a ticket number for the issue with the live Cinnamon image and diffoscope. [ ]

Testing framework The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Debian-related changes:
      • Make a large number of changes to support the new Debian bookworm release, including adding it to the dashboard [ ], starting to schedule tests [ ], adding suitable Apache redirects [ ], etc. [ ][ ][ ][ ][ ]
      • Make the first build use LANG=C.UTF-8 to match the official Debian build servers. [ ]
      • Only test Debian Live images once a week. [ ]
      • Upgrade all nodes to use Debian Bullseye. [ ][ ]
      • Update README documentation for the Debian Bullseye release. [ ]
    • Other changes:
      • Only include rsync output if the $DEBUG variable is enabled. [ ]
      • Don't try to install mock, a tool that was used to build Fedora packages some time ago. [ ]
      • Drop an unused function. [ ]
      • Various documentation improvements. [ ][ ]
      • Improve the node health check to detect zombie jobs. [ ]
  • Jessica Clarke (FreeBSD-related changes):
    • Update the location and branch name for the main FreeBSD Git repository. [ ]
    • Correctly ignore the source tarball when comparing build results. [ ]
    • Drop an outdated version number from the documentation. [ ]
  • Mattia Rizzolo:
    • Block F-Droid jobs from running whilst the setup is running. [ ]
    • Enable debugging for the rsync job related to Debian Live images. [ ]
    • Pass the BUILD_TAG and BUILD_URL environment variables to the Debian Live jobs. [ ]
    • Refactor the master_wrapper script to use a Bash array for the parameters. [ ]
    • Prefer YAML's safe_load() function over the unsafe variant. [ ]
    • Use the correct variable in the Apache config to match possible existing files on disk. [ ]
    • Stop issuing HTTP 301 redirects for things that are not actually permanent. [ ]
  • Roland Clobus (Debian live image generation):
    • Increase the diffoscope timeout from 120 to 240 minutes; the Cinnamon image should now be able to finish. [ ]
    • Use the new snapshot service. [ ]
    • Make a number of improvements to artifact handling, such as moving the artifacts to the Jenkins host [ ] and correctly cleaning them up at the right time. [ ][ ][ ]
    • Where possible, link to the Jenkins build URL that created the artifacts. [ ][ ]
    • Allow only one job to run at the same time. [ ]
  • Vagrant Cascadian:
    • Temporarily disable armhf nodes for DebConf21. [ ][ ]

Lastly, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website, where you can also find out how to get in touch with us.

4 June 2021

Matthew Garrett: Mike Lindell's Cyber "Evidence"

Mike Lindell, notable for absolutely nothing relevant in this field, today filed a lawsuit against a couple of voting machine manufacturers in response to them suing him for defamation after he claimed that they were covering up hacks that had altered the course of the US election. Paragraph 104 of his suit asserts that he has evidence of at least 20 documented hacks, including the number of votes that were changed. The citation is just a link to a video called Absolute 9-0, which claims to present sufficient evidence that the US supreme court will come to a 9-0 decision that the election was tampered with.

The claim is that Lindell was provided with a set of files on the 9th of January, and gave these to some cyber experts to verify. These experts identified them as packet captures. The video contains scrolling hex, and we are told that this is the raw encrypted data from the files. In reality, the hex values correspond very clearly to printable ASCII, and appear to just be the Pennsylvania voter roll. They're not encrypted, and they're not packet captures (they contain no packet headers).
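(As an aside, none of this needs specialist tooling to check. Here is a minimal sketch, assuming one of the files were available locally under the hypothetical name capture.bin; standard Unix tools will tell you immediately whether it is a packet capture or plain text.)
file capture.bin            # a genuine pcap file is identified by its magic
                            # number and reported as a capture file
xxd capture.bin | head      # hex dump; printable ASCII shows up directly
                            # in the right-hand column
strings capture.bin | head  # extract printable text; plaintext voter-roll
                            # data would be plainly readable here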

20 of these packet captures were then selected and analysed, giving us the tables contained within Exhibit 12. The alleged source IPs appear to correspond to the networks the tables claim, and the latitude and longitude presumably just come from a geoip lookup of some sort (although clearly those values are far too precise to be accurate). But if we look at the target IPs, we find something interesting. Most of them resolve to the website for the county that was the nominal target (eg, 198.108.253.104 is www.deltacountymi.org). So, we're supposed to believe that in many cases, the county voting infrastructure was hosted on the county website.

Unfortunately we're not given the destination port, but 198.108.253.104 isn't listening on anything other than 80 and 443. We're told that the packet data is encrypted, so presumably it's over HTTPS. So, uh, how did they decrypt this to figure out how many votes were switched? If Mike's hackers have broken TLS, they really don't need to be dealing with this.

We're also given some background information on how it's impossible to reconstruct packet captures after the fact (untrue), or that modifying them would change their hashes (true, but in the absence of known good hash values that tells us nothing), but it's pretty clear that nothing we're shown actually demonstrates what we're told it does.

In summary: yes, any supreme court decision on this would be 9-0, just not the way he's hoping for.

Update: It was pointed out that this data appears to be part of a larger dataset. This one is even more dubious - it somehow has MAC addresses for both the source and destination (which is impossible), and almost none of these addresses are in actual issued ranges.


11 April 2021

Jonathan Dowland: 2020 in short fiction

Cover for *Episodes*
Following on from 2020 in Fiction: In 2020 I read a couple of collections of short fiction from some of my favourite authors. I started the year with Christopher Priest's Episodes. The stories within are collected from throughout his long career, and vary in style and tone. Priest wrote new little prologues and epilogues for each of the stories, explaining the context in which they were written. I really enjoyed this additional view into their construction.
Cover for *Adam Robots*
By contrast, Adam Roberts' Adam Robots presents the stories on their own terms. Each of the stories is written in a different mode: one as golden-age SF, another as a kind of cyberpunk, for example, although they all blend or confound sub-genres to some degree. I'm not clever enough to have decoded all their secrets on a first read, and I would have appreciated some "Cliff's Notes" on any deeper meaning or intent.
Cover for *Exhalation*
Ted Chiang's Exhalation was up to the fantastic standard of his earlier collection and had some extremely thoughtful explorations of philosophical ideas. All the stories are strong but one stuck in my mind the longest: Omphalos. With my daughter I finished three of Terry Pratchett's short story collections aimed at children: Dragon at Crumbling Castle; The Witch's Vacuum Cleaner and The Time-Travelling Caveman. If you are a Pratchett fan and you've overlooked these because they're aimed at children, take another look. The quality varies, but there are some true gems in these. Several stories take place in common settings, either the town of Blackbury, in Gritshire (and the adjacent Even Moor), or the Welsh border-town of Llandanffwnfafegettupagogo. The sad thing was knowing that once I'd finished them (and the fourth, Father Christmas's Fake Beard) that was it: there will be no more.
Cover for Interzone, issue 277
8/31 of the "books" I read in 2020 were issues of Interzone. Counting them as "books" for my annual reading goal has encouraged me to read full issues, whereas before I would likely have only read a couple of stories from each issue. Reading full issues has rekindled the enjoyment I got out of it when I first discovered the magazine at the turn of the century. I am starting to recognise stories by authors that have written stories in other issues, as well as common themes from the current era weaving their way into the work (Trump, Brexit, etc.). No doubt the pandemic will leave its mark on 2021's stories.

9 January 2021

Thorsten Alteholz: My Debian Activities in December 2020

FTP master This month I only accepted 8 packages and, like last month, rejected 0. Despite the holidays, 293 packages got accepted. Debian LTS This was my seventy-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 26h. During that time I did LTS uploads of: Unfortunately package slirp has the same version in Stretch and Buster, so I first had to upload slirp/1:1.0.17-11 to unstable in order to be allowed to fix the CVE in Buster, and to finally upload a new version to Stretch. Meanwhile the fix for Buster has been approved by the Release Team and I am now waiting for the next point release. I also prepared a debdiff for influxdb, which will result in DSA-4823-1 in January. As new CVEs appeared for openjpeg2, I did not do an upload yet; this is now planned for January. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the thirtieth ELTS month. During my allocated time I uploaded: As with LTS, I did not finish work on all CVEs of openjpeg2, so the upload is postponed to January. Last but not least I did some days of frontdesk duties. Unfortunately I also had to give back some hours. Other stuff This month I uploaded new upstream versions of: I fixed one or two bugs in: I improved packaging of: Some packages just needed a source upload: and there have been even some new packages: With these uploads I finished the libosmocom and libctl transitions. The Debian Med Advent Calendar was again really successful this year. There was no new record, but with 109 closed bugs it reached the second-highest number ever.
year number of bugs closed
2011 63
2012 28
2013 73
2014 5
2015 150
2016 95
2017 105
2018 81
2019 104
2020 109
Well done everybody who participated. It is really nice to see that Andreas is no longer a lone wolf.

5 December 2020

Thorsten Alteholz: My Debian Activities in November 2020

FTP master Unfortunately a day only has 24h. As the freeze is approaching, I had to concentrate a bit more on keeping my packages in shape. So this month I only accepted nine packages. The good news: I rejected no packages. The overall number of packages that got accepted was 328. Debian LTS This was my seventy-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 22.75h. During that time I did LTS uploads of: I also started to work on x11vnc and slirp. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the twenty-ninth ELTS month. During my allocated time I uploaded: Unfortunately I also had to give back some hours. Last but not least I did some days of frontdesk duties. Other stuff This month I uploaded new upstream versions of: I fixed one or two bugs in: I improved packaging of: and there have been even some new packages: As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med-related bug will be closed. Don't hesitate, start to squash :-). The announcement on the mailing list can be found here.

21 October 2020

Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do. This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.
Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical? A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source base layer of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society's lifelines, and CIP aims to contribute to and support these important pillars of modern society.
Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today? A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this by refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.
Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals? A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.
Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific success stories that the CIP is particularly proud of? A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of the necessity to establish an open source framework, and the resulting software foundation delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems. In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.
Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency? A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts or efforts that focus on narrower domains or markets.
Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims? A: Collaboration with other projects is an essential part of how CIP operates: we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an "upstream first" policy, which means that if existing projects need to be modified to meet our needs, or are already working on issues that we also need, we work directly with them.
Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context? A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.
Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look? A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

12 October 2020

Markus Koschany: My Free Software Activities in September 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in October) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
warzone2100
Debian Java
pdfsam
Misc Debian LTS This was my 55th month as a paid contributor and I have been paid to work 31.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 28th month and I have been paid to work 15 hours on ELTS. Thanks for reading and see you next time.

12 September 2020

Markus Koschany: My Free Software Activities in August 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in September) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
teeworlds
Debian Java Misc Debian LTS This was my 54th month as a paid contributor and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 27th month and I have been paid to work 14.25 hours on ELTS. Thanks for reading and see you next time.

31 August 2020

Jacob Adams: Command Line 101

How to Work in a Text-Only Environment.

What is this thing? When you first open a command-line (note that I use the terms command-line and shell interchangeably here; they're basically the same, but command-line is the more general term, and shell is the name for the program that executes commands for you) you'll see something like this:
jaadams@bg7/tmp/thisfolder$
This line is called a command prompt and it tells you four pieces of information:
  1. jaadams: The username of the user that is currently running this shell.
  2. bg7: The name of the computer that this shell is running on, important for when you start accessing shells on remote machines.
  3. /tmp/thisfolder: The folder or directory that your shell is currently running in. Like a file explorer (such as Windows Explorer or Mac's Finder), a shell always has a working directory, from which all relative paths (see sidenote below) are resolved.
  4. $: The prompt character, which marks the end of the prompt. By convention, $ indicates a normal user, while a root shell usually ends with # instead.
When you first opened a shell, however, you might notice that it looks more like this:
jaadams@bg7~$
This is a shorthand notation that the shell uses to make this output shorter when possible. ~ stands for your home directory, usually /home/<username>. Like C:\Users\<username>\ on Windows or /Users/<username> on Mac, this directory is where all your files should go by default. Thus a command prompt like this:
jaadams@bg7~/Downloads$
actually tells you that you are currently in the /home/jaadams/Downloads directory.

Sidenote: The Unix Filesystem and Relative Paths Folders on Linux and other Unix-derived systems like MacOS are usually called directories. These directories are represented by paths, strings that indicate where the directory is on the filesystem. The one unusual part is the so-called "root directory". All files are stored in this folder or in directories under it. Its path is just / and there are no directories above it. For example, the directory called home typically contains all user directories. This is stored in the root directory, and each user's specific data is stored in a directory named after that user under home. Thus, the home directory of the user jacob is typically /home/jacob: the directory jacob under the home directory, which is stored in the root directory /. If you're interested in more details about what goes in which directory, man hier has the basics and the Filesystem Hierarchy Standard governs the layout of the filesystem on most Linux distributions. You don't always have to use the full path, however. If the path does not begin with a /, it is assumed that the path actually begins with the path of the current directory. So if you use a path like my/folders/here, and you're in the /home/jacob directory, the path will be treated like /home/jacob/my/folders/here. Each folder also contains the symbolic links .. and . (symbolic links are a very powerful kind of file that is actually a reference to another file). .. always represents the parent directory of the current directory, so /home/jacob/.. links to /home. . always links to the current directory, so /home/jacob/. links to /home/jacob.
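To make this concrete, here is a tiny example session; it's only a sketch, and it assumes the directory /home/jacob/my/folders/here actually exists:
cd /home/jacob          # absolute path: it starts with /
cd my/folders/here      # relative path: resolved against /home/jacob
pwd                     # prints /home/jacob/my/folders/here
cd ..                   # .. is the parent directory
pwd                     # prints /home/jacob/my/folders
cd .                    # . is the current directory, so this changes nothing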

Running commands To run a command from the command prompt, you type its name and then usually some arguments to tell it what to do. For example, the echo command displays the text passed as arguments.
jacob@lovelace/home/jacob$ echo hello world
hello world
Arguments to commands are space-separated, so in the previous example hello is the first argument and world is the second. If you need an argument to contain spaces, you'll want to put quotes around it: echo "like so". Certain arguments are called "flags" or "options" (options if they take another argument, flags otherwise), usually prefixed with a hyphen, and they change the way a program operates. For example, the ls command outputs the contents of a directory passed as an argument, but if you add -l before the directory, it will give you more details on the files in that directory.
jacob@lovelace/tmp/test$ ls /tmp/test
1  2  3  4  5  6
jacob@lovelace/tmp/test$ ls -l /tmp/test
total 0
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 1
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 2
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 3
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 4
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 5
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 6
jacob@lovelace/tmp/test$
Most commands take different flags to change their behavior in various ways.

File Management
  • cd <path>: Change the current directory of the running shell to <path>.
  • ls <path>: Output the contents of <path>. If no path is passed, it prints the contents of the current directory.
  • touch <filename>: create a new empty file called <filename>. Used on an existing file, it updates the file's last-accessed and last-modified times. Most text editors can also create a new file for you, which is probably more useful.
  • mkdir <directory>: Create a new folder/directory at path <directory>.
  • mv <src> <dest>: Move a file or directory at path <src> to <dest>.
  • cp <src> <dest>: Copy a file or directory at path <src> to <dest>.
  • rm <file>: Remove a file at path <file>.
  • zip -r <zipfile> <contents...>: Create a zip file <zipfile> with contents <contents>. <contents> can be multiple arguments, and you'll usually want to use the -r argument when including directories in your zipfile, as otherwise only the directory will be included and not the files and directories within it.
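Tying these together, here is a short worked session. It's only a sketch; the names (a scratch directory /tmp/demo, a project directory, a notes.txt file) are made up for illustration:
cd /tmp/demo                    # start in an empty scratch directory
mkdir project                   # create a directory called project
touch project/notes.txt         # create an empty file inside it
cp project/notes.txt backup.txt # copy the file into the current directory
mv backup.txt project/old.txt   # move (rename) the copy into project
rm project/old.txt              # remove the copy again
zip -r project.zip project      # archive the directory and its contents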

Searching
  • grep <thing> <file>: Look for the string <thing> in <file>. If no <file> is passed it searches standard input.
  • find <path> -name <name>: Find a file or directory called <name> somewhere under <path>. This command is actually very powerful, but also very complex. For example you can delete all files in a directory older than 30 days with:
    find -mtime +30 -exec rm {} \;
    
  • locate <name>: A much easier to use command to find a file with a given name, but it is not usually installed by default.
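As a rough illustration of how these combine (the ~/docs directory and plan.txt file here are hypothetical):
grep TODO ~/docs/plan.txt    # print every line of plan.txt containing TODO
find ~/docs -name '*.txt'    # list all .txt files anywhere under ~/docs
find ~/docs -name '*.txt' -mtime -7 -exec grep -l TODO {} \;
                             # of the .txt files changed in the last 7 days,
                             # print the names of those containing TODO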

Outputting Files
  • cat <files...>: Output (concatenate) all the files passed as arguments.
  • head <file>: Output the beginning of <file>.
  • tail <file>: Output the end of <file>.

How to Find the Right Command All commands (at least on sane Linux distributions like Debian or Ubuntu) are documented with a manual page, in man section 1 (for more information on manual sections, run man intro). This can be accessed using man <command>. You can search for the right command using the -k flag, as in man -k <search>; a short example of this follows the list below. You can also view manual pages in your browser, on sites like https://manpages.debian.org or https://linux.die.net/man. This is not always helpful, however, because some commands' descriptions are not particularly useful, and also there are a lot of manual pages, which can make searching for a specific one difficult. For example, finding the right command to search inside text files is quite difficult via man (it's grep). When you can't find what you need with man, I recommend falling back to searching the Internet. There are lots of bad Linux tutorials out there, but here are some reputable sources I recommend:
  • https://www.cyberciti.biz: nixCraft has excellent tutorials on all things Linux
  • Hosting providers like Digital Ocean or Linode: Good intro documentation, but can sometimes be outdated
  • https://tldp.org: The Linux Documentation project is great, but it can also be a little outdated sometimes.
  • https://stackoverflow.com: Oftentimes has great answers, but quality varies wildly since anyone can answer.
These are certainly not the only options but they re the sources I would recommend when available.
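Returning to man -k: to track down the text-searching command mentioned above, a session might look something like this (the search terms are just guesses one might plausibly try):
man -k search    # list every page whose short description mentions "search"
man -k pattern   # grep's one-line description mentions matching patterns
man 1 grep       # read the full page once you've found it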

How to Read a Manual Page Manual pages consist of a series of sections, each with a specific purpose. Instead of attempting to write my own description here, I'm going to borrow the excellent one from The Linux Documentation Project:
The NAME section is the only required section. Man pages without a name section are as useful as refrigerators at the north pole. This section also has a standardized format consisting of a comma-separated list of program or function names, followed by a dash, followed by a short (usually one line) description of the functionality the program (or function, or file) is supposed to provide. By means of makewhatis(8), the name sections make it into the whatis database files. Makewhatis is the reason the name section must exist, and why it must adhere to the format I described. (Formatting explanation cut for brevity) The SYNOPSIS section is intended to give a short overview on available program options. For functions this section lists corresponding include files and the prototype so the programmer knows the type and number of arguments as well as the return type. The DESCRIPTION section eloquently explains why your sequence of 0s and 1s is worth anything at all. Here's where you write down all your knowledge. This is the Hall Of Fame. Win other programmers' and users' admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs. The OPTIONS section gives a description of how each option affects program behaviour. You knew that, didn't you? The FILES section lists files the program or function uses. For example, it lists configuration files, startup files, and files the program directly operates on. (Cut details about installing files) The ENVIRONMENT section lists all environment variables that affect your program or function and tells how, of course. Most commonly the variables will hold pathnames, filenames or default options. The DIAGNOSTICS section should give an overview of the most common error messages from your program and how to cope with them. There's no need to explain system error messages (from perror(3)) or fatal signals (from psignal(3)) as they can appear during execution of any program. The BUGS section should ideally be non-existent. If you're brave, you can describe here the limitations, known inconveniences and features that others may regard as misfeatures. If you're not so brave, rename it the TO DO section ;-) The AUTHOR section is nice to have in case there are gross errors in the documentation or program behaviour (Bzzt!) and you want to mail a bug report. The SEE ALSO section is a list of related man pages in alphabetical order. Conventionally, it is the last section.

Remote Access One of the more powerful uses of the shell is through ssh, the secure shell. This allows you to remotely connect to another computer and run a shell on that machine:
user@host:~$ ssh other@example.com
other@example:~$
The prompt changes to reflect the change in user and host, as you can see in the example above. This allows you to work in a shell on that machine as if it was right in front of you.

Moving Files Between Machines There are several ways you can move files between machines over ssh. The first and easiest is scp, which works much like the cp command except that paths can also take a user@host argument to move files across computers. For example, if you wanted to move a file test.txt to your home directory on another machine, the command would look like:
scp test.txt other@example.com:
(the home directory is the default path). You can also copy files from the remote host by reversing the order of the arguments, and you can put a path after the colon to refer to another directory on the remote host. For example, if you wanted to fetch the file /etc/issue.net from example.com:
scp other@example.com:/etc/issue.net .
Another option is the sftp command, which gives you a very simple shell-like interface in which you can cd and ls, before either putting files onto the remote machine (put) or getting files off of it (get). The final and most powerful option is rsync, which syncs the contents of one directory to another, and doesn't copy files that haven't changed. It's powerful and complex, however, so I recommend reading the USAGE section of its man page.
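For completeness, a hedged sketch of basic rsync usage (mydir and example.com are placeholders):
rsync -av mydir other@example.com:   # copy mydir into the remote home directory;
                                     # -a preserves times/permissions, -v is verbose
rsync -av other@example.com:mydir .  # pull the remote copy back down; files that
                                     # haven't changed are skipped automatically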

Long-Running Commands The one problem with ssh is that it will stop any command running in your shell when you disconnect. If you want to leave something on and come back later then this can be a problem. This is where terminal multiplexers come in. tmux and screen both allow you to run a shell in a safe environment where it will continue even if you disconnect from it. You do this by running the command without any arguments, i.e. just tmux or just screen. In tmux you can disconnect from the current session by pressing Ctrl+b then d, and reattach with the tmux attach command. screen works similarly, but with Ctrl+a instead of b and screen -r to reattach.
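A minimal tmux round-trip might look like this (the session name work is arbitrary):
ssh other@example.com   # connect to the remote machine
tmux new -s work        # start a new named session and run your command in it
                        # ...press Ctrl+b then d to detach, log out, come back...
ssh other@example.com   # reconnect later
tmux attach -t work     # reattach to the session, exactly where you left it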

Command Inputs and Outputs Arguments are not the only way to pass input to a command. They can also take input from what's called "standard input", which the shell usually connects to your keyboard. Output can go to two places: standard output and standard error, both of which are directed to the screen by default.

Redirecting I/O Note how I said above that standard input/output/error are only usually connected to the keyboard and the terminal? This is because you can redirect them to other places with the shell operators <, > and the very powerful |.

File redirects The operators < and > redirect the input and output of a command from and to a file, respectively. For example, if you wanted a file called list.txt that contained a list of all the files in a directory /this/one/here you could use:
ls /this/one/here > list.txt
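The < operator works the same way in the opposite direction. Continuing the example above (using the same list.txt):
wc -l < list.txt              # feed list.txt to wc on standard input;
                              # prints the number of lines, i.e. files listed
sort < list.txt > sorted.txt  # read from one file and write the sorted
                              # listing to another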

Pipelines The pipe character, |, allows you to direct the output of one command into the input of another. This can be very powerful. For example, the following pipeline lists the contents of the current directory, searches for the string "test", then counts the number of results (wc -l counts the number of lines in its input):
ls | grep test | wc -l
For a better, but even more contrived example, say you have a file myfile with a bunch of lines of potentially duplicated and unsorted data:
test
test
1234
4567
1234
You can sort it and output only the unique lines with sort and uniq (uniq only removes adjacent duplicate lines, which is why the file is sorted first):
$ sort < myfile | uniq
1234
4567
test

Save Yourself Some Typing: Globs and Tab-Completion Sometimes you don't want to type out the whole filename when writing out a command. The shell can help you here by autocompleting when you press the tab key. If you have a whole bunch of files with the same suffix, you can refer to them when writing arguments as *.suffix. This also works with prefixes, prefix*, and in fact you can put a * anywhere, *middle*. The shell will expand that * into all the files in that directory that match your criteria (ending with a specific suffix, starting with a specific prefix, and so on) and pass each file as a separate argument to the command. For example, if I have a series of files called 1.txt, 2.txt, and so on up to 9.txt, each containing just the number for which it's named, I could use cat to output all of them like so:
jacob@lovelace/tmp/numbers$ ls
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt	9.txt
jacob@lovelace/tmp/numbers$ cat *.txt
1
2
3
4
5
6
7
8
9
Also, the ~ shorthand mentioned above that refers to your home directory can be used when passing a path as an argument to a command.

Ifs and For loops The files in the above example were generated with the following shell commands:
for i in 1 2 3 4 5 6 7 8 9
do
echo $i > $i.txt
done
But I'll have to save variables, conditionals and loops for another day, because this is already too long. Needless to say, the shell is a full programming language, although a very ugly and dangerous one.

14 August 2020

Markus Koschany: My Free Software Activities in July 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in August) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java Misc Debian LTS This was my 53rd month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 26th month and I have been paid to work 13.25 hours on ELTS. Thanks for reading and see you next time.

14 July 2020

Markus Koschany: My Free Software Activities in June 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in July) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games Short news
Debian Java Misc Debian LTS This was my 52nd month as a paid contributor and I have been paid to work 60 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Thanks for reading and see you next time.

11 June 2020

Markus Koschany: My Free Software Activities in May 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in June) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java Misc Debian LTS This was my 51st month as a paid contributor and I have been paid to work 25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my 24th month and I have been paid to work 9.25 hours on ELTS. Thanks for reading and see you next time.

22 April 2020

Jonathan Dowland: SUPERHOT

Continuing a series of blog posts about casual Nintendo Switch games, next in the series is SUPERHOT. Normally £19.99, I picked it up for £13.99 in a sale. That's a little bit more than I would usually pay for a casual game. SUPERHOT first came on my radar because someone I know from a baby group worked on their VR port in some capacity.
Slow-motion buckshot
A first-person shooter, SUPERHOT's USP is that time only progresses when you move. Well, nearly: time is slowed to a complete crawl when you are not moving. The game's visual style is very distinctive: almost everything is a washed-out grey or white colour with a porcelain-like texture, except weapons and objects you can interact with, which are a matt black, and enemies, which are a bright red. It reminds me a lot of the 1992 Amiga game Robocop 3.
Robocop 3
The play-style is very reminiscent of the "Bullet Time" sequences in The Matrix: seemingly impossibly overwhelming odds, deftly manoeuvred through thanks to superhuman reaction times. The game has a relatively short campaign of little vignettes, linked together by a cyberpunk narrative. The game is sometimes criticised for the short campaign, but for me that's ideal. And the vignettes being short and quite standalone suits my play requirements very well.
Amiga easter-egg
The narrative interspersed between the play scenarios is a little bit over-long, and you can spend an unreasonable amount of time bashing buttons to get through it. Despite that, it's a moderately interesting story. Once you've beaten the campaign, you can go back and play any of the scenarios again, or try the newly unlocked endless mode. I haven't tried that yet. The original prototype for the game is a free-to-play in-browser demo, available here. On Windows PC, there's a sequel-of-sorts in the works called MIND CONTROL DELETE, with a lot of new features to add replay value.
