resolv.conf
when the primary network went down, but then I would have to get into moving files around based on networking status and that felt a bit clunky.
So I decided to finally set up a proper local recursive DNS server, which is something I've kinda meant to do for a while but never had sufficient reason to look into. Last time I did this I did it with BIND 9, but there are more options these days, and I decided to go with unbound, which is primarily focused on recursive DNS.
One extra wrinkle, pointed out by Lars, is that having dynamic name information from DHCP hosts is exceptionally convenient. I've kept dnsmasq as the local DHCP server, so I wanted to be able to forward local queries there.
I'm doing all of this on my RB5009, running Debian. Installing unbound was a simple matter of apt install unbound. I needed two pieces of configuration over the default: one to enable recursive serving for the house networks, and one to enable forwarding of queries for the local domain to dnsmasq. I originally had specified the wildcard address for listening, but this caused problems because my router has many interfaces and would sometimes respond from a different address than the one the request had come in on.
server:
  interface: 192.0.2.1
  interface: 2001:db8:f00d::1
  access-control: 192.0.2.0/24 allow
  access-control: 2001:db8:f00d::/56 allow
server:
  domain-insecure: "example.org"
  do-not-query-localhost: no

forward-zone:
  name: "example.org"
  forward-addr: 127.0.0.1@5353
interface=lo
port=5353
dhcp-option=option6:dns-server,[2001:db8:f00d::1]
dhcp-option=option:dns-server,192.0.2.1
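With both services restarted, a quick sanity check is to query unbound directly: once for an external name (exercising recursion) and once for a name in the local domain (exercising the forward to dnsmasq). This is only a sketch using the example addresses and domain from the snippets above; the local host name queried is made up.

  sudo systemctl restart unbound dnsmasq
  # recursive resolution of an external name via unbound
  dig @192.0.2.1 debian.org A +short
  # a local DHCP-registered name, forwarded to dnsmasq on 127.0.0.1:5353
  dig @192.0.2.1 somehost.example.org A +short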
*-src.tar.gz with at least the following properties:
- *.po gettext translations, which are normally downloaded when building from version-controlled sources
- a ./bootstrap script, to set up the package so it can later be built via ./configure

Of course, some projects are not using the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

make dist that generate today's foo-1.2.3.tar.gz files.
I think one common argument against this approach will be: why bother with all that, and not just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version-controlled checkouts, referring to upstream releases as a git URL and commit tag or id. One problem with this is that SHA-1 is broken, so placing trust in a SHA-1 identifier is simply not secure. Another counter-argument is that this optimizes for packagers' benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that. Update: In my experiment with source-only tarballs for Libntlm I actually did use git-archive output.
What do you think?
apt install rustc cargo
. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or, just use rustup.
apt install rustc cargo, you will end up using Debian's compiler but upstream libraries, directly and uncurated from crates.io.
This is not what you want. There are about two reasonable things to do, depending on your preferences.
Q. Download and run whatever code from the internet?
The key question is this:
Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it?
That's what cargo does. It's one of the main things it's for. Debian's cargo behaves, in this respect, just like upstream's. Let me say that again:
Debian's cargo promiscuously downloads code from crates.io just like upstream cargo.
So if you use Debian's cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you're avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.
Debian's cargo can even download from crates.io when you're building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (Happily, if you do this, it's quite likely to bail out early due to version mismatches, before actually downloading anything.)
Option 1: WTF, no I don't want curl bash
OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.
But any upstream Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn't make it easy.)
To go with this plan, apt install rustc cargo
and put this in your configuration, in $HOME/.cargo/config.toml
:
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
This causes cargo to look in /usr/share
for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev
packages for each of your dependencies, with apt
.
This will allow you to write your own program in Rust, and build it using cargo build
.
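As a concrete illustration (the crate names here are only examples, and assume the corresponding librust packages exist in your release), pulling in a couple of dependencies and building against the local registry looks roughly like this:

  sudo apt install librust-rand-dev librust-serde-dev
  cargo build    # dependencies are resolved from /usr/share/cargo/registry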
Option 2: Biting the curl bash
bullet
If you want to build software that isn't specifically targeted at Debian's Rust, you will probably need to use packages from crates.io, not from Debian.
If you're going to do that, there is little point not using rustup to get the latest compiler. rustup's install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden.
So in this case: do run the curl bash
install rune.
Hopefully the Rust project you are trying to build has shipped a Cargo.lock
; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked
, cargo will only use those versions, which are hopefully OK.
And you can run cargo audit
to see if there are any reported vulnerabilities or problems. But you'll have to bootstrap this with cargo install --locked cargo-audit
; cargo-audit is from the RUSTSEC folks, who do care about these kinds of things, so hopefully running their code (and their dependencies) is fine. Note the --locked
, which is needed because cargo's default behaviour is wrong.
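Putting those pieces together, a cautious build of an upstream project that ships a Cargo.lock might look something like this sketch:

  cargo build --locked               # use only the versions pinned in Cargo.lock
  cargo install --locked cargo-audit
  cargo audit                        # check the pinned dependency tree for known advisories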
Privilege separation
This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.
That tool is nailing-cargo. It's not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.
OMG what a mess
Indeed. There are a large number of technical and social factors at play.
cargo itself is deeply troubling, both in principle, and in detail. I often find myself severely disappointed with its maintainers' decisions. In mitigation, much of the wider Rust upstream community does take this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results.
Debian's technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.
Sadly, last time I explored the possibility, the Debian Rust Team didn't have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo's approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.
edited 2024-03-21 21:49 to add a cut tag

Ubuntu is curated, so it probably wouldn't get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn't cause this amount of angst. If you did this in a PPA, then I can't think of any particular negative effects.

OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel. There's a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they're very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren't obviously better at helping people make reliable social judgements about code they don't know.)

For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you'd need to be an Ubuntu developer with upload rights (or to go via Debian, where you'd have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about. On the other hand, if you were inclined to try this sort of experiment, you'd almost certainly try it in a PPA, and that would trouble nobody but yourself.
Assignment FooBar/
  Student A_21100_assignsubmission_file
    graded paper.pdf
    Student A's perfectly named assignment.pdf
    Student A's perfectly named assignment.xopp
  Student B_21094_assignsubmission_file
    graded paper.pdf
    Student B's perfectly named assignment.pdf
    Student B's perfectly named assignment.xopp
  Student C_21093_assignsubmission_file
    graded paper.pdf
    Student C's perfectly named assignment.pdf
    Student C's perfectly named assignment.xopp

Before I can upload files back to Moodle, this directory needs to be copied (I have to keep the original files), cleaned of everything but the graded paper.pdf files and compressed in a ZIP.
You can see how this can quickly get tedious to do by hand. Not being a
complete tool, I often resorted to crafting a few spurious shell one-liners
each time I had to do this1. Eventually I got tired of ctrl-R
-ing my
shell history and wrote something reusable.
Behold this script! When I began writing this post, I was certain I had cheaped
out on my 2021 New Year's resolution and written it in Shell, but glory!, it
seems I used a proper scripting language instead.
#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via the
Moodle mass download function, remove everything but the final files to submit
back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()
graded paper.pdf
files, deleting all .pdf
and .xopp
files using find
and changing
graded paper.foobar
back to a PDF. Some clever regex or learning awk
from the ground up could've probably done the job as well, but you know,
that would have required using my brain and spending spoons...
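For the curious, the sort of throwaway one-liner described above might have looked something like this; it is a reconstruction rather than the actual commands, using the file names from the example directory:

  # hide the graded copies, purge everything else, then restore them
  find . -name 'graded paper.pdf' -execdir mv {} 'graded paper.foobar' \;
  find . -type f \( -name '*.pdf' -o -name '*.xopp' \) -delete
  find . -name 'graded paper.foobar' -execdir mv {} 'graded paper.pdf' \;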
Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in prrd version 0.0.6 (2024-03-06)
- The summary function has received several enhancements:
  - Extended summary is only running when failures are seen.
  - The summariseQueue function now displays an anticipated completion time and remaining duration.
  - The use of optional package foghorn has been refined, and refactored, when running summaries.
- The dequeueJobs.r scripts can receive a date argument; the date can be parsed via anydate if anytime is present.
- The enqueueJobs.r script now considers skipped packages when running 'addfailed', while ensuring selected packages are still on CRAN.
- The CI setup has been updated (twice).
- Enqueueing and dequeueing functions and scripts now support relative directories, updated documentation (#18 by Joshua Ulrich).
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
This post is a review for Computing Reviews of "Constructed truths: truth and knowledge in a post-truth world", a book published by Springer Link.

Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand's value. However, with the advent of the Internet and social media, we are witnessing what has been termed the post-truth phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities? The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the netizens and created a temporary information flow utopia. But soon afterwards, algorithmic gatekeepers started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as fake news. Fake news leads to post-truth, a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can truth be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or on their honest opinions?
Zoglauer dives into epistemology, following various thinkers' ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really own knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer's writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.
man
now restricts the system calls that
groff
can execute and the parts of the file system that it can access.
I stand by this, but it did cause some problems that have needed a
succession of small fixes over the years. This month I issued
DLA-3731-1,
backporting some of those fixes to buster.

/etc/ssh/sshd_config
. This turned out to be
resolvable without any changes, but in the process of investigating I
noticed that my dodgy arrangements to avoid
ucf prompts in certain cases
had bitrotted slightly, which meant that some people might be prompted
unnecessarily. I fixed this and arranged for it not to happen
again.

time_t
transition for now, but once that's out of the way it should flow smoothly again.

  # megasasctl
  a0       PERC H730 Mini      encl:1 ldrv:2  batt:good
  a0d0        558GiB RAID 1   1x2  optimal
  a0d1       3067GiB RAID 0   1x11 optimal
  a0e32s0     558GiB  a0d0  online   errs: media:0  other:19
  a0e32s1     279GiB  a0d1  online
  a0e32s2     279GiB  a0d1  online
  a0e32s3     279GiB  a0d1  online
  a0e32s4     279GiB  a0d1  online
  a0e32s5     279GiB  a0d1  online
  a0e32s6     279GiB  a0d1  online
  a0e32s8     558GiB  a0d0  online   errs: media:0  other:17
  a0e32s9     279GiB  a0d1  online
  a0e32s10    279GiB  a0d1  online
  a0e32s11    279GiB  a0d1  online
  a0e32s12    279GiB  a0d1  online
  a0e32s13    279GiB  a0d1  online
  #

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you too find it useful? In the packaging process I provided some patches upstream to improve installation and ensure an AppStream metainfo file is provided to list all supported HW, to allow isenkram to propose the package on all servers with a relevant PCI card. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
fdk-aac
library needed to enable this support is currently
in the non-free
component of the repository, meaning that PipeWire, which
is in the main
component, cannot depend on it.
pipewire
package to include the
AAC codec. While the current version in Debian main
has been built with AAC
deliberately disabled, it is trivial to enable if you can install a version
of the fdk-aac
library.
I preface this with the usual caveats when it comes to patent and licensing controversies. I am not a lawyer; building this package and/or using it could get you into legal trouble.
These instructions have only been tested on an up-to-date copy of Debian 12.
pipewire's build dependencies
sudo apt install build-essential devscripts
sudo apt build-dep pipewire
libfdk-aac-dev
sudo apt install libfdk-aac-dev
sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
sudo apt update
fdk-aac-free
which includes only those components
of AAC that are known to be patent-free3.
This is what should eventually end up in Debian to resolve this problem
(see below).
sudo apt install git-buildpackage
mkdir fdk-aac-source
cd fdk-aac-source
git clone https://salsa.debian.org/multimedia-team/fdk-aac
cd fdk-aac
gbp buildpackage
sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
pipewire
source code
mkdir pipewire-source
cd pipewire-source
apt source pipewire
pipewire-source
directory,
but you'll only need the pipewire-<version> folder; this contains all the files you'll need to build the package, with all the Debian-specific patches already applied.
Note that you don't want to run the apt source command as root, as it will then create files that your regular user cannot edit.
pipewire-source/aac.patch
--- debian/control.orig
+++ debian/control
@@ -40,8 +40,8 @@
modemmanager-dev,
pkg-config,
python3-docutils,
- systemd [linux-any]
-Build-Conflicts: libfdk-aac-dev
+ systemd [linux-any],
+ libfdk-aac-dev
Standards-Version: 4.6.2
Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
--- debian/rules.orig
+++ debian/rules
@@ -37,7 +37,7 @@
-Dauto_features=enabled \
-Davahi=enabled \
-Dbluez5-backend-native-mm=enabled \
- -Dbluez5-codec-aac=disabled \
+ -Dbluez5-codec-aac=enabled \
-Dbluez5-codec-aptx=enabled \
-Dbluez5-codec-lc3=enabled \
-Dbluez5-codec-lc3plus=disabled \
patch
from within the pipewire-<version>
folder
created by apt source
:
patch -p0 < ../aac.patch
pipewire
cd pipewire-*
debuild
debsign
at the end of this process; this is harmless, you simply don't have a GPG key set up to sign your newly-built package4. Packages don't need to be signed to be installed, and debsign uses a somewhat non-standard signing process that dpkg does not check anyway.
libspa-0.2-bluetooth
sudo dpkg -i libspa-0.2-bluetooth_*.deb
sudo reboot
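To double-check that your locally built package is the one actually installed (rather than the stock Debian build), a plain dpkg query is enough; this step is not part of the original instructions, just a sanity check:

  dpkg -s libspa-0.2-bluetooth | grep -E '^(Status|Version)'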
fdk-aac
library is licensed
under what
even the GNU project
acknowledges is a free software license.
However, this license
explicitly informs the user that they need to acquire
a patent license to use this software5:
3. NO PATENT LICENSE NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE. Fraunhofer provides no warranty of patent non-infringement with respect to this software. You may use this FDK AAC Codec software or modifications thereto only for purposes that are authorized by appropriate patent licenses.

To quote the GNU project:
Because of this, and because the license author is a known patent aggressor, we encourage you to be careful about using or redistributing software under this license: you should first consider whether the licensor might aim to lure you into patent infringement.

AAC is covered by a number of patents, which expire at some point in the 2030s6. As such the current version of the library is potentially legally dubious to ship with any other software, as it could be considered patent-infringing3.
fdk-aac
package be moved to the main component
and that the
pipewire
package be updated to build against it.
fdk-aac-free
has been uploaded to Debian
by Jeremy Bicha.
However, to make it into Debian proper, it must first pass through the
ftpmaster's NEW queue.
The current version of fdk-aac-free
has been in the NEW queue since July 2023.
Based on conversations in some of the bugs above, it's been there since at least 20227.
I hope this helps anyone stuck with AAC to get their hardware working for them
while we wait for the package to eventually make it through the NEW queue.
Discuss on Hacker News
debsign
you almost certainly don't need these instructions.
This post is a review for Computing Reviews of "10 things software developers should learn about learning", an article published in Communications of the ACM.

As software developers, we understand the detailed workings of the different components of our computer systems. And, probably due to how computers were presented since their appearance as digital brains in the 1940s, we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and start coding. The article's narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, "Human Memory Is Not Made of Bits", explains the brain processes of remembering as a way of strengthening the force of a memory ("reconsolidation") and the role of activation in related network pathways. The second section, "Human Memory Is Composed of One Limited and One Unlimited System", goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, "The Internet Has Not Made Learning Obsolete", emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner's productivity as training wheels will often hamper the expert user's as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like common sense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.
Series: | Murderbot Diaries #7 |
Publisher: | Tordotcom |
Copyright: | 2023 |
ISBN: | 1-250-82698-5 |
Format: | Kindle |
Pages: | 245 |
ART-drone said, "I wouldn't recommend it. I lack a sense of proportional response. I don't advise engaging with me on any level."

Saying much about the plot of this book without spoiling Network Effect and the rest of the series is challenging. Murderbot is suffering from the aftereffects of the events of the previous book more than it expected or would like to admit. It and its humans are in the middle of a complicated multi-way negotiation with some locals, who the corporates are trying to exploit. One of the difficulties in that negotiation is getting people to believe that the corporations are as evil as they actually are, a plot element that has a depressing amount in common with current politics. Meanwhile, Murderbot is trying to keep everyone alive.

I loved Network Effect, but that was primarily for the social dynamics. The planet that was central to the novel was less interesting, so another (short) novel about the same planet was a bit of a disappointment. This does give Wells a chance to show in more detail what Murderbot's new allies have been up to, but there is a lot of speculative exploration and detailed descriptions of underground tunnels that I found less compelling than the relationship dynamics of the previous book. (Murderbot, on the other hand, would much prefer exploring creepy abandoned tunnels to talking about its feelings.)

One of the things this series continues to do incredibly well, though, is take non-human intelligence seriously in a world where the humans mostly don't. It perfectly fills a gap between Star Wars, where neither the humans nor the story take non-human intelligences seriously (hence the creepy slavery vibes as soon as you start paying attention to droids), and the Culture, where both humans and the story do. The corporates (the bad guys in this series) treat non-human intelligences the way Star Wars treats droids. The good guys treat Murderbot mostly like a strange human, which is better but still wrong, and still don't notice the numerous other machine intelligences. But Wells, as the author, takes all of the non-human characters seriously, which means there are complex and fascinating relationships happening at a level of the story that the human characters are mostly unaware of. I love that Murderbot rarely bothers to explain; if the humans are too blinkered to notice, that's their problem.

About halfway into the story, System Collapse hits its stride, not coincidentally at the point where Murderbot befriends some new computers. The rest of the book is great.

This was not as good as Network Effect. There is a bit less competence porn at the start, and although that's for good in-story reasons I still missed it. Murderbot's redaction of things it doesn't want to talk about got a bit annoying before it finally resolved. And I was not sufficiently interested in this planet to want to spend two novels on it, at least without another major revelation that didn't come. But it's still a Murderbot novel, which means it has the best first-person narrative voice I've ever read, some great moments, and possibly the most compelling and varied presentation of computer intelligence in science fiction at the moment.
"There was no feed ID, but AdaCol2 supplied the name Lucia and when I asked it for more info, the gender signifier bb (which didn't translate) and he/him pronouns. (I asked because the humans would bug me for the information; I was as indifferent to human gender as it was possible to be without being unconscious.)"

This is not a series to read out of order, but if you have read this far, you will continue to be entertained. You don't need me to tell you this (nearly everyone reviewing science fiction is saying it), but this series is great and you should read it.

Rating: 8 out of 10
/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Organization Validation Secure Server CA
/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure Certificate Authority - G2
/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA
/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020

Rather than try to work with raw issuers (because, as Andrew Ayer says, The SSL Certificate Issuer Field is a Lie), I mapped these issuers to the organisations that manage them, and summed the counts for those grouped issuers together.
Issuer | Compromised Count |
---|---|
Sectigo | 170 |
ISRG (Let's Encrypt) | 161 |
GoDaddy | 141 |
DigiCert | 81 |
GlobalSign | 46 |
Entrust | 3 |
SSL.com | 1 |
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
Entrust | 23,166 | 3 | 1 in 7,722 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Entrust | 23,166 | 3 | 1 in 7,722 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
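The compromise rate column is just the issuance volume divided by the compromised count; as a quick spot check against the table above (integer arithmetic, so results are rounded down):

  echo $(( 315476402 / 161 ))   # ISRG (Let's Encrypt): ~1 in 1,959,480
  echo $(( 88323068 / 170 ))    # Sectigo: ~1 in 519,547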
SELECT SUM(sub.NUM_ISSUED[2] - sub.NUM_EXPIRED[2])
  FROM (
    SELECT ca.name,
           max(coalesce(coalesce(nullif(trim(cc.SUBORDINATE_CA_OWNER), ''), nullif(trim(cc.CA_OWNER), '')), cc.INCLUDED_CERTIFICATE_OWNER)) as OWNER,
           ca.NUM_ISSUED,
           ca.NUM_EXPIRED
      FROM ccadb_certificate cc, ca_certificate cac, ca
     WHERE cc.CERTIFICATE_ID = cac.CERTIFICATE_ID
       AND cac.CA_ID = ca.ID
     GROUP BY ca.ID
  ) sub
 WHERE sub.name ILIKE '%Amazon%' OR sub.name ILIKE '%CloudFlare%'
   AND sub.owner = 'DigiCert';

The number I get from running that query is 104,316,112, which should be subtracted from DigiCert's total issuance figures to get a more accurate view of what DigiCert's regular customers do with their private keys. When I do this, the compromise rates table, sorted by the compromise rate, looks like this:
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Entrust | 23,166 | 3 | 1 in 7,722 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
"Regular" DigiCert | 40,397,363 | 81 | 1 in 498,732 |
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
All DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
The less humans have to do with certificate issuance, the less likely they are to compromise that certificate by exposing the private key. While it may not be surprising, it is nice to have some empirical evidence to back up the common wisdom.

Fully-managed TLS providers, such as CloudFlare, AWS Certificate Manager, and whatever Azure's thing is called, are the platonic ideal of this principle: never give humans any opportunity to expose a private key. I'm not saying you should use one of these providers, but the security approach they have adopted appears to be the optimal one, and should be emulated universally.

The ACME protocol is the next best, in that there are a variety of standardised tools widely available that allow humans to take themselves out of the loop, but it's still possible for humans to handle (and mistakenly expose) key material if they try hard enough.

Legacy issuance methods, which either cannot be automated, or require custom, per-provider automation to be developed, appear to be at least four times less helpful to the goal of avoiding compromise of the private key associated with a certificate.
So these "simple" files have way too many combinations of how they can be interpreted. I figured it would be helpful if debputy could highlight these difference, so I added support for those as well. Accordingly, debian/install is tagged with multiple tags including dh-executable-config and dh-glob-after-execute. Then, I added a datatable of these tags, so it would be easy for people to look up what they meant. Ok, this seems like a closed deal, right...?
- Will the debhelper use filearray, filedoublearray or none of them to read the file? This topic has about 2 bits of entropy.
- Will the config file be executed if it is marked executable, assuming you are using the right compat level? If it is executable, does dh-exec allow renaming for this file? This topic adds 1 or 2 bits of entropy depending on the context.
- Will the config file be subject to glob expansions? This topic sounds like a boolean but is a complicated mess. The globs can be handled by debhelper as it parses the file for you. In this case, the globs are applied to every token. However, this is not what dh_install does. Here the last token on each line is supposed to be a directory and therefore not subject to globs. Therefore, dh_install does the globbing itself afterwards, but only on part of the tokens. So that is about 2 bits of entropy more. Actually, it gets worse...
- If the file is executed, debhelper will refuse to expand globs in the output of the command, which was a deliberate design choice the original debhelper maintainer made when he introduced the feature in debhelper/8.9.12. Except the dh_install feature interacts with that design choice and does enable glob expansion in the tool output, because it does so manually after its filedoublearray call.
You can help yourself and others to better results by using the declarative way rather than using debian/rules, which is the bane of all introspection!
- When determining which commands are relevant, using Build-Depends: dh-sequence-foo is much more reliable than configuring it via the Turing-complete configuration we call debian/rules.
- When debhelper commands use NOOP promise hints, dh_assistant can "see" the config files listed in those hints, meaning the file will at least be detected. For the new introspectable hint and the debputy plugin, it is probably better to wait until the dust settles a bit before adding any of those.
hz.tools
will be tagged
#hztools.
cos
and sin
of the multiplied phase (in the range of 0 to tau), assuming
the transmitter is emitting a carrier wave at a static amplitude and all
clocks are in perfect sync.
let observed_phases: Vec<Complex> = antennas
    .iter()
    .map(|antenna| {
        let distance = (antenna - tx).magnitude();
        let distance = distance - (distance as i64 as f64);
        (distance / wavelength) * TAU
    })
    .map(|phase| Complex(phase.cos(), phase.sin()))
    .collect();

let beamformed_phases: Vec<Complex> = ...;
let magnitude = beamformed_phases
    .iter()
    .zip(observed_phases.iter())
    .map(|(beamformed, observed)| observed * beamformed)
    .reduce(|acc, el| acc + el)
    .unwrap()
    .abs();
(x, y, z)
point at
(azimuth, elevation, magnitude)
. The color attached to that point is
based on its distance from (0, 0, 0)
. I opted to use the
Life Aquatic
table for this one.
After this process is complete, I have a
point cloud of
((x, y, z), (r, g, b))
points. I wrote a small program using
kiss3d to render the point cloud using tons of
small spheres, and write out the frames to a set of PNGs, which get compiled
into a GIF.
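The post does not name the tool used for that last step; one common way to turn a directory of numbered frames into a GIF, assuming ImageMagick is installed and the frames are named frame-0000.png and so on, is:

  convert -delay 5 -loop 0 frame-*.png pattern.gif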
Now for the fun part, let's take a look at some radiation patterns!
y
and z
axis, and separated by some
offset in the x
axis. This configuration can sweep 180 degrees (not
the full 360), but can't be steered in elevation at all.
Let's take a look at what this looks like for a well constructed
1x4 phased array:
And now let's take a look at the renders as we play with the configuration of
this array and make sure things look right. Our initial quarter-wavelength
spacing is very effective and has some outstanding performance characteristics.
Let's check to see that everything looks right as a first test.
Nice. Looks perfect. When pointing forward at (0, 0)
, we'd expect to see a
torus, which we do. As we sweep between 0 and 360, astute observers will notice
the pattern is mirrored along the axis of the antennas: when the beam is facing
forward to 0 degrees, it'll also receive at 180 degrees just as strong. There's
a small sidelobe that forms when it's configured along the array, but
it also becomes the most directional, and the sidelobes remain fairly small.
z
axis, and separated by a fixed offset
in either the x
or y
axis by their neighbor, forming a square when
viewed along the x/y axis.
Let's take a look at what this looks like for a well constructed
2x2 phased array:
Let's do the same as above and take a look at the renders as we play with the
configuration of this array and see what things look like. This configuration
should suppress the sidelobes and give us good performance, and even give us
some amount of control in elevation while we're at it.
Sweet. Heck yeah. The array is quite directional in the configured direction,
and can even sweep a little bit in elevation, a definite improvement
from the 1x4 above.
Next.