remote: fatal: pack exceeds maximum allowed size (4.88 GiB)
However, breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:
git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/*; do
  echo $d
  time git add $d
  git commit -m"Add $d." -a
  git push
done
The resulting repository size is around 27MB, with Git LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases and all architectures may lead to a too-large git repository (>1GB). So maybe one repository per release? These repositories could also be split up over subsets of the pool/ files, or there could be one repository per release per architecture, or per source packages.
Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian's snapshot service currently use SHA1. For Git there is the SHA-256 transition, and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256, but Git internally uses SHA1, as does the Debian snapshot service.
What do you think? Happy Hacking!
Publisher: | St. Martin's Press |
Copyright: | 2023 |
ISBN: | 1-250-27694-2 |
Format: | Kindle |
Pages: | 310 |
A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks." Uh-huh.

Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later.

The center of this book is the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better." If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:
In this talk Holger "h01ger" Levsen will give an overview about Reproducible Builds: how it started with a small BoF at DebConf13 (and before), then grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an Executive Order of the President of the United States. And of course, the talk will not end there, but rather outline where we are today and where we still need to be going, until Debian stable (and other distros!) will be 100% reproducible, verified by many. h01ger has been involved in reproducible builds since 2014 and so far has set up automated reproducibility testing for Debian, Fedora, Arch Linux, FreeBSD, NetBSD and coreboot.

More information can be found on FOSDEM's own page for the talk, including a video recording and slides.
Courtesy of my CRANberries, there is also a report with diffstat for this release. Questions, comments, issue tickets can be brought to the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in version 0.1.2 (2024-01-31)
- Update the one exported C-level identifier from data.table following its 1.15.0 release and a renaming
- Routine continuous integration updates
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Organization Validation Secure Server CA
/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure Certificate Authority - G2
/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA
/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020

Rather than try to work with raw issuers (because, as Andrew Ayer says, "The SSL Certificate Issuer Field is a Lie"), I mapped these issuers to the organisations that manage them, and summed the counts for those grouped issuers together.
Issuer | Compromised Count |
---|---|
Sectigo | 170 |
ISRG (Let's Encrypt) | 161 |
GoDaddy | 141 |
DigiCert | 81 |
GlobalSign | 46 |
Entrust | 3 |
SSL.com | 1 |
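The grouping itself is mechanical. Below is a rough sketch of the idea in Python with an illustrative (not exhaustive) issuer-to-organisation map; the entries are my own reading of the issuer names above, and the ZeroSSL-to-Sectigo entry in particular is an assumption rather than something stated in the post.

from collections import Counter
from typing import Dict, Iterable

# Illustrative map from a distinguishing substring of the issuer DN to the
# organisation that operates the CA. Not the full mapping used for the table.
ISSUER_TO_ORG: Dict[str, str] = {
    "Sectigo": "Sectigo",
    "ZeroSSL": "Sectigo",                    # assumption: ZeroSSL CAs are operated by Sectigo
    "GlobalSign": "GlobalSign",
    "AlphaSSL": "GlobalSign",                # AlphaSSL is a GlobalSign brand
    "Go Daddy": "GoDaddy",
    "Starfield": "GoDaddy",                  # Starfield is a GoDaddy brand
    "Let's Encrypt": "ISRG (Let's Encrypt)",
}

def organisation_for(issuer_dn: str) -> str:
    """Return the organisation behind an issuer DN, falling back to the raw DN."""
    for needle, org in ISSUER_TO_ORG.items():
        if needle in issuer_dn:
            return org
    return issuer_dn

def grouped_counts(compromised_issuer_dns: Iterable[str]) -> Counter:
    """Sum compromised-certificate counts per operating organisation."""
    return Counter(organisation_for(dn) for dn in compromised_issuer_dns)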
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
Entrust | 23,166 | 3 | 1 in 7,722 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Entrust | 23,166 | 3 | 1 in 7,722 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
SELECT SUM(sub.NUM_ISSUED[2] - sub.NUM_EXPIRED[2])
FROM (
  SELECT ca.name,
         max(coalesce(coalesce(nullif(trim(cc.SUBORDINATE_CA_OWNER), ''), nullif(trim(cc.CA_OWNER), '')), cc.INCLUDED_CERTIFICATE_OWNER)) as OWNER,
         ca.NUM_ISSUED,
         ca.NUM_EXPIRED
  FROM ccadb_certificate cc, ca_certificate cac, ca
  WHERE cc.CERTIFICATE_ID = cac.CERTIFICATE_ID
    AND cac.CA_ID = ca.ID
  GROUP BY ca.ID
) sub
WHERE sub.name ILIKE '%Amazon%' OR sub.name ILIKE '%CloudFlare%' AND sub.owner = 'DigiCert';

The number I get from running that query is 104,316,112, which should be subtracted from DigiCert's total issuance figures to get a more accurate view of what DigiCert's regular customers do with their private keys. When I do this, the compromise rates table, sorted by the compromise rate, looks like this:
Issuer | Issuance Volume | Compromised Count | Compromise Rate |
---|---|---|---|
Entrust | 23,166 | 3 | 1 in 7,722 |
GlobalSign | 1,438,485 | 46 | 1 in 31,271 |
SSL.com | 171,816 | 1 | 1 in 171,816 |
GoDaddy | 56,121,429 | 141 | 1 in 398,024 |
"Regular" DigiCert | 40,397,363 | 81 | 1 in 498,732 |
Sectigo | 88,323,068 | 170 | 1 in 519,547 |
All DigiCert | 144,713,475 | 81 | 1 in 1,786,586 |
ISRG (Let's Encrypt) | 315,476,402 | 161 | 1 in 1,959,480 |
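As a quick arithmetic check of the "Regular" DigiCert row, using only the figures quoted above (the query result and the All DigiCert row):

# Reproduce the "Regular" DigiCert row from the figures quoted above.
all_digicert_issuance = 144_713_475
amazon_cloudflare_issuance = 104_316_112   # result of the crt.sh query above
compromised = 81

regular_issuance = all_digicert_issuance - amazon_cloudflare_issuance
print(regular_issuance)                             # 40397363
print(f"1 in {regular_issuance // compromised:,}")  # 1 in 498,732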
The less humans have to do with certificate issuance, the less likely they are to compromise that certificate by exposing the private key. While it may not be surprising, it is nice to have some empirical evidence to back up the common wisdom. Fully-managed TLS providers, such as CloudFlare, AWS Certificate Manager, and whatever Azure's thing is called, are the platonic ideal of this principle: never give humans any opportunity to expose a private key. I'm not saying you should use one of these providers, but the security approach they have adopted appears to be the optimal one, and should be emulated universally. The ACME protocol is the next best, in that there are a variety of standardised tools widely available that allow humans to take themselves out of the loop, but it's still possible for humans to handle (and mistakenly expose) key material if they try hard enough. Legacy issuance methods, which either cannot be automated or require custom, per-provider automation to be developed, appear to be at least four times less helpful to the goal of avoiding compromise of the private key associated with a certificate.
manifest-version: "0.1"
installations:
- install:
    source: foo
    dest-dir: usr/bin
"manifest-version": "0.1",
"installations": [
"install":
"source": "foo",
"dest-dir": "usr/bin",
]
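For contrast, here is a rough sketch of the kind of hand-written parsing and validation a plugin provider would otherwise need for just this tiny fragment. It assumes PyYAML and is illustrative only, not debputy code.

import yaml

def parse_manifest(path: str) -> list:
    """Hand-rolled parsing/validation of the small manifest shown above."""
    with open(path) as fd:
        data = yaml.safe_load(fd)
    if data.get("manifest-version") != "0.1":
        raise ValueError("unsupported or missing manifest-version")
    installations = data.get("installations")
    if not isinstance(installations, list):
        raise ValueError('"installations" must be a list')
    rules = []
    for idx, entry in enumerate(installations):
        install = entry.get("install") if isinstance(entry, dict) else None
        if not isinstance(install, dict):
            raise ValueError(f"installations[{idx}] must contain an install mapping")
        source = install.get("source")
        if not isinstance(source, str):
            raise ValueError(f"installations[{idx}].install.source must be a string")
        rules.append({"source": source, "dest-dir": install.get("dest-dir")})
    return rules

Multiply that by every rule and every plugin, and the appeal of generating the parser from a declaration becomes obvious.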
So beyond me being unsatisfied with the current situation, it was also clear to me that I needed to come up with a better solution if I wanted externally provided plugins for debputy. To put a bit more perspective on what I expected from the end result:
- Most plugins would probably throw KeyError: X or ValueError style errors for quite a while. Worst case, they would end up on my table because the user would have a hard time telling where debputy ends and where the plugin starts. "Best" case, I would teach debputy to say "This poor error message was brought to you by plugin foo. Go complain to them". Either way, it would be a bad user experience.
- This even assumes plugin providers would actually bother writing manifest parsing code. If it is that difficult, then just providing a custom file in debian/ might tempt plugin providers, and that would undermine the idea of having the manifest be the sole input for debputy.
- It had to cover as many parsing errors as possible. Any error case this code would handle for you would be an error where I could ensure a sufficient degree of detail and context for the user.
- It should be type-safe / provide typing support such that IDEs/mypy could help you when you work on the parsed result.
- It had to support "normalization" of the input, such as
# User provides
- install: "foo"
# Which is normalized into:
- install:
    source: "foo"
- It must be simple to tell debputy what input you expected.
# Example input matching this typed dict.
#
# "sources": ["foo"]
# "into": ["pkg"]
#
class InstallExamplesTargetFormat(TypedDict):
    # Which source files to install (dest-dir is fixed)
    sources: List[str]
    # Which package(s) that should have these files installed.
    into: NotRequired[List[str]]
# Example input matching this typed dict.
#
# "source": "foo"
# "into": "pkg"
#
#
class InstallExamplesManifestFormat(TypedDict):
    # Note that sources here is split into source (str) vs. sources (List[str])
    sources: NotRequired[List[str]]
    source: NotRequired[str]
    # We allow the user to write into: foo in addition to into: [foo]
    into: Union[str, List[str]]
FullInstallExamplesManifestFormat = Union[
InstallExamplesManifestFormat,
List[str],
str,
]
def _handler(
    normalized_form: InstallExamplesTargetFormat,
) -> InstallRule:
    ...  # Do something with the normalized form and return an InstallRule.
concept_debputy_api.add_install_rule(
keyword="install-examples",
target_form=InstallExamplesTargetFormat,
manifest_form=FullInstallExamplesManifestFormat,
handler=_handler,
)
This prototype did not take long (I remember it being within a day) and worked surprisingly well, though with some poor error messages here and there. Now came the first challenge: adding the manifest format schema plus relevant normalization rules. The very first normalization I did was transforming into: Union[str, List[str]] into into: List[str]. At that time, source was not a separate attribute. Instead, sources was a Union[str, List[str]], so it was the only normalization I needed for all my use-cases at the time. There are two problems when writing a normalization. The first is determining what the "source" type is, what the target type is, and how they relate. The second is providing a runtime rule for normalizing from the manifest format into the target format. Keeping it simple, the runtime normalizer for Union[str, List[str]] -> List[str] was written as shown below.
- Read TypedDict.__required_keys__ and TypedDict.__optional_keys__ to determine which attributes were defined and which were required. This was used for reporting errors when the input did not match.
- Read the types of the provided TypedDict, strip the Required / NotRequired markers, and use basic isinstance checks based on the resulting type for str and List[str]. Again, used for reporting errors when the input did not match. (A small runnable sketch of this introspection appears a bit further below.)
def normalize_into_list(x: Union[str, List[str]]) -> List[str]:
    return x if isinstance(x, list) else [x]
# Map from
class InstallExamplesManifestFormat(TypedDict):
    # Note that sources here is split into source (str) vs. sources (List[str])
    sources: NotRequired[List[str]]
    source: NotRequired[str]
    # We allow the user to write into: foo in addition to into: [foo]
    into: Union[str, List[str]]
# ... into
class InstallExamplesTargetFormat(TypedDict):
    # Which source files to install (dest-dir is fixed)
    sources: List[str]
    # Which package(s) that should have these files installed.
    into: NotRequired[List[str]]
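The introspection described in the two bullets above can be done with the typing module directly. Here is a small sketch of the idea (not debputy's actual code; it only knows about str and List[str], and it needs Python 3.11+ or typing_extensions for NotRequired):

from typing import List, NotRequired, TypedDict, get_args, get_origin, get_type_hints


class InstallExamplesTargetFormat(TypedDict):
    sources: List[str]
    into: NotRequired[List[str]]


def validate(data: dict, td=InstallExamplesTargetFormat) -> None:
    """Tiny validator: check required/unknown keys, then str vs. List[str] types."""
    missing = td.__required_keys__ - data.keys()
    unknown = data.keys() - (td.__required_keys__ | td.__optional_keys__)
    if missing:
        raise ValueError(f"missing required attribute(s): {sorted(missing)}")
    if unknown:
        raise ValueError(f"unknown attribute(s): {sorted(unknown)}")
    for attr, hint in get_type_hints(td).items():
        if attr not in data:
            continue
        if get_origin(hint) is NotRequired:  # strip the marker if still present
            (hint,) = get_args(hint)
        value = data[attr]
        if hint is str:
            ok = isinstance(value, str)
        elif get_origin(hint) is list:
            ok = isinstance(value, list) and all(isinstance(v, str) for v in value)
        else:
            ok = True  # this sketch only cares about str and List[str]
        if not ok:
            raise ValueError(f"attribute {attr!r} should be a {hint}")


validate({"sources": ["foo"], "into": ["pkg"]})  # passes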
While working on all of this type introspection for Python, I had noted the Annotated[X, ...] type. It is basically a fake type that enables you to attach metadata into the type system. A very random example is shown below.
- How will the parser generator understand that source should be normalized and then mapped into sources?
- Once that is solved, the parser generator has to understand that while source and sources are declared as NotRequired, they are part of an "exactly one of" rule (since sources in the target form is required). This mainly came down to extra bookkeeping and an extra layer of validation once the previous step is solved.
# For all intents and purposes, foo is a string despite all the Annotated stuff.
foo: Annotated[str, "hello world"] = "my string here"
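For completeness, here is how such metadata can actually be retrieved at runtime; get_type_hints(..., include_extras=True) keeps the Annotated payload. This is a small sketch of the mechanism a parser generator could use, not debputy's implementation:

from typing import Annotated, get_args, get_type_hints


class Example:
    foo: Annotated[str, "hello world"] = "my string here"


hints = get_type_hints(Example, include_extras=True)
base_type, *metadata = get_args(hints["foo"])
print(base_type)  # <class 'str'>
print(metadata)   # ['hello world']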
# Map from
#
# "source": "foo"  # (or "sources": ["foo"])
# "into": "pkg"
#
class InstallExamplesManifestFormat(TypedDict):
    # Note that sources here is split into source (str) vs. sources (List[str])
    sources: NotRequired[List[str]]
    source: NotRequired[
        Annotated[
            str,
            DebputyParseHint.target_attribute("sources")
        ]
    ]
    # We allow the user to write into: foo in addition to into: [foo]
    into: Union[str, List[str]]
# ... into
#
# "source": ["foo"]
# "into": ["pkg"]
#
class InstallExamplesTargetFormat(TypedDict):
    # Which source files to install (dest-dir is fixed)
    sources: List[str]
    # Which package(s) that should have these files installed.
    into: NotRequired[List[str]]
It was not super difficult given the existing infrastructure, but it did take some hours of coding and debugging. Additionally, I added a parse hint to support making the into conditional based on whether it was a single binary package. With this done, you could now write something like the example below. Supporting BinaryPackage as a type required:
- A conversion table where the parser generator can tell that BinaryPackage requires an input of str and a callback to map from str to BinaryPackage. (That is probably a lie. I think the conversion table came later, but honestly I do not remember and I am not digging into the git history for this one.)
- At runtime, said callback needed access to the list of known packages, so it could resolve the provided string.
# Map from
class InstallExamplesManifestFormat(TypedDict):
    # Note that sources here is split into source (str) vs. sources (List[str])
    sources: NotRequired[List[str]]
    source: NotRequired[
        Annotated[
            str,
            DebputyParseHint.target_attribute("sources")
        ]
    ]
    # We allow the user to write into: foo in addition to into: [foo]
    into: Union[BinaryPackage, List[BinaryPackage]]
# ... into
class InstallExamplesTargetFormat(TypedDict):
    # Which source files to install (dest-dir is fixed)
    sources: List[str]
    # Which package(s) that should have these files installed.
    into: NotRequired[
        Annotated[
            List[BinaryPackage],
            DebputyParseHint.required_when_multi_binary()
        ]
    ]
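For illustration, a conversion-table entry of the kind described in the bullets above could look roughly like this. The names (BinaryPackage stand-in, str_to_binary_package, CONVERSIONS) are my own and not debputy's API:

from dataclasses import dataclass
from typing import Dict, Mapping


@dataclass(frozen=True)
class BinaryPackage:
    """Stand-in for debputy's real binary package object."""
    name: str


def str_to_binary_package(value: str, known_packages: Mapping[str, BinaryPackage]) -> BinaryPackage:
    """Resolve a package name from the manifest against the packages in debian/control."""
    try:
        return known_packages[value]
    except KeyError:
        raise ValueError(f"unknown binary package: {value!r}") from None


# Conversion table: target type -> (expected input type, converter callback).
CONVERSIONS: Dict[type, tuple] = {
    BinaryPackage: (str, str_to_binary_package),
}

The callback takes the list of known packages as a second argument, which is how the runtime context mentioned above gets threaded in.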
In this way everybody wins. Yes, writing this parser generator code was more enjoyable than writing the ad-hoc manual parsers it replaced. :)
- The parser engine handles most of the error reporting meaning users get most of the errors in a standard format without the plugin provider having to spend any effort on it. There will be some effort in more complex cases. But the common cases are done for you.
- It is easy to provide flexibility to users while avoiding having to write code to normalize the user input into a simplified programmer oriented format.
- The parser handles mapping from basic types into higher forms for you. These days, we have high-level types like FileSystemMode (either an octal or a symbolic mode), different kinds of file system matches depending on whether globs should be performed, etc. These types include their own validation and parsing rules that debputy handles for you (a rough sketch of the idea follows this list).
- Introspection and support for providing online reference documentation. Also, debputy checks that the provided attribute documentation covers all the attributes in the manifest form. If you add a new attribute, debputy will remind you if you forget to document it as well. :)
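As an illustration of the "high level types" point above, here is a rough sketch of an octal-or-symbolic file mode type. This is not debputy's real FileSystemMode, just the parse-and-validate idea behind it:

import re
from dataclasses import dataclass
from typing import Optional

# Accepts chmod-style symbolic modes such as "u+x" or "u+x,go=r".
_SYMBOLIC_MODE = re.compile(r"[ugoa]*[+=-][rwxXst]*(,[ugoa]*[+=-][rwxXst]*)*")


@dataclass(frozen=True)
class FileSystemMode:
    """Either a fixed octal mode or a symbolic mode applied relative to a base mode."""
    octal: Optional[int] = None
    symbolic: Optional[str] = None

    @classmethod
    def parse(cls, value: str) -> "FileSystemMode":
        if re.fullmatch(r"0?[0-7]{3,4}", value):
            return cls(octal=int(value, 8))
        if _SYMBOLIC_MODE.fullmatch(value):
            return cls(symbolic=value)
        raise ValueError(f"not a valid octal or symbolic mode: {value!r}")


print(FileSystemMode.parse("0644"))      # FileSystemMode(octal=420, symbolic=None)
print(FileSystemMode.parse("u+x,go=r"))  # FileSystemMode(octal=None, symbolic='u+x,go=r')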
auto end0
iface end0 inet dhcp

Then to get sshd to start you have to run the following commands to generate ssh host keys that aren't zero bytes long:
rm /etc/ssh/ssh_host_*
systemctl restart ssh.service

It appears to have wifi hardware but the OS doesn't recognise it. This isn't a priority for me as I mostly want to use it as a server.

Performance

For the first test of performance I created a 100MB file from /dev/urandom and then tried compressing it on various systems. With zstd -9 it took 16.893 user seconds on the LicheePi4A, 0.428s on my Thinkpad X1 Carbon Gen5 with an i5-6300U CPU (Debian/Unstable), 1.288s on my E5-2696 v3 workstation (Debian/Bookworm), 0.467s on the E5-2696 v3 running Debian/Unstable, 2.067s on a E3-1271 v3 server, and 7.179s on the E3-1271 v3 system emulating a RISC-V system via QEMU running Debian/Unstable. It's very impressive that the QEMU emulation is fast enough that emulating a different CPU architecture is only 3.5* slower for this test (or maybe 10* slower if it was running Debian/Unstable on the AMD64 code)! The emulated RISC-V is also more than twice as fast as real RISC-V hardware and probably of comparable speed to real RISC-V hardware when running the same versions (and might be slightly slower if running the same version of zstd), which is a tribute to the quality of emulation.

One performance issue that most people don't notice is the time taken to negotiate ssh sessions. It's usually not noticed because the common CPUs have got faster at about the same rate as the algorithms for encryption and authentication have become more complex. On my i5-6300U laptop it takes 0m0.384s to run ssh -i ~/.ssh/id_ed25519 localhost id with the below server settings (taken from advice on ssh-audit.com [3] for a secure ssh configuration). On the E3-1271 v3 server it is 0.336s, on the QEMU system it is 28.022s, and on the LicheePi it is 0.592s. By this metric the LicheePi is about 80% slower than decent x86 systems and the QEMU emulation of RISC-V is 73* slower than the x86 system it runs on. Does crypto depend on instructions that are difficult to emulate?
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms -ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha256
MACs -umac-64-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1

I haven't yet tested the performance of Ethernet (what routing speed can you get through the 2 gigabit ports?), emmc storage, and USB. At the moment I've been focused on using RISC-V as a test and development platform. My conclusion is that I'm glad I don't plan to compile many kernels or anything large like LibreOffice, but that for typical development that I do it will be quite adequate. The speed of Chromium seems adequate in basic tests, but the video output hasn't worked reliably enough to do advanced tests.

Hardware Features

Having two Gigabit Ethernet ports, 4 USB-3 ports, and Wifi on board gives some great options for using this as a router. It's disappointing that they didn't go with 2.5Gbit as everyone seems to be doing that nowadays, but Gigabit is enough for most things. Having only a single HDMI port and not supporting USB-C docks (the USB-C port appears to be power only) limits what can be done for workstation use and for controlling displays. I know of people using small ARM computers attached to the back of large TVs for advertising purposes and that isn't going to be a great option for this. The CPU and RAM apparently use a lot of power (which is relative; the entire system draws up to 2A at 5V, so the CPU would be something below 5W). To get this working a cooling fan has to be stuck to the CPU and RAM chips via a layer of thermal stuff that resembles a fine sheet of blu-tack in both color and stickiness. I am disappointed that there isn't any more solid form of construction; to mount this on a wall or ceiling some extra hardware would be needed to secure it. Also if they just had a really big copper heatsink I think that would be better. 80386 CPUs with similar TDP were able to run without a fan. I wonder how things would work with all USB ports in use. It's expected that a USB port can supply a minimum of 2.5W, which means that all the ports could require 10W if they were active. Presumably something significantly less than 5W is available for the USB ports.

Other Devices

Sipheed has a range of other devices in the works. They currently sell the LicheeCluster4A which supports 7 compute modules for a cluster in a box. This has some interesting potential for testing and demonstrating cluster software but you could probably buy an AMD64 system with more compute power for less money. The Lichee Console 4A is a tiny laptop which could be useful for people who like the 7" laptop form factor; unfortunately it only has a 1280*800 display, and if it had the same resolution display as a typical 7" phone I would have bought one. The next device that appeals to me is the soon to be released Lichee Pad 4A which is a 10.1" tablet with 1920*1200 display, Wifi6, Bluetooth 5.4, and 16G of RAM. It also has 1 USB-C connection, 2*USB-3 sockets, and support for an external card with 2*Gigabit ethernet. It's a tablet as a laptop without keyboard instead of the more common larger phone design model. They are also about to release the LicheePadMax4A which is similar to the other tablet but with a 14" 2240*1400 display and which ships with a keyboard to make it essentially a laptop with detachable keyboard.
Conclusion

At this time I wouldn't recommend that this device be used as a workstation or laptop, although the people who want to do such things will probably do it anyway regardless of my recommendations. I think it will be very useful as a test system for RISC-V development. I have some friends who are interested in this sort of thing and I can give them VMs. It is a bit expensive. The Sipheed web site boasts about the LicheePi4 being faster than the RaspberryPi4, but it's not a lot faster and the RaspberryPi4 is much cheaper ($127 or $129 for one with 8G of RAM). The RaspberryPi4 has two HDMI ports but a limit of 8G of RAM, while the LicheePi has up to 16G of RAM and two Gigabit Ethernet ports but only a single HDMI port. It seems that the RaspberryPi4 might win if you want a cheap low power desktop system. At this time I think the reason for this device is testing out RISC-V as an alternative to the AMD64 and ARM64 architectures. An open CPU architecture goes well with free software, but it isn't just people who are into FOSS who are testing such things. I know some corporations are trying out RISC-V as a way of getting other options for embedded systems that don't involve paying monopolists. The Lichee Console 4A is probably a usable tiny laptop if the resolution is sufficient for your needs. As an aside, I predict that the tiny laptop or pocket computer segment will take off in the near future. There are some AMD64 systems the size of a phone but thicker that run Windows and go for reasonable prices on AliExpress. Hopefully in the near future this device will have better video drivers and be usable as a small and quiet workstation. I won't rule out the possibility of making this my main workstation in the not too distant future; all it needs is reliable 4K display and the ability to decode 4K video. Its performance for web browsing and as an ssh client seems adequate, and that's what matters for my workstation use. But for the moment it's just for server use.
Publisher: | Scholastic Press |
Copyright: | June 2023 |
ISBN: | 1-338-29064-9 |
Format: | Kindle |
Pages: | 446 |
linux-image-armmp-lpae
kernel per the ARMMP page)
There is another wrinkle: Debian doesn't support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration, as arm64, like x86_64, is multiarch (that is, the CPU can natively execute both the 32-bit and 64-bit instructions).
(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)
df -h /boot
to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I've done this, but it's outside the scope of this article.)
You need to have stable power. Once you begin this process, your pi will mostly be left in a non-bootable state until you finish. (You did make a backup, right?)
Raspberry Pi OS gives you a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven't changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian's sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:
pi ALL=(ALL) NOPASSWD: ALL
If you want to change the pi user's password, use the passwd command as root to do so.
hcitool dev > /root/bluetooth-from-raspbian.txt
I don't use Bluetooth, but this should let you develop a script to bring it up properly.
wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb
Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.
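If you prefer to script the comparison, a small sketch using Python's hashlib works too; the expected value below is a placeholder, so paste the SHA256 shown on the Debian package page:

import hashlib

# Placeholder: paste the SHA256 value shown on the Debian package page here.
expected = "0000000000000000000000000000000000000000000000000000000000000000"

with open("debian-archive-keyring_2023.3+deb12u1_all.deb", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("OK" if actual == expected else f"MISMATCH: got {actual}")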
Now, you'll install it with:
dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb
Edit /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free
dpkg --add-architecture arm64
apt-get update
The firmware partition is mounted at /boot by Raspberry Pi OS, but Debian's scripts assume it will be at /boot/firmware. We need to fix this. First:
umount /boot
mkdir /boot/firmware
Now, edit /etc/fstab and change the reference to /boot to be to /boot/firmware. Now:
mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..
We will deal with /boot/firmware later.
apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware
If you get an error relating to firmware-brcm80211, re-run the install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that's one of them.
Now, we need to configure /etc/default/raspi-firmware before proceeding. Edit that file.
First, uncomment (or add) a line like this:
KERNEL_ARCH="arm64"
In /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:
root=PARTUUID=...
Note the PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:
ROOTPART=PARTUUID=abcdef00
This is necessary because the SD card device name will change from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian's kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian's kernel. The PARTUUID approach lets it work regardless of the device name.
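If you want to script this step, here is a small sketch that pulls the PARTUUID out of the old Raspbian command line; the path and regex are my own assumptions, so adapt as needed:

import re

# Read the old Raspbian kernel command line (moved to /boot earlier in this procedure).
with open("/boot/cmdline.txt") as f:
    cmdline = f.read()

match = re.search(r"root=PARTUUID=(\S+)", cmdline)
if match:
    print(f"ROOTPART=PARTUUID={match.group(1)}")
else:
    print("No PARTUUID found in the kernel command line")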
dpkg --purge raspberrypi-kernel
apt-get -u upgrade
apt full-upgrade
apt-get --purge autoremove
apt list '~o'
apt purge '~o'
apt-get --purge remove bluez
apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove
apt-get install firmware-linux
That will likely complain about firmware-atheros, firmware-libertas, and firmware-realtek.
Here's how to resolve it, with firmware-realtek as an example:
Go to https://packages.debian.org/PACKAGENAME, for instance, https://packages.debian.org/firmware-realtek. Note the version number in bookworm; in this case, 20230210-5.
apt-get install firmware-realtek=20230210-5
Then re-run apt-get install firmware-linux and make sure it runs cleanly.
apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux
apt list '?narrow(?installed, ?not(?origin(Debian)))'
You can add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.
If you aren't going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don't fix it up as described above.
Now, set up networking with files in /etc/network/interfaces.d. First, eth0 should look like this:
allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto
And wlan0 should look like this:
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
Check your interface names with ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to the one named after the enx interface, or the wlan0 file to the one named after the wlp interface, and edit the internal references to eth0/wlan0 in this new file to use the long interface name.
If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:
network={
   ssid="NetworkName"
   psk="passwordHere"
}
Debian's ifupdown uses isc-dhcp-client rather than dhcpcd. Verify the system is in the correct state:
apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus
apt-get install sysfsutils
Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:
class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1
To make sure the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.
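Before rebooting, you can mechanise those checks with a small sketch like this (paths per the steps above):

# Quick pre-reboot sanity checks for the firmware configuration described above.
required_config = {"arm_64bit=1", "upstream_kernel=1"}

with open("/boot/firmware/config.txt") as f:
    config_lines = {line.strip() for line in f}
missing = required_config - config_lines
print("config.txt OK" if not missing else f"config.txt missing: {missing}")

with open("/boot/firmware/cmdline.txt") as f:
    cmdline = f.read()
print("cmdline.txt OK" if "root=PARTUUID=" in cmdline else "cmdline.txt does not use PARTUUID")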
reboot
If it fails to boot, you can edit files on the /boot/firmware FAT partition from another machine to fix things up. Otherwise, you've at least got the kernel going and can troubleshoot like usual from there.
Series: | Sherlock Holmes #1 |
Publisher: | AmazonClassics |
Copyright: | 1887 |
Printing: | February 2018 |
ISBN: | 1-5039-5525-7 |
Format: | Kindle |
Pages: | 159 |
My surprise reached a climax, however, when I found incidentally that he was ignorant of the Copernican Theory and of the composition of the Solar System. That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to be to me such an extraordinary fact that I could hardly realize it. "You appear to be astonished," he said, smiling at my expression of surprise. "Now that I do know it I shall do my best to forget it." "To forget it!" "You see," he explained, "I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you chose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones."

This is directly contrary to my expectation that the best way to make leaps of deduction is to know something about a huge range of topics so that one can draw unexpected connections, particularly given the puzzle-box construction and odd details so beloved in classic mysteries. I'm now curious if Doyle stuck with this conception, and if there were any later mysteries that involved astronomy.

Speaking of classic mysteries, A Study in Scarlet isn't quite one, although one can see the shape of the genre to come. Doyle does not "play fair" by the rules that have not yet been invented. Holmes at most points knows considerably more than the reader, including bits of evidence that are not described until Holmes describes them and research that Holmes does off-camera and only reveals when he wants to be dramatic. This is not the sort of story where the reader is encouraged to try to figure out the mystery before the detective. Rather, what Doyle seems to be aiming for, and what Watson attempts (unsuccessfully) as the reader surrogate, is slightly different: once Holmes makes one of his grand assertions, the reader is encouraged to guess what Holmes might have done to arrive at that conclusion. Doyle seems to want the reader to guess technique rather than outcome, while providing only vague clues in general descriptions of Holmes's behavior at a crime scene.

The structure of this story is quite odd. The first part is roughly what you would expect: first-person narration from Watson, supposedly taken from his journals but not at all in the style of a journal and explicitly written for an audience. Part one concludes with Holmes capturing and dramatically announcing the name of the killer, who the reader has never heard of before. Part two then opens with... a western?
In the central portion of the great North American Continent there lies an arid and repulsive desert, which for many a long year served as a barrier against the advance of civilization. From the Sierra Nevada to Nebraska, and from the Yellowstone River in the north to the Colorado upon the south, is a region of desolation and silence. Nor is Nature always in one mood throughout the grim district. It comprises snow-capped and lofty mountains, and dark and gloomy valleys. There are swift-flowing rivers which dash through jagged cañons; and there are enormous plains, which in winter are white with snow, and in summer are grey with the saline alkali dust. They all preserve, however, the common characteristics of barrenness, inhospitality, and misery.

First, I have issues with the geography. That region contains some of the most beautiful areas on earth, and while a lot of that region is arid, describing it primarily as a repulsive desert is a bit much. Doyle's boundaries and distances are also confusing: the Yellowstone is a northeast-flowing river with its source in Wyoming, so the area between it and the Colorado does not extend to the Sierra Nevadas (or even to Utah), and it's not entirely clear to me that he realizes Nevada exists. This is probably what it's like for people who live anywhere else in the world when US authors write about their country.

But second, there's no Holmes, no Watson, and not even the pretense of a transition from the detective novel that we were just reading. Doyle just launches into a random western with an omniscient narrator. It features a lean, grizzled man and an adorable child that he adopts and raises into a beautiful free spirit, who then falls in love with a wild gold-rush adventurer. This was written about 15 years before the first critically recognized western novel, so I can't blame Doyle for all the cliches here, but to a modern reader all of these characters are straight from central casting.

Well, except for the villains, who are the Mormons. By that, I don't mean that the villains are Mormon. I mean Brigham Young is the on-page villain, plotting against the hero to force his adopted daughter into a Mormon harem (to use the word that Doyle uses repeatedly) and ruling Salt Lake City with an iron hand, border guards with passwords (?!), and secret police. This part of the book was wild. I was laughing out-loud at the sheer malevolent absurdity of the thirty-day countdown to marriage, which I doubt was the intended effect. We do eventually learn that this is the backstory of the murder, but we don't return to Watson and Holmes for multiple chapters.

Which leads me to the other thing that surprised me: Doyle lays out this backstory, but then never has his characters comment directly on the morality of it, only the spectacle. Holmes cares only for the intellectual challenge (and for who gets credit), and Doyle sets things up so that the reader need not concern themselves with aftermath, punishment, or anything of that sort. I probably shouldn't have been surprised (this does fit with the Holmes stereotype), but I'm used to modern fiction where there is usually at least some effort to pass judgment on the events of the story. Doyle draws very clear villains, but is utterly silent on whether the murder is justified. Given its status in the history of literature, I'm not sorry to have read this book, but I didn't particularly enjoy it.
It is very much of its time: everyone's moral character is linked directly to their physical appearance, and Doyle uses the occasional racial stereotype without a second thought. Prevailing writing styles have changed, so the prose feels long-winded and breathless. The rivalry between Holmes and the police detectives is tedious and annoying. I also find it hard to read novels from before the general absorption of techniques of emotional realism and interiority into all genres. The characters in A Study in Scarlet felt more like cartoon characters than fully-realized human beings. I have no strong opinion about the objective merits of this book in the context of its time other than to note that the sudden inserted western felt very weird. My understanding is that this is not considered one of the better Holmes stories, and Holmes gets some deeper characterization later on. Maybe I'll try another of Doyle's works someday, but for now my curiosity has been sated. Followed by The Sign of the Four. Rating: 4 out of 10
Publisher: | Princeton University Press |
Copyright: | 2006, 2008 |
Printing: | 2008 |
ISBN: | 0-691-13640-8 |
Format: | Trade paperback |
Pages: | 278 |
Indeed, while we have proven that there is a strong and significant correlation between income and participation in a free/libre software project, it is not possible for us to pronounce ourselves about the causality of this link.

In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.

Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps finding a well paid job, or if having a well paid job is the cause enabling work on free/libre software. I would like to scratch this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.
It is unclear whether working on free/libre software ultimately helps finding a well paid job, or if having a well paid job is the cause enabling work on free/libre software.

Maybe we need to imagine this cause-effect relationship over time: as a student, without children and lots of free time, hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, earn recognition, and later find a well-paid job and make unpaid F/LOSS contributions into a hobby, cementing their status in the community, while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices which are not exactly inciting them to look behind the surface, take apart, learn, appropriate technology.) In any case, the above scenario does not allow for people who join F/LOSS later in life, e.g. changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working class backgrounds, people from outside of Germany, France, USA, UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.