Work report

Kubuntu: Bug bashing! I am triaging all the bugs for Plasma, which can be seen here: https://bugs.launchpad.net/plasma-5.27/+bug/2053125 I am happy to report that many of the remaining bugs have been fixed in the latest bug-fix release, 5.27.11. I prepared https://kde.org/announcements/plasma/5/5.27.11/ and Rik uploaded it to the archive, thank you. Unfortunately, this and several other key fixes are stuck due to the time_t64 transition, which you can read about here: https://wiki.debian.org/ReleaseGoals/64bit-time . It is the biggest transition in Debian/Ubuntu history, and it couldn't have come at a worse time. We are aware our ISO installer is currently broken; calamares is one of the things stuck in this transition. There is a workaround in the comments of the bug report: https://bugs.launchpad.net/ubuntu/+source/calamares/+bug/2054795 I also fixed an issue with plasma-welcome, and found the fix for emojis, which Aaron has kindly moved forward with the fontconfig maintainer. Thanks! I have received an https://kfocus.org/spec/spec-ir14.html laptop, and it is truly a great machine and now my daily driver. A big thank you to the Kfocus team! I can't wait to show it off at https://linuxfestnorthwest.org/.

KDE Snaps: You will see activity here ramp back up, as the KDEneon Core project is finally a go! I will participate in the project with part-time status, get everyone on the Enokia team up to speed with my snap knowledge, help prepare the qt6/kf6 transition, package Plasma, and, most importantly, focus on documentation for future contributors. I have created the (now split) qt6-with-KDE-patchset support and the KDE Frameworks 6 SDK and runtime snaps. I have made the kde-neon-6 extension, and the PR is in: https://github.com/canonical/snapcraft/pull/4698 . Future work on the extension will include multiple-version track support and core24 support. I have successfully created our first qt6/kf6 snap: ark. It will show up in the store once all the required bits have been merged and published. Thank you for stopping by. ~Scarlett
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is NOT trusted. The certificate issuer is unknown.
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
$ wget http://www.cacert.org/certs/root_X0F.crt
$ gnutls-cli --x509cafile root_X0F.crt wiki.cacert.org:443
...
- Status: The certificate is trusted.
- Description: (TLS1.2-X.509)-(ECDHE-SECP256R1)-(RSA-SHA256)-(AES-256-GCM)
- Session ID: 37:56:7A:89:EA:5F:13:E8:67:E4:07:94:4B:52:23:63:1E:54:31:69:5D:70:17:3C:D0:A4:80:B0:3A:E5:22:B3
- Options: safe renegotiation,
- Handshake was completed
...
/etc/ssl/certs/ca-certificates.crt
$ sudo cp root_X0F.crt /usr/local/share/ca-certificates/cacert-org-root-ca.crt
$ sudo update-ca-certificates --verbose
...
Adding debian:cacert-org-root-ca.pem
...
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is trusted.
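The system-wide bundle at /etc/ssl/certs/ca-certificates.crt that update-ca-certificates maintains is simply a concatenation of PEM certificate blocks. As an illustrative sketch of that format (my addition, not part of the original how-to; the sample data is a stand-in, not a real certificate):

```python
# Sketch: parse a PEM bundle the way tools conceptually treat
# /etc/ssl/certs/ca-certificates.crt -- as concatenated PEM blocks.

def pem_blocks(bundle_text):
    """Return the list of PEM certificate blocks found in a bundle."""
    blocks, current, inside = [], [], False
    for line in bundle_text.splitlines():
        if line.strip() == "-----BEGIN CERTIFICATE-----":
            inside, current = True, [line]
        elif line.strip() == "-----END CERTIFICATE-----" and inside:
            current.append(line)
            blocks.append("\n".join(current))
            inside = False
        elif inside:
            current.append(line)
    return blocks

sample_bundle = """-----BEGIN CERTIFICATE-----
QUFB
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
QkJC
-----END CERTIFICATE-----
"""

print(len(pem_blocks(sample_bundle)))  # 2
```

This is why appending one more root certificate via update-ca-certificates is enough for GnuTLS-based tools to start trusting it.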
Firefox instead wants a "security device", in fact an extra library wrapping the Debian trust store. The library will wrap the Debian trust store in the PKCS#11 industry format that Firefox supports.
$ sudo apt install p11-kit p11-kit-modules
$ trust list | grep --context 2 'CA Cert'
pkcs11:id=%16%B5%32%1B%D4%C7%F3%E0%E6%8E%F3%BD%D2%B0%3A%EE%B2%39%18%D1;type=cert
type: certificate
label: CA Cert Signing Authority
trust: anchor
category: authority
/usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
$ dpkg --listfiles p11-kit-modules | grep trust
/usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
In Firefox, load /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so as a module filename. After adding the module, you should see it in the list of Security Devices, having /etc/ssl/certs/ca-certificates.crt as a description.
Where a bug belongs upstream, I mark it forwarded; each bug perhaps taking 5-10 minutes. Often I'll stumble across something that has already been fixed but not recorded as such as I go.
Despite this minimal level of work, I'm quite satisfied with the cumulative
progress. It's notable to me how much my perspective has shifted by becoming a
maintainer: I'm considering everything through a different lens to that of being
just one user.
Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.
Most of the effort has been spent on the Deb822-based files such as debian/control, which comes with diagnostics, quickfixes, spellchecking (but only for relevant fields!), and completion suggestions. Since not everyone has an LSP-capable editor, and because sometimes you just want diagnostics without having to open each file in an editor, there is also a batch version of the diagnostics via debputy lint. Please see debputy(1) for how debputy lint compares with lintian if you are curious about which tool to use at what time. To help you get started, there is now a debputy lsp editor-config command that can provide you with the relevant editor config glue. At the moment, emacs (via eglot) and vim with vim-youcompleteme are supported. For those that followed the previous blog posts on writing the language server, I would like to point out that the command line for running the language server has changed to debputy lsp server, and you no longer have to tell it which format it is. I have decided to make the language server a "polyglot" server for now, which I will hopefully not regret... Time will tell. :) Anyhow, to get started, you will want:
- debian/control
- debian/copyright (the machine readable variant)
- debian/changelog (mostly just spelling)
- debian/rules
- debian/debputy.manifest (syntax checks only; use debputy check-manifest for the full validation for now)
$ apt satisfy 'dh-debputy (>= 0.1.21~), python3-pygls'
# Optionally, for spellchecking
$ apt install python3-hunspell hunspell-en-us
# For emacs integration
$ apt install elpa-dpkg-dev-el markdown-mode-el
# For vim integration via vim-youcompleteme
$ apt install vim-youcompleteme
The installations feature of the manifest will be disabled in this integration mode to avoid feature interactions with debhelper tools that expect debian/<pkg> to contain the materialized package. On a related note, the debputy migrate-from-dh command now supports a --migration-target option, so you can choose the desired level of integration without doing code changes. The command will attempt to auto-detect the desired integration from existing package features such as a build-dependency on a relevant dh sequence, so you do not have to remember this new option every time once the migration has started. :)
- dh_fixperms
- dh_gencontrol
- dh_md5sums
- dh_builddeb
"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."

Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.
In addition to the problem of state, installing regular updates periodically requires a reboot, even if the rest of the process is automated through a tool like unattended-upgrades. For my personal homelab, I manage a handful of different machines running various services. I used to just schedule a day to update and reboot all of them, but that got very tedious very quickly. I then moved the reboot to a cronjob, and then recently to a systemd timer and service. I figure that laying out my path to better management of this might help others, and will almost certainly lead to someone telling me a better way to do this. UPDATE: Turns out there's another option for better systemd cron integration. See systemd-cron below.

You: uptime
Me: Every machine gets rebooted at 1AM to clear the slate for maintenance, and at 3:30AM to push through any pending updates. @SwiftOnSecurity, December 27, 2020
Ultimately, uptime only measures the duration since you last proved you can turn the machine on and have it boot. @SwiftOnSecurity, May 7, 2016
Adding a line like this to /var/spool/cron/crontabs/root is enough to get your machine to reboot once a month on the 6th at 8:00 AM:

0 8 6 * * reboot
I created a regular-reboot.timer with the following contents:
[Unit]
Description=Reboot on a Regular Basis
[Timer]
Unit=regular-reboot.service
OnBootSec=1month
[Install]
WantedBy=timers.target
This timer will trigger the regular-reboot.service systemd unit when the system reaches one month of uptime.
I've seen some guides to creating timer units recommend adding a Wants=regular-reboot.service to the [Unit] section, but this has the consequence of running that service every time it starts the timer. In this case that will just reboot your system on startup, which is not what you want.
Care needs to be taken to use the OnBootSec directive instead of OnCalendar or any of the other time specifications, as your system could reboot, discover it's still within the expected window, and reboot again. With OnBootSec your system will not have that problem. Technically, this same problem could have occurred with the cronjob approach, but in practice it never did, as the systems took long enough to come back up that they were no longer within the expected window for the job.
I then added the regular-reboot.service:
[Unit]
Description=Reboot on a Regular Basis
Wants=regular-reboot.timer
[Service]
Type=oneshot
ExecStart=shutdown -r 02:45
Note that the service schedules the reboot for 02:45 rather than rebooting immediately when the timer fires via OnBootSec.
This way different systems have different reboot times so that everything
doesn t just reboot and fail all at once. Were something to fail to come
back up I would have some time to fix it, as each machine has a few hours
between scheduled reboots.
Once you have both files in place, you'll simply need to reload configuration and then enable and start the timer unit:
systemctl daemon-reload
systemctl enable --now regular-reboot.timer
# systemctl status regular-reboot.timer
regular-reboot.timer - Reboot on a Regular Basis
Loaded: loaded (/etc/systemd/system/regular-reboot.timer; enabled; preset: enabled)
Active: active (waiting) since Wed 2024-03-13 01:54:52 EDT; 1 week 4 days ago
Trigger: Fri 2024-04-12 12:24:42 EDT; 2 weeks 4 days left
Triggers: regular-reboot.service
Mar 13 01:54:52 dorfl systemd[1]: Started regular-reboot.timer - Reboot on a Regular Basis.
A nice bonus of systemd units is that their state can be observed via prometheus-node-exporter. There are plenty of ways to hack cron support into the node exporter, but just moving to systemd units provides support for tracking failures and for logging, both of which make system administration much easier when things inevitably go wrong.
systemd-cron

An alternative to converting everything by hand, if you happen to have a lot of cronjobs, is systemd-cron. It will turn each crontab and /etc/cron.* directory into automatic service and timer units. Thanks to Alexandre Detiste for letting me know about this project. I have few enough cron jobs that I've already converted, but for anyone looking at a large number of jobs to convert, you'll want to check it out!
prometheus-alertmanager rules:

- alert: UptimeTooHigh
  expr: (time() - node_boot_time_seconds{job="node"}) / 86400 > 35
  annotations:
    summary: "Instance Has Been Up Too Long!"
    description: "Instance Has Been Up Too Long!"
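The alert expression divides seconds of uptime by 86400 to convert to days. A quick sketch of the same arithmetic (my addition, not from the post):

```python
# Sketch: the same arithmetic as the PromQL expression
# (time() - node_boot_time_seconds{job="node"}) / 86400 > 35

SECONDS_PER_DAY = 86400

def uptime_too_high(now_s, boot_time_s, threshold_days=35):
    """True when the instance has been up longer than threshold_days."""
    return (now_s - boot_time_s) / SECONDS_PER_DAY > threshold_days

# A machine booted 36 days ago should alert; one booted 10 days ago should not.
print(uptime_too_high(36 * SECONDS_PER_DAY, 0))  # True
print(uptime_too_high(10 * SECONDS_PER_DAY, 0))  # False
```

The 35-day threshold gives a monthly reboot schedule a few days of slack before alerting.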
Add an entry to /etc/crontab or drop a script into /etc/cron.monthly, depending on your system.
You can apt install rustc cargo. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or just use rustup.
If you just apt install rustc cargo, you will end up using Debian's compiler but upstream libraries, directly and uncurated from crates.io. This is not what you want. There are about two reasonable things to do, depending on your preferences.
Q. Download and run whatever code from the internet?
The key question is this: Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it? That's what cargo does. It's one of the main things it's for. Debian's cargo behaves, in this respect, just like upstream's. Let me say that again:
Debian's cargo promiscuously downloads code from crates.io just like upstream cargo.
So if you use Debian's cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you're avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.
Debian's cargo can even download from crates.io when you're building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (Happily, if you do this, it's quite likely to bail out early due to version mismatches, before actually downloading anything.)
Option 1: WTF, no I don't want curl|bash
OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.
But any upstream Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn't make it easy.)
To go with this plan, apt install rustc cargo and put this in your configuration, in $HOME/.cargo/config.toml:
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
This causes cargo to look in /usr/share for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev packages for each of your dependencies, with apt. This will allow you to write your own program in Rust, and build it using cargo build.
Option 2: Biting the curl|bash bullet
If you want to build software that isn't specifically targeted at Debian's Rust, you will probably need to use packages from crates.io, not from Debian. If you're going to do that, there is little point not using rustup to get the latest compiler. rustup's install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden. So in this case: do run the curl|bash install rune.
Hopefully the Rust project you are trying to build has shipped a Cargo.lock; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked, cargo will only use those versions, which are hopefully OK. And you can run cargo audit to see if there are any reported vulnerabilities or problems. But you'll have to bootstrap this with cargo install --locked cargo-audit; cargo-audit is from the RUSTSEC folks, who do care about these kinds of things, so hopefully running their code (and their dependencies) is fine. Note the --locked, which is needed because cargo's default behaviour is wrong.
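The point of the lockfile is that each dependency is pinned to an exact version plus a hash of its archive, so a tampered or substituted download cannot slip in unnoticed. A rough sketch of that verification idea (my illustration; the lockfile structure and helper here are simplified stand-ins, not cargo's actual code):

```python
import hashlib

# Hypothetical, simplified view of what a lockfile pins for one dependency:
# an exact version and a sha256 checksum of the crate archive.
lockfile = {
    ("demo-crate", "1.2.3"): hashlib.sha256(b"crate archive bytes").hexdigest(),
}

def verify_download(name, version, archive_bytes):
    """Accept a downloaded archive only if it matches the pinned checksum."""
    pinned = lockfile.get((name, version))
    if pinned is None:
        raise KeyError(f"{name} {version} is not in the lockfile")
    return hashlib.sha256(archive_bytes).hexdigest() == pinned

print(verify_download("demo-crate", "1.2.3", b"crate archive bytes"))  # True
print(verify_download("demo-crate", "1.2.3", b"tampered bytes"))       # False
```

With --locked, the build fails rather than silently resolving to different versions than the ones pinned.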
Privilege separation
This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.
That tool is nailing-cargo. It's not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.
OMG what a mess
Indeed. There are a large number of technical and social factors at play.
cargo itself is deeply troubling, both in principle and in detail. I often find myself severely disappointed with its maintainers' decisions. In mitigation, much of the wider Rust upstream community does take this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results.
Debian's technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.
Sadly, last time I explored the possibility, the Debian Rust Team didn't have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo's approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.
edited 2024-03-21 21:49 to add a cut tag

Ubuntu is curated, so it probably wouldn't get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn't cause this amount of angst. If you did this in a PPA, then I can't think of any particular negative effects.

OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel.

There's a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they're very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren't obviously better at helping people make reliable social judgements about code they don't know.)

For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you'd need to be an Ubuntu developer with upload rights (or to go via Debian, where you'd have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about. On the other hand, if you were inclined to try this sort of experiment, you'd almost certainly try it in a PPA, and that would trouble nobody but yourself.
--signoff option.
I do make some small modifications to AI generated submissions.
For example, maybe you used AI to write this code:
+ // Fast inverse square root
+ float fast_rsqrt( float number )
+ {
+     float x2 = number * 0.5F;
+     float y = number;
+     long i = * ( long * ) &y;
+     i = 0x5f3659df - ( i >> 1 );
+     y = * ( float * ) &i;
+     return (y * ( 1.5F - ( x2 * y * y ) ));
+ }
...
- foo = rsqrt(bar)
+ foo = fast_rsqrt(bar)
Before AI, only a genius like John Carmack could write anything close to
this, and now you've generated it with some simple prompts to an AI.
So of course I will accept your patch. But as part of my QA process,
I might modify it so the new code is not run all the time. Let's only run
it on leap days to start with. As we know, leap day is February 30th, so I'll
modify your patch like this:
- foo = rsqrt(bar)
+ time_t s = time(NULL);
+ if (localtime(&s)->tm_mday == 30 && localtime(&s)->tm_mon == 2)
+ foo = fast_rsqrt(bar);
+ else
+ foo = rsqrt(bar);
Despite my minor modifications, you did the work (with AI!) and so
you deserve the credit, so I'll keep you listed as the author.
Congrats, you made the world better!
PS: Of course, the other reason I don't review AI generated code is that I
simply don't have time and have to prioritize reviewing code written by
fallible humans. Unfortunately, this does mean that if you submit AI
generated code that is not clearly marked as such, and use my limited
reviewing time, I won't have time to review other submissions from you
in the future. I will still accept all your botshit submissions though!
PPS: Ignore the haters who claim that botshit makes AIs that get trained
on it less effective. Studies like this one
just aren't believable. I asked Bing to summarize it and it said not to worry
about it!
remote: fatal: pack exceeds maximum allowed size (4.88 GiB)
However, breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands to create this repository:
git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/; do
echo $d;
time git add $d;
git commit -m"Add $d." -a
git push
done
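The loop above works around the pack-size limit by committing and pushing one pool/ subdirectory at a time. The same idea can be sketched generically (my illustration, not from the post): group files into batches whose combined size stays under a budget, and commit and push after each batch.

```python
# Sketch: batch files so each commit/push stays under a pack-size budget,
# the same idea as committing pool/ one subdirectory at a time.

def batch_by_size(files, budget):
    """files: list of (path, size) pairs. Returns batches whose sizes sum
    to at most budget; a single file larger than the budget gets its own batch."""
    batches, current, current_size = [], [], 0
    for path, size in files:
        if current and current_size + size > budget:
            batches.append(current)
            current, current_size = [], 0
        current.append(path)
        current_size += size
    if current:
        batches.append(current)
    return batches

files = [("pool/a.deb", 3), ("pool/b.deb", 2), ("pool/c.deb", 4), ("pool/d.deb", 1)]
print(batch_by_size(files, budget=5))
# [['pool/a.deb', 'pool/b.deb'], ['pool/c.deb', 'pool/d.deb']]
```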
The resulting repository size is around 27MB with Git LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases for all architectures may lead to a too large git repository (>1GB). So maybe one repository per release? These repositories could also be split up on a subset of pool/ files, or there could be one repository per release per architecture or sources.
Finally, I have concerns about using SHA1 for identifying objects. It seems both Git and Debian's snapshot service are currently using SHA1. For Git there is a SHA-256 transition underway, and it seems GitLab is working on support for SHA256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA256 identifiers directly. Git-LFS already uses SHA256, but Git internally uses SHA1, as does the Debian snapshot service.
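For reference, Git's object ids hash the object payload prefixed with a type/length header, and the SHA-256 object format applies the same construction with a different hash function. A small sketch (my addition, not from the post) of both:

```python
import hashlib

def git_object_id(data, algo="sha1"):
    """Compute a Git-style blob object id: hash of 'blob <len>\\0' + data."""
    header = b"blob %d\x00" % len(data)
    return hashlib.new(algo, header + data).hexdigest()

content = b"hello\n"
print(git_object_id(content, "sha1"))    # 40 hex chars, as in today's repositories
print(git_object_id(content, "sha256"))  # 64 hex chars, as in SHA-256 repositories
```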
What do you think? Happy Hacking!
santiago, debacle, eamanu, dererk, gwolf (@debian.org). My main contact to kickstart organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university.
We had the following set of talks (for which there is a promise to get an electronic record, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):
Hour | Title (Spanish) | Title (English) | Presented by |
---|---|---|---|
10:00-10:25 | Introducción al Software Libre | Introduction to Free Software | Martín Bayo |
10:30-10:55 | Debian y su comunidad | Debian and its community | Emanuel Arias |
11:00-11:25 | ¿Por qué sigo contribuyendo a Debian después de 20 años? | Why am I still contributing to Debian after 20 years? | Santiago Ruano |
11:30-11:55 | Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? | My identity and the Debian project: What is the OpenPGP keyring and why? | Gunnar Wolf |
12:00-13:00 | Explorando las masculinidades en el contexto del Software Libre | Exploring masculinities in the context of Free Software | Gora Ortiz Fuentes - José Francisco Ferro |
13:00-14:30 | Lunch | | |
14:30-14:55 | Debian para el día a día | Debian for our every day | Leonardo Martínez |
15:00-15:25 | Debian en las Raspberry Pi | Debian on the Raspberry Pi | Gunnar Wolf |
15:30-15:55 | Device Trees | Device Trees | Lisandro Damián Nicanor Perez Meyer (videoconference) |
16:00-16:25 | Python en Debian | Python in Debian | Emmanuel Arias |
16:30-16:55 | Debian y XMPP en la medición de viento para la energía eólica | Debian and XMPP for wind measurement for wind energy | Martin Borgert |
{
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
        inherit system;
        modules = [
          microvm.nixosModules.microvm
          impermanence.nixosModules.impermanence
          ({ pkgs, ... }: {
            environment.persistence.${persistencePath} = {
              hideMounts = true;
              users.${user} = {
                directories = [
                  "git" ".stack"
                ];
              };
            };
            environment.sessionVariables = {
              TERM = "screen-256color";
            };
            environment.systemPackages = with pkgs; [
              ghc
              git
              (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
              htop
              stack
              tmux
              tree
              vcsh
              zsh
            ];
            fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
            microvm = {
              forwardPorts = [
                { from = "host"; host.port = 2222; guest.port = 22; }
                { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
              ];
              hypervisor = "qemu";
              interfaces = [
                { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
              ];
              mem = 4096;
              shares = [
                {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                }
                {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                }
              ];
              socket = "/run/user/1000/microvm-control.socket";
              vcpu = 3;
              volumes = [];
              writableStoreOverlay = "/nix/.rwstore";
            };
            networking.hostName = vmname;
            nix.enable = true;
            nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
            nix.settings = {
              extra-experimental-features = ["nix-command" "flakes"];
              trusted-users = [user];
            };
            security.sudo = {
              enable = true;
              wheelNeedsPassword = false;
            };
            services.getty.autologinUser = user;
            services.openssh = {
              enable = true;
            };
            system.stateVersion = "24.11";
            systemd.services.loadnixdb = {
              description = "import hosts nix database";
              path = [pkgs.nix];
              wantedBy = ["multi-user.target"];
              requires = ["nix-daemon.service"];
              script = "cat ${persistencePath}/nix-store-db-dump | nix-store --load-db";
            };
            time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
            users.users.${user} = {
              extraGroups = [ "wheel" "video" ];
              group = "user";
              isNormalUser = true;
              openssh.authorizedKeys.keys = [
                "ssh-rsa REDACTED"
              ];
              password = "";
            };
            users.users.root.password = "";
            users.groups.user = { };
          })
        ];
      };
    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}
I start the microVM with a templated systemd user service:
[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
The above service definition creates a dump of the host's nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlayed nix store that would be preferable to this hack.

Finally, the microvm is started inside a tmux session named "microvm". This way I can use the VM with SSH or through the console, and also access the qemu console.
And for completeness the virtiofsd service:
[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
--socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
--shared-dir=%h/.local/share/microvm/vms/%i/persistent \
--gid-map :995:%G:1: \
--uid-map :1000:%U:1:
A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.

The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. However, I assume that it relies heavily on big data. A distributed search engine can instead rely on user input. Every admin of one node seeds the node ranking with their personal selection of trusted sites. They connect their node with nodes of people they trust. This results in a web of (transitive) trust, much like PGP. Imagine you are searching for something in a world without computers: you ask the people around you, and probably they forward your question to their peers.

I already had a look at YaCy. It is active, somewhat usable, and has a friendly maintainer. Unfortunately, I consider the codebase to not be worth the effort. Nevertheless, YaCy is a good example that decentralized search software can be built even by a small team or just one person. I myself started working on a program in Haskell and keep my notes here: Populus:DezInV. Since I'm learning Haskell along the way, there is nothing there to see yet. Additionally, I took a yak-shaving break to learn nix.

By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don't consider this a solution.
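The web-of-trust ranking idea can be made concrete: a node trusts its own seed sites fully and discounts sites recommended by peers, peers-of-peers, and so on. A toy model of that transitive trust (my illustration, not from the post; the damping factor and depth are arbitrary choices):

```python
# Toy sketch of transitive trust: a node trusts its own seed sites fully
# and discounts recommendations as they travel further through the network.

def transitive_trust(peers, seeds, start, damping=0.5, depth=2):
    """Collect site scores reachable from `start` within `depth` hops.
    peers: node -> list of trusted peer nodes
    seeds: node -> set of sites that node vouches for
    """
    scores = {}
    frontier = [(start, 1.0)]
    for _ in range(depth + 1):
        next_frontier = []
        for node, weight in frontier:
            for site in seeds.get(node, ()):
                scores[site] = max(scores.get(site, 0.0), weight)
            for peer in peers.get(node, ()):
                next_frontier.append((peer, weight * damping))
        frontier = next_frontier
    return scores

peers = {"me": ["alice"], "alice": ["bob"]}
seeds = {"me": {"debian.org"}, "alice": {"haskell.org"}, "bob": {"nixos.org"}}
print(transitive_trust(peers, seeds, "me"))
# {'debian.org': 1.0, 'haskell.org': 0.5, 'nixos.org': 0.25}
```

A real system would have to handle cycles, weighting disagreements between peers, and spam nodes, but the core ranking signal is this simple.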