Publisher: | Angry Robot |
Copyright: | 2022 |
ISBN: | 0-85766-967-2 |
Format: | Kindle |
Pages: | 458 |
xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile, though, we cannot port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.
Couldn't get a file descriptor referring to the console.
bash is an absolute mess. zsh, in contrast, is much cleaner and allows for a clear distinction between these two cases, with login shell configuration files as a superset of the configuration files used by non-login shells.
profile file:
.mkshrc -> config/profile
.profile -> config/profile
zsh for a while, and took this opportunity to do so. So my new configuration files are .zprofile and .zshrc instead of .mkshrc and .profile (though I'm going to retain those symlinks to allow my old configurations to continue working).
mksh is a nice simple shell, but using zsh here allows for more consistency between my home and $WORK environments, and will allow a lot more powerful extensions.
PROMPT variable, so I set that to the needed values for zsh.
There s an excellent ZSH prompt generator at
https://zsh-prompt-generator.site
that I used to get these variables, though I m sure they re in the zsh
documentation somewhere as well.
I wanted a simple prompt with user (%n
), host (%m
), and path (%d
).
I also wanted a %
at the end to distinguish this from other shells.
PROMPT="%n@%m%d%% "
mksh prompts
This worked, but surprisingly mksh also looks at PROMPT, leaving my mksh prompt as the literal prompt string without expansion. Fixing this requires setting up a proper shrc and linking it to .mkshrc and .zshrc.
I chose to move my existing aliases script to this file, as it also broke in non-login shells when moved to profile. Within this new shrc file we can check what shell we're running via $0:
if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]
zsh here in case I run it manually for whatever reason. I also added -zsh to support tmux, as that's what it presents as $0. This also means you'll need to be careful to quote $0 or you'll get fun shell errors. There's probably a better way to do this, but I couldn't find something that was compatible with POSIX shell, which is what most of this has to be written in to be compatible with mksh and zsh.
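One possible improvement (a sketch of my own, not part of the original setup; the helper name is made up) is to normalise $0 before comparing: strip a leading dash (added for login shells) and any directory part, so one comparison covers /bin/zsh, zsh, and -zsh. This uses only POSIX parameter expansion, so it works in both mksh and zsh:

```shell
# Hypothetical helper: reduce $0 to a bare shell name.
shell_name() {
    name=${1#-}        # drop the leading dash used by login shells
    name=${name##*/}   # drop any directory part
    printf '%s\n' "$name"
}

if [ "$(shell_name "$0")" = "zsh" ]; then
    : # zsh-specific setup would go here
fi
```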
We can then set up different prompts for each:
if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]
then
PROMPT="%n@%m%d%% "
else
# Borrowed from
# http://www.unixmantra.com/2013/05/setting-custom-prompt-in-ksh.html
PS1='$(id -un)@$(hostname -s)$PWD$ '
fi
setfont in my .profile for a while. I'm not sure where I picked this up, but it's not the right way. I even tried to only run this in a console with -t, but that only checks that output is a terminal, not specifically a console.
if [ -t 1 ]
then
setfont /usr/share/consolefonts/Lat15-Terminus20x10.psf.gz
fi
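A stricter check (again a sketch of my own, not from the original post; it assumes Linux virtual consoles are named /dev/ttyN) would look at the terminal's device name rather than just whether stdout is a terminal:

```shell
# Hypothetical helper: treat only /dev/ttyN (Linux virtual consoles) as
# "the console"; terminal emulators show up as /dev/pts/N instead.
is_console() {
    case "$1" in
        /dev/tty[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

# Only load a console font on an actual virtual console:
if is_console "$(tty)"; then
    setfont /usr/share/consolefonts/Lat15-Terminus20x10.psf.gz
fi
```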
console-setup like so:
dpkg-reconfigure console-setup
bindkey -v
bindkey "^[[1;5C" forward-word
bindkey "^[[1;5D" backward-word
zsh, so we can do better than just the same configuration as we had before. There's an excellent substring history search plugin that we can just source without a plugin manager:
source $HOME/config/zsh-history-substring-search.zsh
# Keys are weird, should be ^[[ but it's not
bindkey '^[[A' history-substring-search-up
bindkey '^[OA' history-substring-search-up
bindkey '^[[B' history-substring-search-down
bindkey '^[OB' history-substring-search-down
^[OA and ^[OB as up and down keys. It seems ^[[A and ^[[B are the defaults, so I've left them in, but I'm confused as to what differences would lead to this. If you know, please let me know and I'll add a footnote to this article explaining it.
Back to history search.
For this to work, we also need to set up history logging:
export SAVEHIST=1000000
export HISTFILE=$HOME/.zsh_history
export HISTFILESIZE=1000000
export HISTSIZE=1000000
tmux creates a login shell. Adding:
echo PROFILE
to profile, and:
echo SHRC
to shrc, confirms this with:
PROFILE
SHRC
SHRC
profile sources shrc, so that it running twice is expected.
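One way to make that double-sourcing harmless (a sketch of my own, not from the original configuration; the guard variable name is invented) is to set a marker the first time shrc runs:

```shell
# Hypothetical guard: profile sources shrc, and an interactive shell may
# source it again - run the setup work only once per shell.
run_shrc() {
    if [ -n "${SHRC_DONE:-}" ]; then
        return 0          # already initialised in this shell
    fi
    SHRC_DONE=1
    echo SHRC             # stands in for aliases, prompt setup, etc.
}

run_shrc    # first call does the work, prints SHRC
run_shrc    # second call is a no-op
```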
But after this exploration and diagram, it's clear we don't need that for zsh. Removing this will break remote bash shells (see above diagram), but I can live without those on my development laptop. Removing that line results in the expected output for a new terminal:
SHRC
PROFILE
SHRC
$WORK, but for now I'm going to simplify my personal configuration. If I end up using a lot of plugins I'll reconsider this.
Series: | Deep Witches #1 |
Publisher: | A Girl and Her Fed Books |
Copyright: | March 2021 |
ISBN: | blackwing-war |
Format: | Kindle |
Pages: | 284 |
Indeed, while we have proven that there is a strong and significant correlation between income and participation in a free/libre software project, it is not possible for us to pronounce ourselves on the causality of this link. In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.

Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps finding a well-paid job, or if having a well-paid job is the cause enabling work on free/libre software. I would like to scratch this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.
It is unclear whether working on free/libre software ultimately helps finding a well-paid job, or if having a well-paid job is the cause enabling work on free/libre software.

Maybe we need to imagine this cause-effect relationship over time: as a student, without children, with lots of free time and hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, earn recognition - and later find a well-paid job and make unpaid F/LOSS contributions into a hobby, cementing their status in the community, while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices, which are not exactly inciting them to look behind the surface, take things apart, learn, and appropriate technology.) In any case, the above scenario does not allow for people who join F/LOSS later in life, e.g. changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working-class backgrounds, or people from outside of Germany, France, the USA, the UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.
sahilister
There are various ways in which the installation could be done; in our setup, here are the prerequisites. The compose.yml file is present in Nextcloud AIO's git repo here. By taking a reference from that file, we have our own compose.yml here:
here. services:
nextcloud-aio-mastercontainer:
image: nextcloud/all-in-one:latest
init: true
restart: always
container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
- /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don&apost forget to also set &aposWATCHTOWER_DOCKER_SOCKET_PATH&apos!
ports:
- 8080:8080
environment: # Is needed when using any of the options below
# - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
- APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
- APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
# - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora&aposs Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
- NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud&aposs datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
# - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
# - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
# - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
# - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
# - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nexcloud container (Useful e.g. for LDAPS) See See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
# - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
# - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
# - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
# - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the &apos/dev/dri&apos device is present on the host! If it should not exist on your host, don&apost set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
# - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
# - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
# - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default &apos/var/run/docker.sock&apos. Otherwise mastercontainer updates will fail. For macos it needs to be &apos/var/run/docker.sock&apos
# networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# - SKIP_DOMAIN_VALIDATION=true
# # Uncomment the following line when using SELinux
# security_opt: ["label:disable"]
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, on the possibility of my using them in the future. If you want a smaller, cleaner compose file without the extra options, you can refer to:

services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=127.0.0.1
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
I am using a separate directory to store Nextcloud data. As per the Nextcloud documentation you should be using a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead. Also, we use a custom port on which Nextcloud listens for operations; we have set it to 32323 above, but you can use any port in the permissible range. The 8080 port is used to set up the AIO management interface. Neither 8080 nor the APACHE_PORT need to be open on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using:
$ docker-compose -f compose.yml up -d
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
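Since the permissible port range comes up above, here is a small sketch (the helper is hypothetical, not part of the AIO setup) for sanity-checking a custom APACHE_PORT value before writing it into compose.yml:

```shell
# Hypothetical check: accept only numeric, unprivileged TCP ports.
valid_port() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;   # empty or non-numeric
    esac
    [ "$1" -ge 1024 ] && [ "$1" -le 65535 ]
}

valid_port 32323 && echo "port ok"    # the APACHE_PORT used above
```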
Once your containers are running, we can do the nginx setup.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    #listen [::]:80; # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2; # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl; # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6
    # http2 on; # uncomment to enable HTTP/2 - supported on nginx v1.25.1+
    # http3 on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot

    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:

    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for OCSP stapling: e.g. use 94.140.15.15 for AdGuard / 1.1.1.1 for Cloudflare or 8.8.8.8 for Google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need valid SSL certificates for your domain for this configuration to work. Steps for getting valid SSL certificates for your domain are beyond the scope of this article. You can do a web search on getting SSL certificates with Let's Encrypt and you will find several resources on that, or I may write a blog post on it separately in the future. Once your configuration for nginx is done, you can test the nginx configuration using:
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with:
$ sudo nginx -s reload
domain.tld:8080, however we do not want to open the 8080 port publicly to do this, so to complete the setup, here is a neat hack from sahilister:
ssh -L 8080:127.0.0.1:8080 username@<server-ip>
You can bind the 8080 port of your server to port 8080 of your localhost using local port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. So, once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080.
You will get this error because you are trying to access a page on localhost over HTTPS. You can click on Advanced and then continue to proceed to the next page. Your data is encrypted over SSH for this session, as we are binding the port over SSH. Depending on your choice of browser, the above page might look different.

Once you have proceeded, the Nextcloud AIO interface will open and will look something like this. It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For the purposes of security, I have masked the passwords with capsicums. Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and then log in. After login you will be greeted with a screen like this. Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen like this. Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in. The timezone setup is also important because the database will get initialized according to the set time zone. A wrong setting could result in wrong initialization of the database and you ending up in a startup loop for Nextcloud. I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the timezone and selecting any additional features you want, you can click on "Download and start the containers".
It will take some time for this process to finish; take a break, look at the farthest object in your room, and take a sip of water. Once the process has finished you will see a page similar to the following one. Wait patiently for everything to turn green. Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain; the initial login details are auto-generated, as you can see from the above screenshot. Again you will see a password that you need to note down or save to enter the Nextcloud interface. Capsicums will not work as passwords; I have masked the auto-generated passwords using capsicums. Now you can click on the "Open your Nextcloud" button or go to your configured domain to access the login screen. You can use the login details from the previous step to log in to the administrator account of your Nextcloud instance. There you have it, your very own cloud!

docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master container.

docker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
docker volume rm <volume-name>
docker network rm nextcloud-aio
purple-discord)

Don't use: apt source, or dpkg-source. Instead, use dgit and work in git.
Also, don't use: VCS links on official Debian web pages, debcheckout, or Debian's (semi-)official GitLab, Salsa. These are suitable for Debian experts only; for most people they can be beartraps. Instead, use dgit.
Struggling with Debian source packages?
A friend of mine recently asked for help on IRC. They're an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build, install and run binary packages from it; and record that change for their future self and their colleagues. They ended up trying to comprehend quilt.
quilt is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian's ancient and obsolete tarballs-and-patches source package format (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages.
You don't want to deal with any of this nonsense. You don't want to learn quilt, and suffer its misbehaviours. You don't want to learn about Debian source packages and wrestle with dpkg-source.
Happily, you don t need to.
Just use dgit
One of dgit's main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you're running, conveniently and with a minimum of fuss.
You can use dgit to get the source code to a Debian package, as a git tree, with dgit clone (and dgit fetch). The git tree can be made into a binary package directly.
The only things you really need to know are:
dgit clone PACKAGE bookworm,-security (yes, with a comma).
debian/changelog to make your packages have a different version number.
dpkg-buildpackage -uc -b.
debian/rules clean can be inadequate, or crazy. Always commit before building, and use git clean and git reset --hard instead of running clean rules from the package.
dpkg-source manual!) Instead, to preserve and share your work, use the git branch.
dgit pull or dgit fetch can be used to get updates.
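On the version-number point above: a common convention (a sketch of my own; dgit does not mandate any particular suffix) is to append a local suffix so your rebuilt packages sort above the archive version. In dpkg's version ordering, "+" sorts after the empty string, so "2.10-2+local1" beats "2.10-2" but still loses to the next upload "2.10-3":

```shell
# Hypothetical helper: derive a local version string for debian/changelog.
# "2.10-2+local1" sorts above "2.10-2" (your build wins locally) but
# below "2.10-3" (the next archive upload replaces it).
local_version() {
    printf '%s+local1\n' "$1"
}

local_version 2.10-2    # prints 2.10-2+local1
```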
There is a more comprehensive tutorial, with example runes, in the dgit-user(7) manpage. (There is of course complete reference documentation, but you don't need to bother reading it.)
Objections
But I don't want to learn yet another tool
One of dgit's main goals is to save people from learning things they don't need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising.
So: don't learn dgit. Just run it and it will be fine :-).
Shouldn't I be using official Debian git repos?
Absolutely not.
Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program but without the security patches. Yikes!
I discussed this in more detail in 2021 in another blog post plugging dgit.
Gosh, is Debian really this bad?
Yes. On behalf of the Debian Project, I apologise.
Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven't always been pretty, either technically or socially.)
Sadly this means that sometimes much needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for.
I'm a Debian maintainer. You tell me dgit is something totally different!
dgit is, in fact, a general bidirectional gateway between the Debian archive and git.
So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It's more convenient and more reliable than git-buildpackage and dput runes, and produces better output for users. You too can start to forget how to deal with source packages!
A full treatment of this is beyond the scope of this blog post.

What's new in the Linux kernel. Apologies for the poor audio quality; I will use a different microphone if I do this again.
Welcome to the September 2023 report from the Reproducible Builds project
In these reports, we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
Andreas Herrmann gave a talk at All Systems Go 2023 titled "Fast, correct, reproducible builds with Nix and Bazel". Quoting from the talk description:
You will be introduced to Google's open source build system Bazel, and will learn how it provides fast builds, how correctness and reproducibility are relevant, and how Bazel tries to ensure correctness. But we will also see where Bazel falls short in ensuring correctness and reproducibility. You will [also] learn about the purely functional package manager Nix and how it approaches correctness and build isolation. And we will see where Bazel has an advantage over Nix when it comes to providing fast feedback during development.
Andreas also shows how you can get the best of both worlds and combine Nix and Bazel, too. A video of the talk is available.
file(1) version 5.45 [ ] and updated some documentation [ ]. In addition, Vagrant Cascadian extended support for GNU Guix [ ][ ] and updated the version in that distribution as well [ ].
BUILDSPEC.md file. [ ] And Fay Stegerman fixed the builds failing because of a YAML syntax error.
.dsc file modulo the GPG signature. This month, however, Russ Allbery closed the bug due to concerns about the viability of source reproducibility.
linuxsampler (benchmarking issue)
antlr3 (date)
rpm (embeds too many build details)
seamonkey (date)
conky (date and ordering-related issue)
lsp-plugins-shared (date/copyright year issue)
build-compare
helix (ASLR-related non-determinism)
intel-graphics-compiler (ASLR)
sphinxcontrib-mermaid
mkdocs-material
apophenia
lapackpp
blaspp
mysql-connector-java, java-21-openjdk, apache-ivy, maven-assembly-plugin, eclipse, antlr3, groovy18, hbci4java, ini4j, hppc, checkstyle, glassfish-jaxb, tycho, xmvn, mockito, languagetool, json-lib, jnr-unixsocket, jnr-ffi, jnr-enxio, jboss-jaxrs-2.0-api, istack-commons, rxtx-java, glassfish-jaxb, glassfish-hk2, findbugs, docker-client-java, maven, xmvn-connector-ivy, xmlstreambuffer, checkstyle, cglib, bean-validation-api, aws-sdk-java, javapackages-tools, ant, scala, osgi-service-log, jmdns, xml-security, super-csv, osgi-service-jdbc, msv, junit5, jsr-311, jersey, itextpdf, httpcomponents-asyncclient, ed25519-java, jnacl, javaparser, picocli, freemarker, extra166y, javaparser, xstream, woodstox-core, uom-lib, unit-api, uncommons-maths, tycho, treelayout, tiger-types, super-csv, stax-ex, stax2-api, sqlite-jdbc, reflectasm, prometheus-simpleclient-java, powermock, paranamer, opennlp, netty3, mybatis, morfologik-stemming, minlog, maven-archetype, mariadb-java-client, logback, kryo, jsonp, jopt-simple, jnr-posix, jnr-constants, jnr-a64asm, jfreechart, jffi, jetty-schemas, jetty-minimal, jeromq, jctools, jcsp, jboss-websocket-1.0-api, jboss-marshalling, jboss-logmanager, jboss-logging, javaewah, jatl, janino, jackson-modules-base, jackson-jaxrs-providers, jackson-datatypes-collections, jackson-dataformat-xml, jackson-dataformats-text, jackson-dataformats-binary, indriya, google-gson, glassfish-websocket-api, glassfish-transaction-api, glassfish-jsp, glassfish-jax-rs-api, glassfish-hk2, glassfish-fastinfoset, felix-scr, felix-gogo-shell, felix-gogo-command, disruptor, apache-commons-ognl, apache-commons-math, apache-commons-csv, antlr4, jettison, sisu, maven
armhf and i386 builds due to Debian bug #1052257. [ ][ ][ ][ ]
ionice priority. [ ]
dinstall again. [ ]
schroot running the tested suite. [ ][ ]
diffoscope --version (as suggested by Fay Stegerman on our mailing list) [ ], worked on an openQA credential issue [ ] and also made some changes to the machine-readable reproducible metadata, reproducible-tracker.json [ ]. Lastly, Roland Clobus added instructions for manual configuration of the openQA secrets [ ].
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
debian-devel (first and second) did not end with an agreement on how this mechanism could work. The central problem was the ability to properly trace uploads back to the ones who authorised them. Now, several years later and after re-reading those e-mail threads, I again was stopped at the question: can we do this? Yes, it would not be just "git tag", but could we make do with something close enough?
I have some rudimentary code ready to actually do uploads from the CI. However, the list of caveats is currently pretty long. Yes, it works in principle. It is still disabled here, because in practice it does not yet work.
Problems with this setup
So what are the problems?
It requires the git tag to include both signed files for a successful source upload. This is solved by a new tool that could be a git sub-command. It just creates the source package, signs it, and adds the signed files describing the upload (the .dsc and .changes files) to the tag to be pushed. The CI then extracts the signed files from the tag message and does its work as normal.
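As an illustration of the extraction step (entirely my sketch - the post does not specify the tool's tag format, so the BEGIN/END markers here are invented), the CI side could pull an embedded file out of a saved tag message like this:

```shell
# Hypothetical: print the lines between invented BEGIN/END markers,
# i.e. an embedded .dsc carried inside the tag message.
extract_dsc() {
    awk '/^-----BEGIN DSC-----$/ { in_dsc = 1; next }
         /^-----END DSC-----$/   { in_dsc = 0 }
         in_dsc' "$1"
}

# usage sketch: git cat-file -p "$TAG" > msg && extract_dsc msg
```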
It requires a sufficiently reproducible build for source packages. Right now it is only known to work with the special 3.0 (gitarchive) source format, but even that requires the latest version of this format. No idea if it is possible to use others, like 3.0 (quilt), for this purpose.
The shared GitLab runners provided by Salsa do not allow ftp access to the outside. But Debian still uses ftp to do uploads. At least if you don't want to share your ssh key, which can't be restricted to uploads only - though ssh would not work either. And as the current host for those builds, the Google Cloud Platform, does not provide connection tracking support for ftp, there is no easy way to allow that without just allowing everything. So we currently have no way to actually perform uploads from this platform.
Further work
As this is code running in a CI under the control of the developer, we can easily do other workflows.
Some teams use workflows that tag only after acceptance into the Debian archive.
Or they don't use tags at all.
With some other markers, like variables or branch names, this support can be expanded easily.
Unrelated to this task here, we might want to think about tying the .changes files for uploads to the target archive. As this code makes all of them readily available in the form of tag messages, replaying them into other archives might be more of a concern now.
Conclusion
So to come back to the question: yes, we can. We can prepare uploads using our CI in a way that they would be accepted into the Debian archive.
It just needs some more work on infrastructure.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Originally posted 2023-08-13, minimally edited 2023-08-15, which changed the timestamp and URL.