python-asyncclick, python-pyaarlo, and prepared updates for python-ring-doorbell and simplemonitor.

adduser, apt-listchanges, debconf and shadow.

base-files, which could lead to an unpack error from dpkg. This is now prevented by having base-files.preinst error out.

qemu-user, when emulation is required for execution during cross compilation.

lprint and magicfilter, to fix RC-bugs that appeared due to the introduction of gcc-14.

rbtlog
A bootstrappable build is one that builds existing software from scratch, for example building GCC without relying on an existing copy of GCC. In 2023, the Guix project announced that it had reduced the size of the binary bootstrap seed needed to build its operating system to just 357 bytes, not counting the Linux kernel required to run the build process. The article goes on to describe that the live-bootstrap project has now gone a step further and removed the need for an existing kernel at all, and concludes:
The real benefit of bootstrappable builds comes from a few things. Like reproducible builds, they can make users more confident that the binary packages downloaded from a package mirror really do correspond to the open-source project whose source code they can inspect. Bootstrappable builds have also had positive effects on the complexity of building a Linux distribution from scratch [ ]. But most of all, bootstrappable builds are a boon to the longevity of our software ecosystem. It's easy for old software to become unbuildable. By having a well-known, self-contained chain of software that can build itself from a small seed, in a variety of environments, bootstrappable builds can help ensure that today's software is not lost, no matter where the open-source community goes from here.
.deb
files from our tests.reproducible-builds.org testing framework and the ones in the official Debian archive. Following up from a previous post on the reproducibility of Trisquel, Simon notes that typically "[the] rebuilds do not match the official packages, even when they say the package is reproducible". Simon correctly identifies that the purpose of [these] rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.
However, Simon's post swiftly moves on to announce a new tool called debdistrebuild that performs rebuilds of the difference between two distributions in a GitLab pipeline and displays diffoscope output for further analysis.
Reproducible Central is an initiative that curates a list of reproducible Maven libraries, but the list is limited and challenging to maintain due to manual efforts. [We] investigate the feasibility of automatically finding the source code of a library from its Maven release and recovering information about the original release environment. Our tool, AROMA, can obtain this critical information from the artifact and the source repository through several heuristics and we use the results for reproduction attempts of Maven packages. Overall, our approach achieves an accuracy of up to 99.5% when compared field-by-field to the existing manual approach [and] we reveal that automatic reproducibility is feasible for 23.4% of the Maven packages using AROMA, and 8% of these packages are fully reproducible.
rattler-build
, noting that, as you can imagine, "building packages reproducibly on Windows is the hardest challenge (so far!)". Nichita goes on to mention that the Apple ecosystem appears to be using ZERO_AR_DATE
over SOURCE_DATE_EPOCH
. [ ]
CreationDate
metadata values.
SOURCE_DATE_EPOCH
is configured, Sphinx projects that have configured their copyright notices using dynamic elements can produce nonsensical output under some circumstances. James' query ended up generating a number of replies.
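A typical source of such breakage is a conf.py that derives the copyright year dynamically. A minimal sketch (the function name and notice text are illustrative, not from the thread) that honours SOURCE_DATE_EPOCH when it is set:

```python
import datetime
import os
import time

def copyright_year():
    # Use SOURCE_DATE_EPOCH when set so the rendered copyright notice
    # is reproducible; fall back to the current build time otherwise.
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return datetime.datetime.fromtimestamp(epoch, datetime.timezone.utc).year

# In a hypothetical Sphinx conf.py:
# copyright = f"2009-{copyright_year()}, Example Author"
```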
rb-core@lists.reproducible-builds.org
and let us know. [ ]
rbtlog
On our mailing list, Fay Stegerman announced a new Reproducible Builds collaboration in the Android ecosystem:
We are pleased to announce "Reproducible Builds, special client support and more in our repo": a collaboration between various independent interoperable projects: the IzzyOnDroid team, 3rd-party clients Droid-ify & Neo Store, and rbtlog
(part of my collection of tools for Android Reproducible Builds) to bring Reproducible Builds to IzzyOnDroid and the wider Android ecosystem.
[S]oftware repositories are a vital component of software development and release, with packages downloaded both for direct use and to use as dependencies for other software. Further, when software is updated due to patched vulnerabilities or new features, it is vital that users are able to see and install this patched version of the software. However, this process of updating software can also be the source of attack. To address these attacks, secure software update systems have been proposed. However, these secure software update systems have seen barriers to widespread adoption. The Update Framework (TUF) was introduced in 2010 to address several attacks on software update systems including repository compromise, rollback attacks, and arbitrary software installation. Despite this, compromises continue to occur, with millions of users impacted by such compromises. My work has addressed substantial challenges to adoption of secure software update systems grounded in an understanding of practical concerns. Work with industry and academic communities provided opportunities to discover challenges, expand adoption, and raise awareness about secure software updates. [ ]
ordering_differences_in_pkg_info
.
i386
binaries on the i386
architecture is different when building i386
binaries under amd64
. The issue was narrowed down to x87 excess precision, which can result in slightly different register choices when the compiler is hosted on x86_64 or i386, and a fix was committed. [ ]
.apk
files signed with the latest apksigner
could no longer be verified as reproducible. Fay identified the issue as follows:
Since build-tools >= 35.0.0-rc1, backwards-incompatible changes to apksigner break apksigcopier, as it now by default forcibly replaces existing alignment padding and changed the default page alignment from 4k to 16k (same as Android Gradle Plugin >= 8.3, so the latter is only an issue when using older AGP). [ ]
She documented multiple available workarounds and filed a bug in Google's issue tracker.
272
and Mattia Rizzolo uploaded version 273
to Debian, and the following changes were made as well:
- convert utility is from ImageMagick version 6.x; the command-line interface has seemingly changed with the 7.x series of ImageMagick. [ ]
- test_jpeg_image. [ ]
- identify_version method after a refactoring change in a previous commit. [ ]
- assert_diff in the test_openssh_pub_key package. [ ]
- ffmpeg version 7.x, which adds some extra context to the diff. [ ]
- ed25519-format key. [ ][ ][ ]
- debian/source, to make sure not to pack the Python sdist directory into the binary Debian package. [ ]
- SOURCE_DATE_EPOCH page, to include instructions on how to create reproducible .zip files from within Python using the zipfile module. [ ]
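The zipfile recipe referenced above boils down to pinning each member's timestamp and the member order; a minimal sketch (the function name and the fixed DOS epoch date are my own choices, not taken from the documentation page):

```python
import zipfile

def reproducible_zip(out_file, members):
    # members: iterable of (archive_name, bytes). Sort entries and pin
    # each member's date_time so the archive does not depend on
    # filesystem mtimes or directory traversal order.
    with zipfile.ZipFile(out_file, "w") as zf:
        for name, data in sorted(members):
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            info.compress_type = zipfile.ZIP_DEFLATED
            info.external_attr = 0o644 << 16  # fixed Unix permissions
            zf.writestr(info, data)
```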
rbtlog
to the Tools page [ ] and IzzyOnDroid to the Projects page [ ], also ensuring that the latter page was always sorted regardless of the ordering within the input data files. [ ]
rattler-build
to the Projects page. [ ][ ][ ]
- armagetron (date)
- blaspp (hostname)
- cligen (GnuTLS's date)
- cloudflared (date)
- dpdk (Sphinx doctrees)
- fonttosfnt/xorg-x11-fonts (toolchain, date)
- gegl (build machine details)
- gettext-runtime (jar mtime)
- kf6-kirigami + kf6-qqc2-desktop-style (race condition)
- kubernetes1.26 (backport upstream fix for random path)
- lapackpp (hostname)
- latex2html (nochecks)
- libdb-4_8 (.jar modification time)
- librcc (already merged upstream)
- libreoffice (strip .jar mtimes + clucene-core toolchain)
- maliit-keyboard (nocheck)
- nautilus (date)
- openblas (CPU type, fixed)
- openssl-3 (random-related issue)
- python-ruff (ASLR)
- python3 (date, parallelism/race)
- reproducible-faketools (0.5.2)
- sphinx (GZip modification time)
- sphinxcontrib (gzip mtime)

bremner access to the ionos7 node. [ ][ ]

#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
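Several of the mtime-related fixes listed above (the sphinx and sphinxcontrib gzip modification times, the .jar mtimes) share one root cause: archive formats that embed a timestamp in their header. A sketch of the usual fix in Python:

```python
import gzip

def reproducible_gzip(data):
    # gzip embeds the current time in its header by default; pinning
    # mtime=0 makes repeated compressions of the same input identical.
    return gzip.compress(data, mtime=0)
```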
strace
output and comparing debug logs between versions.
There are still some regressions to sort out, including a problem with
socket activation, and problems in
libssh2 and
Twisted due to DSA now
being disabled at compile-time.
Speaking of DSA, I wrote a release
note
for this change, which is now merged.
GCC 14 regressions
I fixed a number of build failures with GCC 14, mostly in my older packages:
grub (legacy),
imaptool,
kali,
knews, and
vigor.
autopkgtest
I contributed a change to allow maintaining Incus container and VM images
in
parallel.
I use both of these regularly (containers are faster, but some tests need
full machine isolation), and the build tools previously didn't handle that
very well.
I now have a script that just does this regularly to keep my images up to
date (although for now I'm running this with PATH
pointing to autopkgtest
from git, since my change hasn't been released yet):
RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie
imp
module.
I added non-superficial autopkgtests to a number of packages, including
httmock, py-macaroon-bakery, python-libnacl, six, and storm.
I switched a number of packages to build using PEP
517 rather than calling setup.py
directly, including alembic, constantly, hyperlink, isort, khard,
python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working
through the
missing-prerequisite-for-pyproject-backend
Lintian tag, but there's still lots to do.)
I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup,
python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and
uncertainties to new upstream versions. In the process, I added myself to
Uploaders
for isort, since the previous primary uploader has
retired.
Other odds and ends
I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and
/etc/subgid in base-passwd, since the
login package is no longer essential.
I fixed a wireless-tools regression due
to iproute2 dropping its (/usr)/sbin/ip
compatibility symlink.
I applied a suggestion by Petter Reinholdtsen to add AppStream
metainfo to pcmciautils.
allowas-in
directive.1
In the example below, we have four routers in a single confederation, each in
its own sub-AS. R0
originates the 2001:db8::1/128
prefix. R1
, R2
, and
R3
forward this prefix to the next router in the loop.
The router configurations are available in a Git repository. They are
running Cisco IOS XR. R2
uses the following configuration for BGP:
router bgp 64502
 bgp confederation peers
  64500
  64501
  64503
 !
 bgp confederation identifier 64496
 bgp router-id 1.0.0.2
 address-family ipv6 unicast
 !
 neighbor 2001:db8::2:0
  remote-as 64501
  description R1
  address-family ipv6 unicast
  !
 !
 neighbor 2001:db8::3:1
  remote-as 64503
  advertisement-interval 0
  description R3
  address-family ipv6 unicast
   next-hop-self
   as-override
  !
 !
!
R3
uses both as-override
and next-hop-self
directives.
The latter is only necessary to make the announced prefix valid, as there is no
IGP in this example.2
Here's the sequence of events leading to an infinite AS path:
R0
sends the prefix to R1
with AS path (64500)
.3
R1
selects it as the best path, forwarding it to R2
with AS
path (64501 64500)
.
R2
selects it as the best path, forwarding it to R3
with AS
path (64500 64501 64502)
.
R3
selects it as the best path. It would forward it to R1
with AS path
(64503 64502 64501 64500)
, but due to AS override, it substitutes R1's ASN with its own, forwarding it with AS path (64503 64502 64503 64500).
R1
accepts the prefix, as its own ASN is not in the AS path. It compares
this new prefix with the one from R0
. Both (64500)
and (64503 64502
64503 64500)
have the same length because confederation sub-ASes don't
contribute to AS path length. The first tie-breaker is the router ID.
R0's router ID (1.0.0.4) is higher than R3's (1.0.0.3). The new
prefix becomes the best path and is forwarded to R2
with AS path (64501
64503 64501 64503 64500)
.
R2
receives the new prefix, replacing the old one. It selects it as the
best path and forwards it to R3
with AS path (64502 64501 64502 64501
64502 64500)
.
R3
receives the new prefix, replacing the old one. It selects it as the
best path and forwards it to R0
with AS path (64503 64502 64503 64502
64503 64502 64500)
.
R1
receives the new prefix, replacing the old one. Again, it competes with
the prefix from R0
, and again the new prefix wins due to the lower router
ID. The prefix is forwarded to R2
with AS path (64501 64503 64501 64503
64501 64503 64501 64500)
.
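The tie-break above relies on AS_CONFED_SEQUENCE members being skipped when path lengths are compared; a small sketch of just that comparison (my own illustration, with the sub-AS numbers taken from this lab):

```python
CONFED_MEMBERS = {64500, 64501, 64502, 64503}  # sub-ASes of confederation 64496

def effective_as_path_length(as_path):
    # ASNs belonging to an AS_CONFED_SEQUENCE do not count toward the
    # AS path length used in best-path selection.
    return sum(1 for asn in as_path if asn not in CONFED_MEMBERS)

# R0's direct path and the looping path tie on effective length, so
# selection falls through to the router-ID comparison.
assert effective_as_path_length([64500]) == \
    effective_as_path_length([64503, 64502, 64503, 64500])
```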
R1
views the looping prefix as follows:4
RP/0/RP0/CPU0:R1#show bgp ipv6 u 2001:db8::1/128 bestpath-compare BGP routing table entry for 2001:db8::1/128 Last Modified: Jul 28 10:23:05.560 for 00:00:00 Paths: (2 available, best #2) Path #1: Received by speaker 0 Not advertised to any peer (64500) 2001:db8::1:0 from 2001:db8::1:0 (1.0.0.4), if-handle 0x00000000 Origin IGP, metric 0, localpref 100, valid, confed-external Received Path ID 0, Local Path ID 0, version 0 Higher router ID than best path (path #2) Path #2: Received by speaker 0 Advertised IPv6 Unicast paths to peers (in unique update groups): 2001:db8::2:1 (64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64503 64502 64500) 2001:db8::4:0 from 2001:db8::4:0 (1.0.0.3), if-handle 0x00000000 Origin IGP, metric 0, localpref 100, valid, confed-external, best, group-best Received Path ID 0, Local Path ID 1, version 37 best of AS 64503, Overall best
allowconfedas-in
instead. It's available since IOS XR 7.11.
advertisement-interval
directive. Its default value is 30 seconds for eBGP
peers (even in the same confederation). R1
and R2
set this value to 0,
while R3
sets it to 2 seconds. This gives some time to watch the AS path
grow.
as-override
on an
AS path with a too long AS_CONFED_SEQUENCE
. This should be fixed around
24.3.1.
/etc/systemd/network/dhcp.network
:
[Match]
Name=en* wl*

[Network]
DHCP=yes

This correctly configured IPv4 via DHCP, with the small caveat that it doesn't update
/etc/resolv.conf
without installing resolvconf or systemd-resolved.
However, networkd's default IPv6 settings really are not suitable for public consumption. The key issues (see Bug #1076432):
IPv6PrivacyExtensions=yes
to the above exposed another issue: instead of using the fe80 address generated by the kernel, networkd adds a new one.

allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)

The best thing that Dell could do for their customers is to make this free software and allow the community to fix some of these issues.
policy_module(dellsrvadmin, 1.0.0)

require {
  type dmidecode_exec_t;
  type udev_t;
  type device_t;
  type event_device_t;
  type mon_local_test_t;
}

type dellsrvadmin_t;
type dellsrvadmin_exec_t;
init_daemon_domain(dellsrvadmin_t, dellsrvadmin_exec_t)

type dell_datamgrd_t;
type dell_datamgrd_exec_t;
init_daemon_domain(dell_datamgrd_t, dell_datamgrd_t)

type dellsrvadmin_var_t;
files_type(dellsrvadmin_var_t)

domain_transition_pattern(udev_t, dellsrvadmin_exec_t, dellsrvadmin_t)
modutils_domtrans(dellsrvadmin_t)

allow dell_datamgrd_t device_t:dir rw_dir_perms;
allow dell_datamgrd_t device_t:chr_file create;
allow dell_datamgrd_t event_device_t:chr_file { read write };
allow dell_datamgrd_t self:process signal;
allow dell_datamgrd_t self:fifo_file rw_file_perms;
allow dell_datamgrd_t self:sem create_sem_perms;
allow dell_datamgrd_t self:capability { dac_override dac_read_search mknod sys_rawio sys_admin };
allow dell_datamgrd_t self:lockdown integrity;
allow dell_datamgrd_t self:unix_dgram_socket create_socket_perms;
allow dell_datamgrd_t self:netlink_route_socket r_netlink_socket_perms;
modutils_domtrans(dell_datamgrd_t)
can_exec(dell_datamgrd_t, dmidecode_exec_t)
allow dell_datamgrd_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:file manage_file_perms;
allow dell_datamgrd_t dellsrvadmin_var_t:lnk_file read;
allow dell_datamgrd_t dellsrvadmin_var_t:sock_file manage_file_perms;
kernel_read_network_state(dell_datamgrd_t)
kernel_read_system_state(dell_datamgrd_t)
kernel_search_fs_sysctls(dell_datamgrd_t)
kernel_read_vm_overcommit_sysctl(dell_datamgrd_t)
# for /proc/bus/pci/*
kernel_write_proc_files(dell_datamgrd_t)
corecmd_exec_bin(dell_datamgrd_t)
corecmd_exec_shell(dell_datamgrd_t)
corecmd_shell_entry_type(dell_datamgrd_t)
dev_rx_raw_memory(dell_datamgrd_t)
dev_rw_generic_chr_files(dell_datamgrd_t)
dev_rw_ipmi_dev(dell_datamgrd_t)
dev_rw_sysfs(dell_datamgrd_t)
files_search_tmp(dell_datamgrd_t)
files_read_etc_files(dell_datamgrd_t)
files_read_etc_symlinks(dell_datamgrd_t)
files_read_usr_files(dell_datamgrd_t)
logging_search_logs(dell_datamgrd_t)
miscfiles_read_localization(dell_datamgrd_t)
storage_raw_read_fixed_disk(dell_datamgrd_t)
storage_raw_write_fixed_disk(dell_datamgrd_t)

can_exec(mon_local_test_t, dellsrvadmin_exec_t)
allow mon_local_test_t dellsrvadmin_var_t:dir search;
allow mon_local_test_t dellsrvadmin_var_t:file read_file_perms;
allow mon_local_test_t dellsrvadmin_var_t:file setattr;
allow mon_local_test_t dellsrvadmin_var_t:sock_file write;
allow mon_local_test_t dell_datamgrd_t:unix_stream_socket connectto;
allow mon_local_test_t self:sem { create read write destroy unix_write };

allow dellsrvadmin_t self:process signal;
allow dellsrvadmin_t self:lockdown integrity;
allow dellsrvadmin_t self:sem create_sem_perms;
allow dellsrvadmin_t self:fifo_file rw_file_perms;
allow dellsrvadmin_t self:packet_socket create;
allow dellsrvadmin_t self:unix_stream_socket { connectto create_stream_socket_perms };
allow dellsrvadmin_t self:capability { sys_admin sys_rawio };
dev_read_raw_memory(dellsrvadmin_t)
dev_rw_sysfs(dellsrvadmin_t)
dev_rx_raw_memory(dellsrvadmin_t)
allow dellsrvadmin_t dellsrvadmin_var_t:dir rw_dir_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:file manage_file_perms;
allow dellsrvadmin_t dellsrvadmin_var_t:lnk_file read;
allow dellsrvadmin_t dellsrvadmin_var_t:sock_file write;
allow dellsrvadmin_t dell_datamgrd_t:unix_stream_socket connectto;
kernel_read_network_state(dellsrvadmin_t)
kernel_read_system_state(dellsrvadmin_t)
kernel_search_fs_sysctls(dellsrvadmin_t)
kernel_read_vm_overcommit_sysctl(dellsrvadmin_t)
corecmd_exec_bin(dellsrvadmin_t)
corecmd_exec_shell(dellsrvadmin_t)
corecmd_shell_entry_type(dellsrvadmin_t)
files_read_etc_files(dellsrvadmin_t)
files_read_etc_symlinks(dellsrvadmin_t)
files_read_usr_files(dellsrvadmin_t)
logging_search_logs(dellsrvadmin_t)
miscfiles_read_localization(dellsrvadmin_t)

Here is dellsrvadmin.fc:
/opt/dell/srvadmin/sbin/.*  --  gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/sbin/dsm_sa_datamgrd  --  gen_context(system_u:object_r:dell_datamgrd_t,s0)
/opt/dell/srvadmin/bin/.*  --  gen_context(system_u:object_r:dellsrvadmin_exec_t,s0)
/opt/dell/srvadmin/var(/.*)?  gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
/opt/dell/srvadmin/etc/srvadmin-isvc/ini(/.*)?  gen_context(system_u:object_r:dellsrvadmin_var_t,s0)
meson.build
:
project('example', 'cpp',
        version: '1.0',
        license: ' ',
        default_options: ['warning_level=everything', 'cpp_std=c++17'])
subdir('example')
example/meson.build
:
test_example = executable('example-test', ['main.cc'])
example/string.h
:
/* This file intentionally left empty */
example/main.cc
:
#include <cstring>
int main(int argc, const char* argv[])
{
    std::string foo("foo");
    return 0;
}
$ meson setup builddir
The Meson build system
Version: 1.0.1
Source dir: /home/enrico/dev/deb/wobble-repr
Build dir: /home/enrico/dev/deb/wobble-repr/builddir
Build type: native build
Project name: example
Project version: 1.0
C++ compiler for the host machine: ccache c++ (gcc 12.2.0 "c++ (Debian 12.2.0-14) 12.2.0")
C++ linker for the host machine: c++ ld.bfd 2.40
Host machine cpu family: x86_64
Host machine cpu: x86_64
Build targets in project: 1
Found ninja-1.11.1 at /usr/bin/ninja
$ ninja -C builddir
ninja: Entering directory `builddir'
[1/2] Compiling C++ object example/example-test.p/main.cc.o
FAILED: example/example-test.p/main.cc.o
ccache c++ -Iexample/example-test.p -Iexample -I../example -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Wpedantic -Wcast-qual -Wconversion -Wfloat-equal -Wformat=2 -Winline -Wmissing-declarations -Wredundant-decls -Wshadow -Wundef -Wuninitialized -Wwrite-strings -Wdisabled-optimization -Wpacked -Wpadded -Wmultichar -Wswitch-default -Wswitch-enum -Wunused-macros -Wmissing-include-dirs -Wunsafe-loop-optimizations -Wstack-protector -Wstrict-overflow=5 -Warray-bounds=2 -Wlogical-op -Wstrict-aliasing=3 -Wvla -Wdouble-promotion -Wsuggest-attribute=const -Wsuggest-attribute=noreturn -Wsuggest-attribute=pure -Wtrampolines -Wvector-operation-performance -Wsuggest-attribute=format -Wdate-time -Wformat-signedness -Wnormalized=nfc -Wduplicated-cond -Wnull-dereference -Wshift-negative-value -Wshift-overflow=2 -Wunused-const-variable=2 -Walloca -Walloc-zero -Wformat-overflow=2 -Wformat-truncation=2 -Wstringop-overflow=3 -Wduplicated-branches -Wattribute-alias=2 -Wcast-align=strict -Wsuggest-attribute=cold -Wsuggest-attribute=malloc -Wanalyzer-too-complex -Warith-conversion -Wbidi-chars=ucn -Wopenacc-parallelism -Wtrivial-auto-var-init -Wctor-dtor-privacy -Weffc++ -Wnon-virtual-dtor -Wold-style-cast -Woverloaded-virtual -Wsign-promo -Wstrict-null-sentinel -Wnoexcept -Wzero-as-null-pointer-constant -Wabi-tag -Wuseless-cast -Wconditionally-supported -Wsuggest-final-methods -Wsuggest-final-types -Wsuggest-override -Wmultiple-inheritance -Wplacement-new=2 -Wvirtual-inheritance -Waligned-new=all -Wnoexcept-type -Wregister -Wcatch-value=3 -Wextra-semi -Wdeprecated-copy-dtor -Wredundant-move -Wcomma-subscript -Wmismatched-tags -Wredundant-tags -Wvolatile -Wdeprecated-enum-enum-conversion -Wdeprecated-enum-float-conversion -Winvalid-imported-macros -std=c++17 -O0 -g -MD -MQ example/example-test.p/main.cc.o -MF example/example-test.p/main.cc.o.d -o example/example-test.p/main.cc.o -c ../example/main.cc
In file included from ../example/main.cc:1:
/usr/include/c++/12/cstring:77:11: error: 'memchr' has not been declared in '::'
   77 |   using ::memchr;
      |           ^~~~~~
/usr/include/c++/12/cstring:78:11: error: 'memcmp' has not been declared in '::'
   78 |   using ::memcmp;
      |           ^~~~~~
/usr/include/c++/12/cstring:79:11: error: 'memcpy' has not been declared in '::'
   79 |   using ::memcpy;
      |           ^~~~~~
/usr/include/c++/12/cstring:80:11: error: 'memmove' has not been declared in '::'
   80 |   using ::memmove;
      |           ^~~~~~~
Another thing to note is that include_directories
adds both the source
directory and corresponding build directory to include path, so you don't
have to care.
It seems that I have to care after all.
Thankfully there is an implicit_include_directories
setting
that can turn this off if needed.
Its documentation is not as easy to find as I'd like (kudos to Kangie on IRC),
and hopefully this blog post will make it easier for me to find it in the
future.
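The same pitfall exists outside C++. As a quick analogy (my own illustration, not from the post), an empty project-local string.py shadows Python's standard string module once the project directory comes first on the search path:

```python
import os
import subprocess
import sys
import tempfile

def shadowed_string_module():
    # Create an empty string.py, then ask a child interpreter (whose
    # sys.path starts with the current directory when run with -c)
    # whether the real stdlib attribute is still visible.
    with tempfile.TemporaryDirectory() as d:
        open(os.path.join(d, "string.py"), "w").close()
        code = "import string; print(hasattr(string, 'ascii_lowercase'))"
        out = subprocess.run([sys.executable, "-c", code],
                             cwd=d, capture_output=True, text=True)
        return out.stdout.strip()
```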
.Call(symbol)
but CRAN also found two packages
regressing, which then took them five days to get back to
us. One issue was known; another
did not reproduce under our tests against over 2800 reverse dependencies
leading to the eventual release today. Yay. Checks are good and
appreciated, and it does take time by humans to review them.
This release continues with the six-months January-July cycle started
with release
1.0.5 in July 2020. As a reminder, we do of course make interim
snapshot dev or rc releases available via the Rcpp drat repo as well as
the r-universe page and
repo, and strongly encourage their use and testing; I run my systems
with these versions which tend to work just as well, and are also fully
tested against all reverse-dependencies.
Rcpp has long established itself
as the most popular way of enhancing R with C or C++ code. Right now,
2867 packages on CRAN depend on
Rcpp for making analytical code go
faster and further, along with 256 in BioConductor. On CRAN, 13.6% of
all packages depend (directly) on Rcpp, and 59.9% of all compiled packages
do. From the cloud mirror of CRAN (which is but a subset of all CRAN
downloads), Rcpp has been downloaded
86.3 million times. The two published papers (also included in the
package as preprint vignettes) have, respectively, 1848 (JSS, 2011) and 324 (TAS, 2018)
citations, while the book (Springer useR!,
2013) has another 641.
This release is incremental as usual, generally preserving existing
capabilities faithfully while smoothing out corners and/or extending
slightly, sometimes in response to changing and tightened demands from
CRAN or R standards. The move towards a
more standardized approach for the C API of R leads to a few changes;
Kevin did most of the PRs for this. Andrew Johnson also provided a very
nice PR to update internals taking advantage of variadic templates.
The full list below details all changes, their respective PRs and, if
applicable, issue tickets. Big thanks from all of us to all
contributors!
Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues). If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in Rcpp release version 1.0.13 (2024-07-11)
- Changes in Rcpp API:
- Set R_NO_REMAP if not already defined (Dirk in #1296)
- Add variadic templates to be used instead of generated code (Andrew Johnson in #1303)
- Count variables were switched to size_t to avoid warnings about conversion-narrowing (Dirk in #1307)
- Rcpp now avoids the usage of the (non-API) DATAPTR function when accessing the contents of Rcpp Vector objects where possible. (Kevin in #1310)
- Rcpp now emits an R warning on out-of-bounds Vector accesses. This may become an error in a future Rcpp release. (Kevin in #1310)
- Switch VECTOR_PTR and STRING_PTR to new API-compliant RO variants (Kevin in #1317 fixing #1316)
- Changes in Rcpp Deployment:
- Small updates to the CI test containers have been made (#1304)
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
$ sudo apt update
$ sudo apt install nginx
Output
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-02-12 09:59:20 UTC; 3h ago
Docs: man:nginx(8)
Main PID: 2887 (nginx)
Tasks: 2 (limit: 1132)
Memory: 4.2M
CPU: 81ms
CGroup: /system.slice/nginx.service
2887 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
2890 nginx: worker process
$ sudo apt install certbot python3-certbot-nginx
$ sudo vi /etc/nginx/sites-available/example.com
server {
    listen 80;
    root /var/www/html/;
    index index.html;
    server_name example.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location /test.html {
        try_files $uri $uri/ =404;
        auth_basic "admin area";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
$ sudo nginx -t
$ sudo certbot --nginx -d example.com
Output
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2024-05-12. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
$ sudo nginx -t
$ sudo systemctl reload nginx
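Once issuance works, it can be handy to verify what the server is actually presenting and when renewal is due. A sketch using Python's ssl module (the hostname is whatever you configured; this helper is not part of certbot):

```python
import datetime
import socket
import ssl

def days_remaining(cert, now=None):
    # cert is the dict returned by SSLSocket.getpeercert(); its
    # notAfter field uses a fixed "%b %d %H:%M:%S %Y %Z" format.
    not_after = datetime.datetime.strptime(cert["notAfter"],
                                           "%b %d %H:%M:%S %Y %Z")
    now = now or datetime.datetime.utcnow()
    return (not_after - now).days

def server_cert(hostname, port=443):
    # Fetch the peer certificate over a verifying TLS connection.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# e.g.: days_remaining(server_cert("example.com"))
```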
To understand software signing in practice, we interviewed 18 high-ranking industry practitioners across 13 organizations. We provide possible impacts of experienced software supply chain failures, security standards, and regulations on software signing adoption. We also study the challenges that affect an effective software signing implementation.
Code signing enables software developers to digitally sign their code using cryptographic keys, thereby associating the code to their identity. This allows users to verify the authenticity and integrity of the software, ensuring it has not been tampered with. Next-generation software signing such as Sigstore and OpenPubKey simplify code signing by providing streamlined mechanisms to verify and link signer identities to the public key. However, their designs have vulnerabilities: reliance on an identity provider introduces a single point of failure, and the failure to follow the principle of least privilege on the client side increases security risks. We introduce Diverse Identity Verification (DiVerify) scheme, which strengthens the security guarantees of next-generation software signing by leveraging threshold identity validations and scope mechanisms.
[ ] resulted in an extension of GNU make which is called rmake
, where diffoscope, a tool for detecting differences between a large number of file types, was integrated into the workflow of make. rmake was later used to answer the posed research questions for this thesis. We found that different build paths and offsets are a big problem, as three out of three tested Free and Open Source Software projects all contained these variations. The results also showed that gcc's optimisation levels did not affect reproducibility, but link-time optimisation embeds a lot of unreproducible information in build artefacts. Lastly, the results showed that build paths, build IDs and randomness are the three most common groups of variations encountered in the wild, and potential solutions for some variations were proposed.
The thesis serves as an introduction to the concept of reproducibility in software engineering, offering a comprehensive overview of formalizations using mathematical notations for key concepts and an empirical evaluation of several key tools. By exploring various case studies, methodologies and tools, the research aims to provide actionable insights for practitioners and researchers alike.
qutebrowser
, samba
and systemd
), Chris Lamb filing Debian bug #1074214 against the fastfetch
package and Arnout Engelen proposing fixes to refind
and for the Scala compiler [ ].
270
and 271
) to Debian, and made the following changes as well:
- Build-Depends on liblz4-tool in order to fix Debian bug #1072575. [ ]
- zipdetails version 4.004 that is shipped with Perl 5.40. [ ]

<h4> elements more distinguishable from the <h3> level [ ][ ], as well as adding a guide for Dockerfile reproducibility [ ]. In addition, Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.
virt(32|64)c-armhf
nodes as down. [ ]osuosl4
node in order to debug a regression on the ppc64el
architecture. [ ]osuosl4
node. [ ][ ]/etc/default/jenkins
file with changes performed upstream [ ] and changed how configuration files are handled on the rb-mail1
host. [ ], whilst Vagrant Cascadian documented the failure of the virt32c
and virt64c
nodes after initial investigation [ ].
#reproducible-builds
on irc.oftc.net
.
rb-general@lists.reproducible-builds.org
guix challenge
command.
Or I should say debdistrebuild
has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn't want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep
followed by dpkg-buildpackage
in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild
is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.
So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths resulting in a different NT_GNU_BUILD_ID
causing a mismatch. Some are a bit strange, like a subtle difference in one of perl's header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don't make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and perform other build variations that I don't do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG's gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same versions of build dependencies that were used to produce the original build, or demand that all packages affected by a change in another package are rebuilt centrally until there are no more differences.
The current design of debdistrebuild
uses the latest version of a build dependency that is available in the distribution. We call this an "idempotent rebuild". This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.
Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same versions of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of each build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it raises the question: can we rebuild that earlier version of the build dependency? This eventually circles back to really old versions and bootstrappable builds.
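As a sketch of what that buildinfo parsing could look like: the field name and "pkg (= version)" syntax below follow the deb822-style .buildinfo format, but parse_buildinfo_deps is an illustrative helper, not part of debdistrebuild.

```python
import re

def parse_buildinfo_deps(text):
    """Return {package: version} from the Installed-Build-Depends field
    of a .buildinfo file (deb822-style continuation lines)."""
    deps = {}
    in_field = False
    for line in text.splitlines():
        if line.startswith("Installed-Build-Depends:"):
            in_field = True
            continue
        if in_field:
            if line.startswith(" "):
                # Entries look like " debhelper (= 13.16)," on continuation lines.
                m = re.match(r"\s*([0-9a-z.+-]+)(?::\w+)? \(= ([^)]+)\),?", line)
                if m:
                    deps[m.group(1)] = m.group(2)
            else:
                break  # next field starts; the list is over
    return deps

sample = """\
Format: 1.0
Installed-Build-Depends:
 debhelper (= 13.16),
 gcc-12 (= 12.2.0-14)
Environment: ...
"""
print(parse_buildinfo_deps(sample))  # → {'debhelper': '13.16', 'gcc-12': '12.2.0-14'}
```

With a map like this, a rebuilder could pin each build dependency to the recorded version before invoking dpkg-buildpackage, assuming those versions are still obtainable from an archive.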
While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.
I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.
One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.
Our notion of rebuildability appears thus to be complementary to reproducible-builds.org s definition and bootstrappable.org s definition. Each to their own devices, and Happy Hacking!
Addendum about terminology: With "idempotent rebuild" I am talking about a rebuild of the entire operating system, applied to itself. Compare how you build the latest version of the GNU C Compiler: it first builds itself using whatever system compiler is available (often an earlier version of gcc), which we call step 1. Then step 2 is to build a copy of itself using the compiler built in step 1. The final step 3 is to build another copy of itself using the compiler from step 2. Debian, Ubuntu etc. are at step 1 in this process right now. The output of step 2 and step 3 ought to be bit-by-bit identical, or something is wrong. The comparison between steps 2 and 3 is what I refer to with an idempotent rebuild. Of course, most packages aren't a compiler that can compile itself. However, entire operating systems such as Trisquel, PureOS, Ubuntu or Debian are (hopefully) self-contained systems that ought to be able to rebuild themselves into an identical copy. Or something is amiss. The reproducible builds and bootstrappable builds projects are about improving the quality of step 1. The property I am interested in is the identical rebuild and comparison in steps 2 and 3. I feel the word "idempotent" describes the property I'm interested in well, but I realize there may be better ways to describe this. Ideas welcome!
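The step-2 versus step-3 comparison boils down to checking that two output trees are bit-for-bit identical. A minimal sketch of that check, hashing every file in each tree (illustrative only; a real rebuilder would compare the actual package archives and their metadata):

```python
import hashlib
import os

def tree_digest(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def idempotent(step2_root, step3_root):
    """True when both trees contain the same files with identical content."""
    return tree_digest(step2_root) == tree_digest(step3_root)
```

Comparing digest maps rather than raw bytes also reports which files differ, which is where a tool like diffoscope would then take over.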
Series: Discworld #40
Publisher: Anchor Books
Copyright: 2013
Printing: October 2014
ISBN: 0-8041-6920-9
Format: Trade paperback
Pages: 365
It is said that a soft answer turneth away wrath, but this assertion has a lot to do with hope and was now turning out to be patently inaccurate, since even a well-spoken and thoughtful soft answer could actually drive the wrong kind of person into a state of fury if wrath was what they had in mind, and that was the state the elderly dwarf was now enjoying.

One of the best things about Discworld is Pratchett's ability to drop unexpected bits of wisdom in a sentence or two, or twist a verbal knife in an unexpected and surprising direction. Raising Steam still shows flashes of that ability, but it's buried in run-on sentences, drowned in cliches and repetition, and often left behind as the containing sentence meanders off into the weeds and sputters to a confused halt. The idea is still there; the delivery, sadly, is not.

This is the first Discworld novel that I found mentally taxing to read. Sentences are often so overpacked that they require real effort to untangle, and the untangled meaning rarely feels worth the effort. The individual voice of the characters is almost gone. Vetinari's monologues, rather than being a rare event with dangerous layers, are frequent, rambling, and indecisive, often sounding like an entirely different character than the Vetinari we know. The constant repetition of the name any given character is speaking to was impossible for me to ignore. And the momentum of the story feels wrong; rather than constructing the events of the story in a way that sweeps the reader along, it felt like Pratchett was constantly pushing, trying to convince the reader that trains were the most exciting thing to ever happen to Discworld.

The bones of a good story are here, including further development of dwarf politics from The Fifth Elephant and Thud! and the further fallout of the events of Snuff. There are also glimmers of Pratchett's typically sharp observations and turns of phrase that could have been unearthed and polished.
But at the very least this book needed way more editing and a lot of rewriting. I suspect it could have dropped thirty pages just by tightening the dialogue and removing some of the repetition.

I'm afraid I did not enjoy this. I am a bit of a hard sell for the magic fascination of trains: I love trains, but my model railroad days are behind me and I'm now more interested in them as part of urban transportation policy. Previous Discworld books on technology and social systems did more of the work of drawing the reader in, providing character hooks and additional complexity, and building a firmer foundation than "trains are awesome." The main problem, though, was the quality of the writing, particularly when compared to the previous novels with the same characters. I dragged myself through this book out of a sense of completionism and obligation, and was relieved when I finished it.

This is the first Discworld novel that I don't recommend. I think the only reason to read it is if you want to have read all of Discworld. Otherwise, consider stopping with Snuff and letting it be the send-off for the Ankh-Morpork characters.

Followed by The Shepherd's Crown, a Tiffany Aching story and the last Discworld novel.

Rating: 3 out of 10
To visualize it, with [foo] marking optional parts, the pattern looks like debian/[PACKAGE.][NAME.]STEM[.ARCH]. Detecting whether a given file is in fact a packaging file now boils down to reverse engineering its name against this pattern. Again, so far, it might still look manageable. One major complication is that every part (except ARCH) can contain periods, so a trivial "split by period" is not going to cut it. As an example:
- The package name followed by a period. Optional, but must be the first if present.
- The name segment followed by a period. Optional, but must appear between the package name (if present) and the stem. If the package name is not present, then the name segment must be first.
- The stem. Mandatory.
- An architecture restriction prefixed by a period. Optional, must appear after the stem if present.
debian/g++-3.0.user.service

This example is deliberately crafted to be ambiguous and to show the problem in its full glory. This file name can be parsed in multiple ways:
- Is the stem service or user.service? (Both are known stems, from dh_installsystemd and dh_installsystemduser respectively.) In fact, it can be both at the same time with "clever" usage of --name=user passed to dh_installsystemd.
- The g++-3.0 can be a package prefix or part of the name segment. Even if there is a g++-3.0 package in debian/control, then debhelper (until compat 15) will still happily match this file for the main package if you pass --name=g++-3.0 to the helper. Side bar: Woe is you if there are both a g++-3 and a g++-3.0 package in debian/control, since then we have multiple options for the package prefix! Though I do not think that happens in practice.

Therefore, there are many possible ways to split this filename that all match the pattern but with vastly different meanings and consequences.
A simple solution to the first problem could be to have a static list of known stems. That would get you started, but the debhelper ecosystem thrives on decentralization, so this feels like a mismatch. There is also a second problem with the static list. Namely, a given stem is only "valid" if the command in question is actually in use, which means you now need to dumpster-dive into the mess that is the Turing-complete debhelper configuration file known as debian/rules to fully solve that. Thanks to the Turing-completeness, we will never get a perfect solution from static analysis. Instead, it is time to back out and apply some simplifications. Here is a sample flow:
- We need the possible stems up front to have a chance at all. When multiple stems are an option, go for the longest match (that is, the one with most periods) since --name is rare and "code golfing" is even rarer.
- We can make the package prefix mandatory for files with the name segment. This way, the moment there is something before the stem, we know the package prefix will be part of it and can cut it off. It does not solve the ambiguity if one package name is a prefix of another package name (from the same source), but it is still a lot better. This made its way into debhelper compat 15, and now it is "just" a slow, long way to a better future.
With this logic, you can now:
- Check whether the dh sequencer is used. If so, use some heuristics to figure out which addons are used.
- Delegate to dh_assistant to figure out which commands will be used and which debhelper config file stems it knows about. Here we need to know which sequences are in use from step one (if relevant). Combine this with any other sources for stems you have.
- Deconstruct all files in debian/ against the stems and known package names from debian/control. In theory, dumpster diving after --name options would be helpful here, but personally I skipped that part as I want to keep my debian/rules parsing to an absolute minimum.
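The deconstruction step above can be sketched as a longest-stem-first match. This is an illustrative simplification, not debputy's actual code; in practice known_stems would come from dh_assistant and known_packages from debian/control.

```python
def split_config_name(filename, known_stems, known_packages):
    """Split a debian/ config file name into (package, name, stem),
    or return None when it does not match the pattern.
    Prefers the longest stem (most periods), per the heuristic above."""
    for stem in sorted(known_stems, key=lambda s: s.count("."), reverse=True):
        if filename == stem:
            # Bare stem: applies to the main package.
            return (None, None, stem)
        if filename.endswith("." + stem):
            prefix = filename[: -len(stem) - 1]
            # Compat-15-style rule: a package prefix is mandatory whenever
            # anything precedes the stem.
            for pkg in known_packages:
                if prefix == pkg:
                    return (pkg, None, stem)
                if prefix.startswith(pkg + "."):
                    return (pkg, prefix[len(pkg) + 1:], stem)
    return None

stems = {"install", "service", "user.service"}
pkgs = {"g++-3.0"}
print(split_config_name("g++-3.0.user.service", stems, pkgs))
# → ('g++-3.0', None, 'user.service')
```

Note how the longest-match rule resolves the ambiguous example from earlier: user.service wins over service, so the file is attributed to dh_installsystemduser rather than dh_installsystemd.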
I have added the logic for all these features to debputy, though the documentation association is currently not exposed in a user-facing command. All the others are now diagnostics emitted by debputy in its editor support mode (the debputy lsp server) or via debputy lint. In the editor mode, the diagnostics are currently associated with the package name in debian/control due to technical limitations of how the editor integration works. Some of these features require the latest version of debhelper (a moving target at times). Check with debputy lsp features for the Extra dh support feature, which will be enabled if you have got all you need. Note: The detection currently (mostly) ignores files with architecture restrictions. That might be lifted in the future. However, architecture-restricted config files tend to be rare, so they were not a priority at this point. Additionally, debputy for technical reasons ignores stem typos with multiple matches. That sadly means that typos of debian/docs will often go unreported due to its proximity to debian/dirs and vice versa.
- Provide typo detection of the stem (debian/foo.intsall -> debian/foo.install), with adequate handling of the corner cases (such as debian/*.conf not needing correction into debian/*.config)
- Detect possible invalid package prefix (debian/foo.install without foo being a package). Note this has to be a weak warning unless the package is using debhelper compat 15 or you dumpster dived to validate that dh_install was not passed dh_install --name foo. Agreed, no one should do that, but they can and false positives are the worst kind of positives for a linting tool.
- With some limitations, detect files used without the relevant command being active. As an example, some integration modes of debputy remove dh_install, so a debian/foo.install would not be used.
- Associate a given file with a given command to assist users with the documentation look up. Like debian/foo.user.service is related to dh_installsystemduser, so man dh_installsystemduser is a natural start for documentation.
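The typo detection in the first bullet can be approximated with a close-match search over the known stems. A minimal sketch using the standard library's difflib (debputy's real implementation relies on python3-levenshtein and may use different thresholds; suggest_stem is a hypothetical helper):

```python
import difflib

def suggest_stem(candidate, known_stems, cutoff=0.8):
    """Return the closest known stem to candidate, or None if nothing
    is similar enough. The 0.8 cutoff is an assumed value chosen so that
    near-typos match but unrelated stems (e.g. docs vs. dirs) do not."""
    matches = difflib.get_close_matches(candidate, known_stems, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest_stem("intsall", ["install", "dirs", "docs"]))  # → install
print(suggest_stem("docs", ["dirs"]))  # → None (too dissimilar at this cutoff)
```

The cutoff is doing the work here: set it too low and debian/docs would be "corrected" to debian/dirs, which is exactly the multiple-match hazard the note above describes.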
The dh_assistant command uses the same logic as dh to identify the active add-ons and loads them. From there, it scans all commands mentioned in the sequence for the PROMISE: DH NOOP WITHOUT ... hint and a new INTROSPECTABLE: CONFIG-FILES ... hint. When these hints reference a packaging file (as an example, via pkgfile(foo)), then dh_assistant records that as a known packaging file for that helper. Additionally, debhelper now also tracks commands that were removed from the sequence. Several of the dh_assistant subcommands now use this to enrich their (JSON) output with notes about these commands being known but not active.
- Its own plugins can provide packager provided files. These are only relevant if the package is using debputy.
- It is also possible to provide a debputy plugin that identifies packaging files (either static or named ones). Though in practice, we probably do not want people to roll their own debputy plugin for this purpose, since the detection only works if the plugin is installed. I have used this mechanism to have debhelper provide a debhelper-documentation plugin to enrich the auto-detected data, and we can assume most people interested in this feature would have debhelper installed.
- It asks dh_assistant list-guessed-dh-config-files for config files, which is covered below.
$ apt satisfy 'dh-debputy (>= 0.1.43~), debhelper (>= 13.16~), python3-lsprotocol, python3-levenshtein'
# For demo purposes, pull two known repos (feel free to use your own packages here)
$ git clone https://salsa.debian.org/debian/debhelper.git -b debian/13.16
$ git clone https://salsa.debian.org/debian/debputy.git -b debian/0.1.43
$ cd debhelper
$ mv debian/debhelper.install debian/debhelper.intsall
$ debputy lint
warning: File: debian/debhelper.intsall:1:0:1:0: The file "debian/debhelper.intsall" is likely a typo of "debian/debhelper.install"
File-level diagnostic
$ mv debian/debhelper.intsall debian/debhleper.install
$ debputy lint
warning: File: debian/debhleper.install:1:0:1:0: Possible typo in "debian/debhleper.install". Consider renaming the file to "debian/debhelper.debhleper.install" or "debian/debhelper.install" if it is intended for debhelper
File-level diagnostic
$ cd ../debputy
$ touch debian/install
$ debputy lint --no-warn-about-check-manifest
warning: File: debian/install:1:0:1:0: The file debian/install is related to a command that is not active in the dh sequence with the current addons
File-level diagnostic
Next.