Jonathan Dowland: Biosphere
- Bluemars lives on, at echoes of bluemars
Series: Galactic Empire #2
Publisher: Fawcett Crest
Copyright: 1950, 1951
Printing: June 1972
Format: Mass market
Pages: 192
There was no way of telling when the threshold would be reached. Perhaps not for hours, and perhaps the next moment. Biron remained standing helplessly, flashlight held loosely in his damp hands. Half an hour before, the visiphone had awakened him, and he had been at peace then. Now he knew he was going to die. Biron didn't want to die, but he was penned in hopelessly, and there was no place to hide.

Needless to say, Biron doesn't die. Even if your tolerance for pulp melodrama is high, 192 small-print pages of this sort of thing is wearying.

Like a lot of Asimov plots, The Stars, Like Dust has some of the shape of a mystery novel. Biron, with the aid of some newfound companions on Rhodia, learns of a secret rebellion against the Tyranni and attempts to track down its base to join them. There are false leads, disguised identities, clues that are difficult to interpret, and similar classic mystery trappings, all covered with a patina of early 1950s imaginary science. To me, it felt constructed and artificial in ways that made the strings Asimov was pulling obvious. I don't know if someone who likes mystery construction would feel differently about it.

The worst part of the plot thankfully doesn't come up much. We learn early in the story that Biron was on Earth to search for a long-lost document believed to be vital to defeating the Tyranni. The nature of that document is revealed on the final page, so I won't spoil it, but if you try to think of the stupidest possible document someone could have built this plot around, I suspect you will only need one guess. (In Asimov's defense, he blamed Galaxy editor H.L. Gold for persuading him to include this plot, and disavowed it a few years later.)

The Stars, Like Dust is one of the worst books I have ever read. The characters are overwrought, the politics are slapdash and built on broad stereotypes, the romantic subplot is dire and plays out mainly via Biron egregiously manipulating his petulant love interest, and the writing is annoying. Sometimes pulp fiction makes up for those common flaws through larger-than-life feats of daring, sweeping visions of future societies, and ever-escalating stakes. There is little to none of that here. Asimov instead provides tedious political maneuvering among a class of elitist bankers and land owners who consider themselves natural leaders. The only places where the power structures of this future government make sense are where Asimov blatantly steals them from either the Roman Empire or the Doge of Venice.

The one thing this book has going for it (the thing, apart from bloody-minded completionism, that kept me reading) is that the technology is hilariously weird in that way that only 1940s and 1950s science fiction can be. The characters have access to communication via some sort of interstellar telepathy (messages coded to a specific person's "brain waves") and can travel between stars through hyperspace jumps, but each jump is manually calculated by referring to the pilot's (paper!) volumes of the Standard Galactic Ephemeris. Communication between ships (via "etheric radio") requires manually aiming a radio beam at the area in space where one thinks the other ship is. It's an unintentionally entertaining combination of technology that now looks absurdly primitive and science that is so advanced and hand-waved that it's obviously made up. I also have to give Asimov some points for using spherical coordinates.
It's a small thing, but the coordinate systems in most SF novels and TV shows are obviously not fit for purpose. I spent about a month and a half of this year barely reading, and while some of that is because I finally tackled a few projects I'd been putting off for years, a lot of it was because of this book. It was only 192 pages, and I'm still curious about the glue between Asimov's Foundation and Robot series, both of which I devoured as a teenager. But every time I picked it up to finally finish it and start another book, I made it about ten pages and then couldn't take any more. Learn from my error: don't try this at home, or at least give up if the same thing starts happening to you. Followed by The Currents of Space. Rating: 2 out of 10
So far, I have not found any reproducibility issues; everything I tested I was able to get to build bit-for-bit identical with what is in the Debian archive.

That is to say, reproducibility testing permitted Vagrant and Debian to claim with some confidence that builds performed when this vulnerable version of XZ was installed were not interfered with.
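As a minimal sketch of what such verification amounts to, assuming nothing about Vagrant's actual tooling and using illustrative file paths, one can hash a local rebuild and the corresponding binary from the archive and compare:

import hashlib

def sha256sum(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative paths: a package rebuilt locally vs. the archive's copy.
rebuilt = sha256sum("rebuilt/hello_2.10-3_amd64.deb")
archive = sha256sum("archive/hello_2.10-3_amd64.deb")
print("bit-for-bit identical" if rebuilt == archive
      else "differs; inspect with diffoscope")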
Functional package managers (FPMs) and reproducible builds (R-B) are technologies and methodologies that are conceptually very different from the traditional software deployment model, and that have promising properties for software supply chain security. This thesis aims to evaluate the impact of FPMs and R-B on the security of the software supply chain and propose improvements to the FPM model to further improve trust in the open source supply chain. (PDF)

Julien's paper poses a number of research questions on how the model of distributions such as GNU Guix and NixOS can be leveraged to further improve the safety of the software supply chain, and so on.
One issue's severity was also bumped from normal to a new level of wishlist. In addition, 28 reviews of Debian packages were added, 38 were updated and 23 were removed this month, adding to our ever-growing knowledge about identified issues. As part of this effort, a number of issue types were updated, including Chris Lamb adding a new ocaml_include_directories toolchain issue [ ] and James Addison adding a new filesystem_order_in_java_jar_manifest_mf_include_resource issue [ ] and updating the random_uuid_in_notebooks_generated_by_nbsphinx issue to reference a relevant discussion thread [ ].
In addition, Roland Clobus posted his 24th status update of reproducible Debian ISO images. Roland highlights that the images for Debian unstable often cannot be generated due to changes in that distribution related to the 64-bit time_t transition.
Lastly, Bernhard M. Wiedemann posted another monthly update for his reproducibility work in openSUSE.
A question also came up about a case where no buildinfo file was present; Arnout Engelen responded with some details.
There is also a diff-zip-meta.py tool to expose extra timestamps embedded in .zip and .apk metadata.
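As a rough illustration of where some of this metadata lives, Python's standard zipfile module can list each entry's basic stored timestamp; the extra, less visible timestamps that diff-zip-meta.py targets sit in extension fields that this sketch does not parse, and the path is hypothetical:

import zipfile

# Print the per-entry modification timestamps recorded in the zip
# central directory (.apk files are ordinary zip archives).
with zipfile.ZipFile("example.apk") as archive:
    for info in archive.infolist():
        # info.date_time is (year, month, day, hour, minute, second)
        print(info.filename, info.date_time)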
Pol Dellaiera added a CITATION.cff file. Pol also added a substantial new section to the buy-in page documenting the role of Software Bills of Materials (SBOMs) and ephemeral development environments. [ ][ ]
amd64 virtual machines. [ ][ ][ ]
Python's set data structure is also affected by the PYTHONHASHSEED functionality; a quick demonstration follows. [ ]
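A minimal demonstration of that behaviour, independent of diffoscope itself:

# Run twice with different seeds, e.g.:
#   PYTHONHASHSEED=1 python3 demo.py
#   PYTHONHASHSEED=2 python3 demo.py
# String hashing is randomized per interpreter run, so the iteration
# order of a set (and any output derived from it) can differ.
items = {"alpha", "beta", "gamma", "delta"}
print(list(items))    # order may vary between runs
print(sorted(items))  # sorting restores a deterministic order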
Chris Lamb uploaded versions 259, 260 and 261 to Debian and made the following additional changes:
zipdetails tool from the Perl distribution. Thanks to Fay Stegerman and Larry Doolittle et al. for the pointer and thread about this tool. [ ]
File.recognizes so we actually perform the filename check for GNU R data files. [ ]
.rdb file without an equivalent .rdx file. (#1066991)
.pyc file with an empty one. [ ]
.epub tests after supporting the new zipdetails tool. [ ]
test_zip.py. [ ]
zipfile module changed to detect potentially insecure overlapping entries within .zip files. (#362)
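For illustration, a hedged sketch of how such overlapping entries can surface through the zipfile module; which Python versions perform this check, and the exact exception text, are not specified in the original item:

import zipfile

def report_overlaps(path: str) -> None:
    """Try to open every member; recent CPython raises BadZipFile for
    entries whose data overlaps another entry (a zip-bomb pattern)."""
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            try:
                with archive.open(name):
                    pass
            except zipfile.BadZipFile as exc:
                print(f"{name}: {exc}")

report_overlaps("suspect.zip")  # illustrative path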
Chris Lamb also updated the trydiffoscope command line client, dropping a build-dependency on the deprecated python3-distutils package to fix Debian bug #1065988 [ ], taking a moment to also refresh the packaging to the latest Debian standards [ ]. Finally, Vagrant Cascadian submitted an update for diffoscope version 260 in GNU Guix. [ ]
helm (SSL-related build failure)
java-21-openjdk (parallelism)
libressl (SSL-related build failure)
nfdump (date issue)
python-django-q (avoid stuck build)
python-smart-open (fails to build on single-CPU machines)
python-stdnum (fails to build in 2039)
python-yarl (regression)
qemu (build failure)
rabbitmq-java-client (with Fridrich Strba; Maven timestamp issue)
rmw (build fails in 2038)
warewulf (with Egbert Eich; cpio modification time and inode issue)
wxWidgets (fails to build in 2038)

Patches were also applied for python-quantities, gnome-maps, tox, q2cli, mpl-sphinx-theme, woof-doom, bochs, storm-lang, librsvg, gretl, postfix, node-function-bind, python-pysaml2, golang-github-stvp-tempredis, matplotlib, pathos, rdflib, xonsh and maven-bundle-plugin (this patch was then uploaded by Mattia Rizzollo), as well as geany (toolchain-related issue for glfw).

One recurring issue involved tests that only run in the %check section, thus failing when built with the --no-checks option. Only half of all openSUSE packages were tested so far, but a large number of bugs were filed, including ones against caddy, exiv2, gnome-disk-utility, grisbi, gsl, itinerary, kosmindoormap, libQuotient, med-tools, plasma6-disks, pspp, python-pypuppetdb, python-urlextract, rsync, vagrant-libvirt and xsimd.
Similarly, Jean-Pierre De Jesus DIAZ employed reproducible builds techniques in order to test a proposed refactor of the ath9k-htc-firmware package. As the change produced bit-for-bit identical binaries to the previously shipped pre-built binaries:

I don't have the hardware to test this firmware, but the build produces the same hashes for the firmware so it's safe to say that the firmware should keep working.
armhf again. [ ][ ]
i386 architecture queue. [ ]
stats_buildinfo.png graph once per day. [ ][ ]
systemctl with new systemd-based services. [ ]
armhf and i386 continuous integration tests in order to get some stability back. [ ]
deb.debian.org CDN everywhere. [ ]
zst to the list of packages which are false-positive diskspace issues. [ ]
Bot in the userAgent for Git. (Re: #929013). [ ]
tmpfs size on our OUSL nodes. [ ]
reproducible_build service. [ ][ ]
OOMPolicy=continue and OOMScoreAdjust=-1000 for both the Jenkins and the reproducible_build service. [ ]
systemd slice to group all relevant services. [ ][ ]
shellcheck tool. [ ]
systemd-run to handle diffoscope's exit codes specially. [ ]
pgrep tool over grepping the output of ps. [ ]
i386 and armhf architecture builders. [ ][ ]
armhf architecture due to the time_t transition. [ ]
i386 & armhf workers. [ ][ ][ ]
pbuilder updates in the unstable distribution, but only on the armhf architecture. [ ]
systemd service operates. [ ][ ]
powercycle_x86_nodes.py script to use the new IONOS API and its new Python bindings. [ ]
stunnel tool anymore; it shouldn't be needed by anything anymore. [ ]
arm64 architecture host keys. [ ]
the hyphen (-) in a variable in order to allow for tags in openQA. [ ]

You can find us on #reproducible-builds on irc.oftc.net, or via the mailing list:
rb-general@lists.reproducible-builds.org
Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (arXiv, full PDF)
[The] principle of reusability [ ] makes it harder to reproduce projects' build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.

The abstract continues with the claim that: Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision. (arXiv, full PDF)
This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (HAL Portal, full PDF)
fedora-repro-build, which attempts to reproduce an existing package within a koji build environment. Although the project's README file lists a number of fields that will always or almost always vary, and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.
Versions 256, 257 and 258 were uploaded to Debian, with the following additional changes:
gpg's use-embedded-filenames. Many thanks to Daniel Kahn Gillmor dkg@debian.org for reporting this issue and providing feedback. [ ][ ]
struct.unpack-related errors when parsing Python .pyc files. (#1064973). [ ]
rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. [ ]
pytest 8.0. [ ]
7zip package (over p7zip-full) after a Debian package transition. (#1063559). [ ]
test_zip black clean. [ ]
diff(1) correctly [ ][ ], thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to version 255, 256, and 258, and updated trydiffoscope to 67.0.6.
README.rst to match. [ ][ ]
--vary=build_path.path option. [ ][ ][ ][ ]
SOURCE_DATE_EPOCH page. [ ]
SOURCE_DATE_EPOCH documentation re. datetime.datetime.fromtimestamp (see the sketch after this list). Thanks, James Addison. [ ]
/usr/bin/du --apparent-size in the Jenkins shell monitor. [ ]
arm64 nodes. [ ]
/proc/$pid/oom_score_adj to -1000 if it has not already been done. [ ]
opemwrt-target-tegra and jtx task to the list of zombie jobs. [ ][ ]
armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [ ][ ] [ ][ ]

Elsewhere, the tegra target was replaced with mpc85xx [ ], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out-of-space issues on a tmpfs-backed /tmp [ ] and Zheng Junjie added a link to the GNU Guix tests [ ].
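For background on the fromtimestamp point above, a small sketch (mine, not the documentation patch itself) of why the naive call is a reproducibility hazard:

import datetime
import os

epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))

# Naive fromtimestamp() converts using the builder's local timezone,
# so two machines can embed different datestamps for the same epoch.
# Passing an explicit UTC timezone keeps the result reproducible.
local_time = datetime.datetime.fromtimestamp(epoch)
utc_time = datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)
print(local_time.isoformat(), utc_time.isoformat())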
Lastly, node maintenance was performed by Holger Levsen [ ][ ][ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ].
gimagereader (date)
grass (date-related issue)
grub2 (filesystem ordering issue)
latex2html (drop a non-deterministic log)
mhvtl (tar)
obs (build-tool issue)
ollama (GZip embedding the modification time)
presenterm (filesystem-ordering issue)
qt6-quick3d (parallelism)

flask-limiter
python-parsl-doc (disable dynamic argument evaluation by the Sphinx autodoc extension)
python3-pytest-repeat (remove entry_points.txt creation that varied by shell)
python3-selinux (remove packaged direct_url.json file that embeds build path)
python3-sepolicy (remove packaged direct_url.json file that embeds build path)
pyswarms
python-x2go
snapd (fix timestamp header in packaged manual page)
zzzeeksphinx (existing RB patch forwarded and merged (with modifications))

You can find us on #reproducible-builds on irc.oftc.net, or via the mailing list:
rb-general@lists.reproducible-builds.org
extract-source job, used to produce a debianized source tree of the project. This job was introduced to make it possible to build the projects on different architectures in the subsequent build jobs. However, that extract-source approach is sub-optimal: not only does it increase the execution time of the pipeline by some minutes, but projects whose source tree is too large are also unable to use the pipeline. The debianized source tree is passed as an artifact to the build jobs, and for those large projects, the size of their source tree exceeds Salsa's limits. This specific issue is documented as issue #195, and the proposed solution is to get rid of the extract-source job, relying on sbuild in the build job itself (see issue #296).
Switching to sbuild would also help to improve the build source job, solving issues such as #187 and #298.

The current work-in-progress is very preliminary, but it has already been possible to run the build (amd64), build-i386 and build-source jobs using sbuild with the unshare mode. The image on the right shows a pipeline that builds grep. All the test jobs use the artifacts of the new build job. There is a lot of remaining work, mainly making the integration with ccache work. This change could break some things; it will also be important to test how the new pipeline works with complex projects.
Also, thanks to Emmanuel Arias, we are proposing a Google Summer of Code 2024 project to improve Salsa CI. As part of the ongoing work in preparation for the GSoC 2024 project, Santiago has proposed a merge request to make it more efficient for contributors to test their changes on the Salsa CI pipeline.
debootstrap. Notably missing is glibc, which turns out to be harder than anticipated to handle via dumat, because it has Conflicts between different architectures, which dumat does not analyze. Patches for diversion mitigations have been updated in a way that no longer exhibits any loss.
The main change here is that packages which are being diverted now support the
diverting packages in transitioning their diversions. We also supported a few
packages with non-trivial changes such as
netplan.io. dumat has been enhanced to
better support derivatives such as Ubuntu.
-for-host support to gcc-defaults. dput-ng enabling dcut migrate and merging two MRs of Ben Hutchings.

Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies; the list goes on. In short, it was bad. Quite bad.

The attack pivoted on PyTorch's use of self-hosted runners, as well as on submitting a pull request addressing a trivial typo in the project's README file, to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.
archlinux-userland-fs-cmp: the tool is supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt. Crucially, however, at no point is any file from the mounted filesystem eval'd or otherwise executed; parsers are written in a memory-safe language.

More information about the tool can be found in their announcement message, as well as on the tool's homepage. A GIF of the tool in action is also available.
Problems with the SOURCE_DATE_EPOCH parsing code?

Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:

I'm not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now. Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.

Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute with further info.
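The snippet under discussion is C, but as a rough Python analogue of the checks being debated (reject non-numeric values, negatives, and values that would overflow a 64-bit time_t; the helper name and exact bounds are my assumptions):

import os

def source_date_epoch(default: int) -> int:
    """Parse SOURCE_DATE_EPOCH defensively."""
    raw = os.environ.get("SOURCE_DATE_EPOCH")
    if raw is None:
        return default
    if not raw.isdigit():  # also rejects "", "-1" and "1e9"
        raise ValueError(f"SOURCE_DATE_EPOCH is not a decimal integer: {raw!r}")
    value = int(raw)
    if value > 2**63 - 1:
        raise ValueError("SOURCE_DATE_EPOCH overflows a 64-bit time_t")
    return value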
The SOURCE_DATE_EPOCH page was also updated to document its interaction with distribution rebuilds. [ ]
Versions 254 and 255 were uploaded to Debian, with the main focus on triaging and/or merging code from other contributors. This included adding support for comparing eXtensible ARchive (.XAR/.PKG) files courtesy of Seth Michael Larson [ ][ ], as well as considerable work from Vekhir to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [ ][ ][ ][ ]. Thanks!
arm64 architecture workers from 24 to 16. [ ]
arm64 nodes when they hit an OOM (out of memory) state. [ ]
real_year variable to 2024 [ ] and bump various copyright years as well [ ].
iptables tool everywhere, else our custom rc.local script fails. [ ]
/srv/workspace/pbuilder directory on boot. [ ]
chroot-installation jobs to a maximum of 4 concurrent runs. [ ][ ]
armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. [ ]

Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. [ ]
cython (nondeterministic path issue)
deluge (issue with modification time of .egg file)
gap-ferret, gap-semigroups & gap-simpcomp (nondeterministic config.log file)
grpc (filesystem ordering issue)
hub (random)
kubernetes1.22 & kubernetes1.23 (sort-related issue)
kubernetes1.24 & kubernetes1.25 (go -trimpath vs random issue)
libjcat (drop test files with random bytes)
luajit (use new d option for deterministic bytecode output)
meson [ ][ ] (sort the results from a Python filesystem call)
python-rjsmin (drop GCC instrumentation artifacts)
qt6-virtualkeyboard+others (parallelism/race bug)
SoapySDR (parallelism-related issue)
systemd (sorting problem)
warewulf (CPIO modification time issue, etc.)
guake ("Schroedinger" file due to race condition)
qhelpgenerator-qt5 (timezone localization; fix also merged upstream for QT6)
sphinx (search index doctitle sorting)

One issue was traced to the mm-common package in Debian; this was quickly fixed, however. [ ]

You can find us on #reproducible-builds on irc.oftc.net, or via the mailing list:
rb-general@lists.reproducible-builds.org
Posts about hz.tools will be tagged #hztools.

The observed phases are computed by taking the cos and sin of the multiplied phase (in the range of 0 to tau), assuming the transmitter is emitting a carrier wave at a static amplitude and all clocks are in perfect sync.
let observed_phases: Vec<Complex> = antennas
    .iter()
    .map(|antenna| {
        // distance from this antenna to the transmitter, reduced to
        // its fractional part in units of wavelengths
        let distance = (antenna - tx).magnitude();
        let distance = distance - (distance as i64 as f64);
        (distance / wavelength) * TAU
    })
    .map(|phase| Complex(phase.cos(), phase.sin()))
    .collect();

let beamformed_phases: Vec<Complex> = ...;
let magnitude = beamformed_phases
    .iter()
    .zip(observed_phases.iter())
    .map(|(beamformed, observed)| observed * beamformed)
    .reduce(|acc, el| acc + el)
    .unwrap()
    .abs();
each (x, y, z) point is placed at (azimuth, elevation, magnitude). The color attached to that point is based on its distance from (0, 0, 0). I opted to use the Life Aquatic table for this one.
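For concreteness, here is a sketch of the spherical-to-Cartesian mapping described above; the angle conventions are my assumption rather than something taken from the post's code:

import math

def to_cartesian(azimuth: float, elevation: float, magnitude: float):
    """Place an (azimuth, elevation, magnitude) sample at an (x, y, z)
    point, with azimuth swept in the x/y plane and elevation rising
    toward +z. Angles are in radians."""
    x = magnitude * math.cos(elevation) * math.cos(azimuth)
    y = magnitude * math.cos(elevation) * math.sin(azimuth)
    z = magnitude * math.sin(elevation)
    return (x, y, z)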
After this process is complete, I have a point cloud of ((x, y, z), (r, g, b)) points. I wrote a small program using kiss3d to render the point cloud using tons of small spheres, and write out the frames to a set of PNGs, which get compiled into a GIF.
Now for the fun part, let's take a look at some radiation patterns! In this first configuration, the antennas share the same y and z axes and are separated by some offset on the x axis. This configuration can sweep 180 degrees (not the full 360), but can't be steered in elevation at all. A short sketch of the steering math follows.
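To make the steering behaviour concrete, here is a textbook sketch (not the post's implementation) of the array factor for such a 1x4 line of antennas; the spacing and conventions are assumptions:

import cmath
import math

WAVELENGTH = 1.0
SPACING = WAVELENGTH / 4  # quarter-wavelength spacing
N = 4                     # 1x4 linear array

def array_factor(steer: float, look: float) -> float:
    """Response magnitude of a uniform linear array: each element is
    fed a phase that cancels the path difference toward `steer`, and
    the contributions are summed toward `look` (radians, azimuth)."""
    total = 0 + 0j
    for n in range(N):
        path = n * SPACING * math.sin(look)    # geometric path offset
        feed = -n * SPACING * math.sin(steer)  # steering phase
        total += cmath.exp(2j * math.pi * (path + feed) / WAVELENGTH)
    return abs(total)

# Since sin(pi - a) == sin(a), a look direction and its mirror give the
# same magnitude: the front-back ambiguity described below.
print(array_factor(0.0, 0.0), array_factor(0.0, math.pi))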
Let's take a look at what this looks like for a well-constructed 1x4 phased array:

And now let's take a look at the renders as we play with the configuration of this array and make sure things look right. Our initial quarter-wavelength spacing is very effective and has some outstanding performance characteristics. Let's check to see that everything looks right as a first test.

Nice. Looks perfect. When pointing forward at (0, 0), we'd expect to see a torus, which we do. As we sweep between 0 and 360, astute observers will notice the pattern is mirrored along the axis of the antennas: when the beam is facing forward to 0 degrees, it'll also receive at 180 degrees just as strongly. There's a small sidelobe that forms when it's configured along the array, but it also becomes the most directional, and the sidelobes remain fairly small.
In the second configuration, the antennas share the same z axis, each separated by a fixed offset in either the x or y axis from its neighbor, forming a square when viewed along the x/y axis.

Let's take a look at what this looks like for a well-constructed 2x2 phased array:

Let's do the same as above and take a look at the renders as we play with the configuration of this array and see what things look like. This configuration should suppress the sidelobes and give us good performance, and even give us some amount of control in elevation while we're at it.

Sweet. Heck yeah. The array is quite directional in the configured direction, and can even sweep a little bit in elevation, a definite improvement over the 1x4 above.
The folks from the Reproducibility Project have come a long way since they started working on it 10 years ago, and we believe it's time for the next step in Debian. Several weeks ago, we enabled a migration policy in our migration software that checks for regression in reproducibility. At this moment, that is presented as just for info, but we intend to change that to delays in the not-so-distant future. We eventually want all packages to be reproducible. To stimulate maintainers to make their packages reproducible now, we'll soon start to apply a bounty [speedup] for reproducible builds, like we've done with passing autopkgtests for years. We'll reduce the bounty for successful autopkgtests at that moment in time.
What we have done, explains Sollins, is to develop, prove correct, and demonstrate the viability of an approach that allows the [software] maintainers to remain anonymous. Preserving anonymity is obviously important, given that almost everyone, software developers included, values their confidentiality. This new approach, Sollins adds, simultaneously allows [software] users to have confidence that the maintainers are, in fact, legitimate maintainers and, furthermore, that the code being downloaded is, in fact, the correct code of that maintainer. [ ]

The corresponding paper is published on the arXiv preprint server in various formats, and the announcement has also been covered in MIT News.
I noticed that a small but fixed subset of [Git] repositories are getting backed up despite having no changes made. That is odd, because I would think that repeated bundling of the same repository state should create the exact same bundle. However, [it] turns out that for some repositories, bundling is nondeterministic.

Paul goes on to describe his solution, which involves forcing git to be single-threaded, which makes the output deterministic. The article was also discussed on Hacker News.
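A hedged sketch of that workaround: pack.threads is a standard git configuration knob, though whether setting it to 1 suffices for every repository is the article's claim rather than something verified here.

import hashlib
import subprocess

def bundle_digest(repo: str, out: str) -> str:
    """Create a bundle with single-threaded packing and hash it."""
    subprocess.run(
        ["git", "-C", repo, "-c", "pack.threads=1",
         "bundle", "create", out, "--all"],
        check=True,
    )
    with open(out, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

# Two runs over an unchanged repository should now match.
print(bundle_digest("/path/to/repo", "/tmp/a.bundle") ==
      bundle_digest("/path/to/repo", "/tmp/b.bundle"))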
libxslt now deterministic

libxslt is the XSLT C library developed for the GNOME project, where XSLT itself is an XML language to define transformations for XML files. This month, it was revealed that the result of the generate-id() XSLT function is now deterministic across multiple transformations, fixing many issues with reproducible builds. As the Git commit by Nick Wellnhofer describes:
Rework the generate-id() function to return deterministic values. We use
a simple incrementing counter and store ids in the 'psvi' member of
nodes which was freed up by previous commits. The presence of an id is
indicated by a new "source node" flag.
This fixes long-standing problems with reproducible builds, see
https://bugzilla.gnome.org/show_bug.cgi?id=751621
This also hardens security, as the old implementation leaked the
difference between a heap and a global pointer, see
https://bugs.chromium.org/p/chromium/issues/detail?id=1356211
The old implementation could also generate the same id for dynamically
created nodes which happened to reuse the same memory. Ids for namespace
nodes were completely broken. They now use the id of the parent element
together with the hex-encoded namespace prefix.
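As a loose analogy in Python (not the libxslt C implementation), the fix amounts to handing out ids from a counter in first-seen order instead of deriving them from memory addresses, which vary from run to run:

import itertools

class IdAllocator:
    """Assign ids from a simple incrementing counter, so an id depends
    only on the order nodes are first seen, never on their addresses
    (compare Python's id(), which is not stable across runs)."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._ids = {}

    def generate_id(self, node) -> str:
        if node not in self._ids:
            self._ids[node] = next(self._counter)
        return f"id{self._ids[node]}"

alloc = IdAllocator()
print(alloc.generate_id("a"), alloc.generate_id("b"), alloc.generate_id("a"))
# -> id1 id2 id1, stable across runs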
Elsewhere on the website, the generate-draft script was updated to not blow up if the input files have been corrupted today or even in the past [ ], Holger Levsen updated the Hamburg 2023 summit page to add a link to a farewell post [ ] and to add a picture of a Post-It note [ ], and Pol Dellaiera updated the paragraph about tar and the --clamp-mtime flag [ ].
On our mailing list this month, Bernhard M. Wiedemann posted an interesting summary on some of the reasons why packages are still not reproducible in 2023.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including processing objdump symbol comment filter inputs as Python bytes (and not str) instances [ ], and Vagrant Cascadian extended diffoscope support for GNU Guix [ ] and updated the version in that distribution to version 253 [ ].
deep-dive into 6 tools and the accuracy of the SBOMs they produce for complex open-source Java projects. Our novel insights reveal some hard challenges regarding the accurate production and usage of software bills of materials.

The paper is available on arXiv.
crack [ ] (#1021521 & #1021522)
dustmite [ ] (#1020878 & #1020879)
edid-decode [ ] (#1020877)
gentoo [ ] (#1024284)
haskell98-report [ ] (#1024007)
infinipath-psm [ ] (#990862)
lcm [ ] (#1024286)
libapache-mod-evasive [ ] (#1020800)
libccrtp [ ] (#860470)
libinput [ ] (#995809)
lirc [ ] (#979019, #979023 & #979024)
mm-common [ ] (#977177)
mpl-sphinx-theme [ ] (#1005826)
psi [ ] (#1017473)
python-parse-type [ ] (#1002671)
ruby-tioga [ ] (#1005727)
ucspi-proxy [ ] (#1024125)
ypserv [ ] (#983138)

.buildinfo files in Debian trixie, specifically lorene (0.0.0~cvs20161116+dfsg-1.1), maria (1.3.5-4.2) and ruby-rinku (1.7.3-2.1).
create-meta-pkgs tool. [ ][ ]
python3-setuptools and swig packages, which are now needed to build OpenWrt. [ ]
pkg-config needed to build Coreboot artifacts. [ ]
fakeroot tool is implicitly required but not automatically installed. [ ]
vmlinuz file. [ ]
In addition, freebsd-jenkins.debian.net has been updated to FreeBSD 14.0. [ ]

apr (hostname issue)
dune (parallelism)
epy (time-based .pyc issue)
fpc (Year 2038)
gap (date)
gh (FTBFS in 2024)
kubernetes (fixed random build path)
libgda (date)
libguestfs (tar)
metamail (date)
mpi-selector (date)
neovim (randomness in Lua)
nml (time-based .pyc)
pommed (parallelism)
procmail (benchmarking)
pysnmp (FTBFS in 2038)
python-efl (drop Sphinx doctrees)
python-pyface (time)
python-pytest-salt-factories (time-based .pyc issue)
python-quimb (fails to build on single-CPU systems)
python-rdflib (random)
python-yarl (random path)
qt6-webengine (parallelism issue in documentation)
texlive (Gzip modification time issue)
waf (time-based .pyc)
warewulf (CPIO modification time and inode issue)
xemacs (toolchain hostname)

Patches were also applied for python-aiostream, openpyxl, python-multipletau, wxmplot, stunnel4 and qttools-opensource-src.

You can find us on #reproducible-builds on irc.oftc.net, or via the mailing list:
rb-general@lists.reproducible-builds.org
drm-fixes-<date>.

2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.
[drm] Display Core v..., it's not likely a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.

[drm] Display Core v3.2.241 initialized on DCN 2.1
[drm] Display Core v3.2.237 initialized on DCN 3.0.1

drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue.

7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation. To give an example, it was useful when I was updating DCN3 color mapping to correctly use their new post-blending color capabilities, such as:
Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed and whether the implementation from a previous family doesn't fit well with the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems.

This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect in DCN3 hardware, but working correctly for the DCN2 family. I solved the issue in two steps, thanks to community feedback and validation:
drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
/*************************************************
* Resource + asic cap harcoding *
*************************************************/
pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
dc->caps.max_downscale_ratio = 600;
dc->caps.i2c_speed_in_khz = 100;
dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
dc->caps.max_cursor_size = 256;
dc->caps.min_horizontal_blanking_period = 80;
dc->caps.dmdata_alloc_size = 2048;
dc->caps.max_slave_planes = 2;
dc->caps.max_slave_yuv_planes = 2;
dc->caps.max_slave_rgb_planes = 2;
dc->caps.is_apu = true;
dc->caps.post_blend_color_processing = true;
dc->caps.force_dp_tps4_for_cp2520 = true;
dc->caps.extended_aux_timeout_support = true;
dc->caps.dmcub_support = true;
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1;
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1;
dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
dc->caps.color.dpp.dgam_rom_caps.pq = 1;
dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
dc->caps.color.dpp.post_csc = 1;
dc->caps.color.dpp.gamma_corr = 1;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1;
dc->caps.color.dpp.ogam_ram = 1;
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.dp_hdmi21_pcon_support = true;
/* read VBIOS LTTPR caps */
if (ctx->dc_bios->funcs->get_lttpr_caps) {
    enum bp_result bp_query_result;
    uint8_t is_vbios_lttpr_enable = 0;

    bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
    dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}

if (ctx->dc_bios->funcs->get_lttpr_interop) {
    enum bp_result bp_query_result;
    uint8_t is_vbios_interop_enabled = 0;

    bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
    dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
}
Employ git log and git blame to identify commits targeting the code section you're interested in.

10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221 that determines a display release; it's useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can consider that each DC_VER takes around one week to be bumped. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one Tested-by: Daniel Wheeler:
sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything front
 * end related. Any time viewport dimensions, recout dimensions, scaling ratios or
 * gamma need to be adjusted or pipe needs to be turned on (or disconnected) we do
 * a full update. This cannot be done at ISR level and should be a rare event.
 * Unless someone is stress testing mpo enter/exit, playing with colour or adjusting
 * underscan we don't expect to see this call at all.
 */

enum surface_update_type {
    UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
    UPDATE_TYPE_MED, /* ISR safe, most of programming needed, no bw/clk change*/
    UPDATE_TYPE_FULL, /* may need to shuffle resources */
};
sahilister
There are various ways in which the installation could be done; in our setup, here are the pre-requisites. A compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml here.

services:
nextcloud-aio-mastercontainer:
image: nextcloud/all-in-one:latest
init: true
restart: always
container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
- /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
ports:
- 8080:8080
environment: # Is needed when using any of the options below
# - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
- APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
- APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
# - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
- NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
# - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
# - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
# - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
# - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
# - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (useful e.g. for LDAPS). See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
# - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
# - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
# - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
# - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
# - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
# - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
# - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
# networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# - SKIP_DOMAIN_VALIDATION=true
# # Uncomment the following line when using SELinux
# security_opt: ["label:disable"]
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, for the possibility of me using them in the future. If you want a smaller, cleaner compose file without the extra options, you can refer to this one:

services:
nextcloud-aio-mastercontainer:
image: nextcloud/all-in-one:latest
init: true
restart: always
container_name: nextcloud-aio-mastercontainer
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- 8080:8080
environment:
- APACHE_PORT=32323
- APACHE_IP_BINDING=127.0.0.1
- NEXTCLOUD_DATADIR=/opt/docker/nextcloud
volumes:
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer
I am using a separate directory to store Nextcloud data. As per the Nextcloud documentation, you should be using a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead. We also use a custom port on which Nextcloud listens for operations: we have set it to 32323 above, but you can use any port in the permissible port range. The 8080 port is used to set up the AIO management interface. Both 8080 and the APACHE_PORT do not need to be open on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using:

$ docker-compose -f compose.yml up -d
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    #listen [::]:80; # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2; # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl; # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6
    # http2 on; # uncomment to enable HTTP/2 - supported on nginx v1.25.1+
    # http3 on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot

    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:

    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for ocsp stapling: e.g. use 94.140.15.15 for adguard / 1.1.1.1 for cloudflared or 8.8.8.8 for google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need to have valid SSL certificates for your domain for this configuration to work. Steps for getting valid SSL certificates for your domain are beyond the scope of this article; you can do a web search on getting SSL certificates with Let's Encrypt and you will find several resources on that, or I may write a blog post on it separately in the future. Once your configuration for nginx is done, you can test the nginx configuration using:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with $ sudo nginx -s reload
The AIO management interface can now be reached at domain.tld:8080; however, we do not want to open the 8080 port publicly to do this, so to complete the setup, here is a neat hack from sahilister:

ssh -L 8080:127.0.0.1:8080 username@<server-ip>

With this you can bind port 8080 of your server to port 8080 of your localhost, using local port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. So, once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080.

You will get an error here because you are trying to access a page on localhost over HTTPS. You can click on advanced and then continue to proceed to the next page; your data is encrypted over SSH for this session, as we are binding the port over SSH. Depending on your choice of browser, the above page might look different.

Once you have proceeded, the Nextcloud AIO interface will open and look something like this. It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For the purposes of security, I have masked the passwords with capsicums.

Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and then log in. After login you will be greeted with a screen like this. Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen like this.

Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in. The time zone setup is important because the database will get initialized according to the set time zone; a wrong initialization of the database can leave you in a startup loop for Nextcloud. I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the time zone and selecting any additional features you want, you can click on Download and start the containers.

It will take some time for this process to finish; take a break, look at the farthest object in your room, and take a sip of water. Once the process has finished, you will see a page similar to the following one. Wait patiently for everything to turn green. Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain. The initial login details are auto-generated, as you can see from the above screenshot. Again you will see a password that you need to note down or save to enter the Nextcloud interface. Capsicums will not work as passwords; I have masked the auto-generated passwords using capsicums. Now you can click on the Open your Nextcloud
button or go to your configured domain to access the login screen. You can use the login details from the previous step to log in to the administrator account of your Nextcloud instance. There you have it, your very own cloud!

If you ever want to tear the whole setup down, stop the master container and remove its volume:

docker-compose -f compose.yml down -v

The above command will also remove the volume associated with the master container. The remaining containers can then be stopped and removed:

docker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
docker volume rm <volume-name>
docker network rm nextcloud-aio
.gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that script would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to the current bootstrap.py, from September 2012.
Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.

A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016.
From gecko-dev to git-cinnabar
Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple of years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands.
Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).
Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence.
Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).
One Firefox repository to rule them all
For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them.
And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though.
I can only suppose that the way Mercurial branches work was not deemed practical. It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.
In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary.
7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository.
Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example: if you aren't careful and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. on one from beta, or some other branch, depending on which one was last updated.
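To make the footgun concrete, the workaround is a one-liner (a minimal sketch; central is the bookmark mozilla-unified uses for the mozilla-central head):
$ hg clone https://hg.mozilla.org/mozilla-unified && cd mozilla-unified
$ hg update central  # explicitly move to the latest mozilla-central changeset, whatever head the clone left you on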
Hosting is simple, right?
Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones).
And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.
Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, and it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations: it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication.
Now, you'd think mozilla-unified would solve these issues, and it would... only to some extent, because it wouldn't cover the user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered Forks. So you'd want a mega global datastore shared by all repositories, with each repository only exposing what it really contains. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least it shows a warning now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).
Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.
"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way.
And these endpoints are regularly abused, causing extra load on your servers (yes, plural, because of course a single server won't handle the load for the number of users of your big repositories). And because your endpoints are abused, you have to close some of them. And I'm not even mentioning the Try repository with its tens of thousands of heads, which brings its own set of problems (and it would have even more heads if we didn't fake-merge them once in a while).
Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016.
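For reference, the Git feature in question is bundle URIs, which shipped in Git 2.38; a hedged sketch of what client-side use looks like (the URLs here are made up):
$ git clone --bundle-uri=https://example.com/bundles/mozilla-unified.bundle https://example.com/mozilla-unified.git
The bulk of the history comes from the pre-generated bundle, and only the remainder since the bundle was created has to be negotiated with the actual server.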
The growing difficulty of maintaining the status quo
Slowly but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, the resources allocated to such tooling have been spread thin.
Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will sweep the problem under the rug, but the underlying problem will still exist (although the latest version of Mercurial seems to have improved things).
On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option.
But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one.
Clone times with git-cinnabar have also started to get a little out of hand in the past few years, but this was mitigated in a similar manner as the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago; I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).
I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem.
Taking the leap
I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective.
Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.
It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21 years-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.
You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial went their own way, it's hard to ignore that the landscape has evolved.
And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).
Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are, and the Open Source world is now predominantly using Git. I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git).
Heck, even Microsoft moved to Git!
With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.
But... GitHub?
I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of the existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part too, especially with limited resources, and with the mixed experience that hosting both Mercurial and Git has been so far.
After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).
Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that).
I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, and literal tens of millions of repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).
"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion.
At least, right now, GitHub PRs are not a viable option for Mozilla, given their lack of support for security-related PRs and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.
So, what's next?
The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to Git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not.
While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now:
First, install git-cinnabar where mach bootstrap would install it:
$ mkdir -p ~/.mozbuild/git-cinnabar
$ cd ~/.mozbuild/git-cinnabar
$ curl -sOL https://raw.githubusercontent.com/glandium/git-cinnabar/master/download.py
$ python3 download.py && rm download.py
Then add git-cinnabar to your PATH. Make sure to also set that wherever you keep your PATH up-to-date (.bashrc or wherever else):
$ PATH=$PATH:$HOME/.mozbuild/git-cinnabar
Next, from the top level of your existing Mercurial working copy, initialize a Git repository in place and point it at mozilla-unified (fetching gecko-dev first seeds the conversion with its pre-converted history):
$ git init
$ git remote add origin https://github.com/mozilla/gecko-dev
$ git remote update origin
$ git remote set-url origin hg::https://hg.mozilla.org/mozilla-unified
$ git config --local remote.origin.cinnabar-refs bookmarks
$ git remote update origin --prune
Then convert the local Mercurial heads:
$ git -c cinnabar.refs=heads fetch hg::$PWD refs/heads/default/*:refs/heads/hg/*
This will create a bunch of hg/<sha1> local branches, not all relevant to you (some come from old branches on mozilla-central). Note that if you're using Mercurial MQ, this will not pull your queues, as they don't exist as heads in the Mercurial repo. You'd need to apply your queues one by one and run the command above for each of them. Do the same for bookmarks:
$ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/*:refs/heads/hg/*
This will create hg/<bookmark_name> branches.
Now reset your working tree to the Git commit corresponding to your current Mercurial changeset:
$ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))
This will take a little moment because Git is going to scan all the files in the tree for the first time. On the other hand, it won't touch their content or timestamps, so if you had a build around, it will still be valid, and mach build won't rebuild anything it doesn't have to.
If you need branches for other Mercurial changesets, you can create them the same way:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here: it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to run ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post.
If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them.
Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow switching from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).
What about git-cinnabar?
With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish, which raises the legitimate question of whether it will still be maintained.
I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches).
Git-cinnabar started as a Python script; it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So day-to-day use with Mercurial is not my sole motivation to keep developing it. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there, and the speed is not all that bad, although I know it could be better.
So, no, I don't expect git-cinnabar to die along with Mercurial use at Mozilla, but I can't really promise anything either.
Final words
That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)
So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.
To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire.
And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and how, had we waited a little longer, we would have picked some new horse yet to come. But hey, that's the tech cycle for you.
lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded-world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world.
From Argentina, we also had Emanuel (eamanu
) crossing all the way from La
Rioja.
I spent most of our first workday getting my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat, as people who know the particularities of my much-loved ARM-based laptop will understand), and running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months.
I am happy to say we are also finally building Raspberry images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (and which will probably soon be dropped, if for nothing else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn't get photographic evidence of them (nor the permission to post it).
The nice house Santiago got for us was very well equipped for a
miniDebConf. There were a couple of rounds of pool played by those that enjoyed
it (I was very happy just to stand around, take some photos and enjoy the
atmosphere and the conversation).
Today (Saturday) is the last full-house day of miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long, important conversation about a discussion we are about to present on debian-vote@lists.debian.org.
It has been a great couple of days! Sadly, it's coming to an end. But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, food and healthy enjoyment!
[ ] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically "can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?" Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code, which Russ requests and subsequently dissects in great but accessible detail.
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.
nixos-minimal
image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.
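If you want to spot-check reproducibility of a single derivation yourself, Nix has this built in; a minimal sketch, assuming you are in a nixpkgs checkout:
$ nix-build -A hello          # build, or substitute, the package once
$ nix-build -A hello --check  # rebuild it locally and fail if the output differs from the store path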
arm64 hardware from Codethink
Long-time sponsor of the project, Codethink, have generously replaced our old Moonshot-Slides hardware, which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.
ext4 filesystem images. [ ]
SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues.
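As a refresher, SOURCE_DATE_EPOCH is simply an environment variable that participating build tools use in place of the current time for embedded timestamps; a common way of wiring it up in a build script, following the convention the specification suggests, is:
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)  # clamp embedded timestamps to the last commit's date
$ make                                                 # tooling that honors the spec now emits stable timestamps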
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
edje_cc (race condition)
elasticsearch (build failure)
erlang-retest (embedded .zip timestamp)
fdo-client (embeds private keys)
fftw3 (random ordering)
gsoap (date issue)
gutenprint (date)
hub/golang (embeds random build path)
Hyprland (filesystem issue)
kitty (sort-related issue, .tar file embeds modification time)
libpinyin (ASLR)
maildir-utils (date embedded in copyright)
mame (order-related issue)
mingw32-binutils & mingw64-binutils (date)
MooseX (date from perl-MooseX-App)
occt (sorting issue)
openblas (embeds CPU count)
OpenRGB (corruption-related issue)
python-numpy (random file names)
python-pandas (FTBFS)
python-quantities (date)
python3-pyside2 (order)
qemu (date and Sphinx issue)
qpid (sorting problem)
rakudo (filesystem ordering issue)
SLOF (date-related issue)
spack (CPU counting issue)
xemacs-packages (date-related issue)
If file -i returns text/plain, fall back to comparing as a text file (this was originally filed as Debian bug #1053668 by Niels Thykier). [ ] This was then uploaded to Debian (and elsewhere) as version 251.
#debian-reproducible-changes IRC channel. [ ][ ][ ]
systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
schroots. [ ]
arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing.
But that image is small; will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart. After first boot is completed, the first-boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress indication, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further action needed from your side. That is an almost perfect backup/restore experience.
The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before an upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today, as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have.
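For the curious, that command-line tool is the ha CLI you get with the SSH/Terminal addon mentioned below; a hedged sketch of the rollback described above (the version number is illustrative):
$ ha core info                        # show the installed and latest available ha-core versions
$ ha core update --version 2023.10.3  # pin ha-core to a specific release, effectively a downgrade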
What is a must-have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users each with their own access to their own database. A couple more clicks and the DB is running and will be restarted automatically in case of failures.
But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs, with what settings and configs, and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support, ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that allows both SSH connections and a web-based terminal that gives access to the OS itself and also to a command-line interface, for example, to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers.
However, that is only the beginning. Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wireguard for VPN servers, motionEye for CCTV control, and Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - which is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can, and also do anything Python can do. This I will need to explore later.
And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon.
This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together.
So I installed MariaDB, created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordigen bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo. It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export.
Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D
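For anyone following along, that recommended dump is a one-liner; a sketch assuming the MariaDB addon's default hostname core-mariadb and a user and database both named firefly (adjust to whatever you configured in the addon):
$ mysqldump -h core-mariadb -u firefly -p firefly > firefly-backup-$(date +%F).sql  # restorable later with the plain mysql client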
After doing all that work all over again, I needed to make something new so as not to feel like I had wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot.
There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying one out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet, either via direct API access using some port forwarding, secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering the description and amount, choosing "Cash" as the source account, optionally choosing a destination expense account, choosing a category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying.
An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account of amount 5 with description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be exposed to the Internet at all. Really nifty.
So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
Series: | Fall Revolution #3 |
Publisher: | Tor |
Copyright: | 1998 |
Printing: | August 2000 |
ISBN: | 0-8125-6858-3 |
Format: | Mass market |
Pages: | 305 |
Life is a process of breaking down and using other matter, and if need be, other life. Therefore, life is aggression, and successful life is successful aggression. Life is the scum of matter, and people are the scum of life. There is nothing but matter, forces, space and time, which together make power. Nothing matters, except what matters to you. Might makes right, and power makes freedom. You are free to do whatever is in your power, and if you want to survive and thrive you had better do whatever is in your interests. If your interests conflict with those of others, let the others pit their power against yours, everyone for theirselves. If your interests coincide with those of others, let them work together with you, and against the rest. We are what we eat, and we eat everything. All that you really value, and the goodness and truth and beauty of life, have their roots in this apparently barren soil. This is the true knowledge. We had founded our idealism on the most nihilistic implications of science, our socialism on crass self-interest, our peace on our capacity for mutual destruction, and our liberty on determinism. We had replaced morality with convention, bravery with safety, frugality with plenty, philosophy with science, stoicism with anaesthetics and piety with immortality. The universal acid of the true knowledge had burned away a world of words, and exposed a universe of things. Things we could use.
This is certainly something that some people will believe, particularly cynical college students who love political theory, feeling smarter than other people, and calling their pet theories things like "the true knowledge." It is not even remotely believable as the governing philosophy of a solar confederation. The point of government for the average person in human society is to create and enforce predictable mutual rules that one can use as a basis for planning and habits, allowing you to not think about politics all the time. People who adore thinking about politics have great difficulty understanding how important it is to everyone else to have ignorable government. Constantly testing your power against other coalitions is a sport, not a governing philosophy. Given the implication that this testing is through violence or the threat of violence, it beggars belief that any large number of people would tolerate that type of instability for an extended period of time.
Ellen is fully committed to the true knowledge. MacLeod likely is not; I don't think this represents the philosophy of the author. But the primary political conflict in this novel famous for being political science fiction is between the above variation of anarchy and an anarchocapitalist society, neither of which are believable as stable political systems for large numbers of people. This is a bit like seeking out a series because you were told it was about a great clash of European monarchies and discovering it was about a fight between Liberland and Sealand. It becomes hard to take the rest of the book seriously.
I do realize that one point of political science fiction is to play with strange political ideas, similar to how science fiction plays with often-implausible science ideas. But those ideas need some contact with human nature. If you're going to tell me that the key to clawing society back from a world-wide catastrophic descent into chaos is to discard literally every social system used to create predictability and order, you had better be describing aliens, because that's not how humans work.
The rest of the book is better. I am untangling a lot of backstory for the above synopsis, which in the book comes in dribs and drabs, but piecing that together is good fun. The plot is far more straightforward than the previous two books in the series: there is a clear enemy, a clear goal, and Ellen goes from point A to point B in a comprehensible way with enough twists to keep it interesting. The core moral conflict of the book is that Ellen is an anti-AI fanatic to the point that she considers anyone other than non-uploaded humans to be an existential threat. MacLeod gives the reader both reasons to believe Ellen is right and reasons to believe she's wrong, which maintains an interesting moral tension. One thing that MacLeod is very good at is what Bob Shaw called "wee thinky bits." I think my favorite in this book is the computer technology used by the Cassini Division, who have spent a century in close combat with inimical AI capable of infecting any digital computer system with tailored viruses. As a result, their computers are mechanical non-Von-Neumann machines, but mechanical with all the technology of a highly-advanced 24th century civilization with nanometer-scale manufacturing technology. It's a great mental image and a lot of fun to think about. This is the only science fiction novel that I can think of that has a hard-takeoff singularity that nonetheless is successfully resisted and fought to a stand-still by unmodified humanity. Most writers who were interested in the singularity idea treated it as either a near-total transformation leaving only remnants or as something that had to be stopped before it started. MacLeod realizes that there's no reason to believe a post-singularity form of life would be either uniform in intent or free from its own baffling sudden collapses and reversals, which can be exploited by humans. It makes for a much better story. The sociology of this book is difficult to swallow, but the characterization is significantly better than the previous books of the series and the plot is much tighter. I was too annoyed by the political science to fully enjoy it, but that may be partly the fault of my expectations coming in. If you like chewy, idea-filled science fiction with a lot of unexplained world-building that you have to puzzle out as you go, you may enjoy this, although unfortunately I think you need to read at least The Stone Canal first. The ending was a bit unsatisfying, but even that includes some neat science fiction ideas. Followed by The Sky Road, although I understand it is not a straightforward sequel. Rating: 6 out of 10
Welcome to the September 2023 report from the Reproducible Builds project
In these reports, we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
Andreas Herrmann gave a talk at All Systems Go 2023 titled Fast, correct, reproducible builds with Nix and Bazel. Quoting from the talk description:
You will be introduced to Google's open source build system Bazel, and will learn how it provides fast builds, how correctness and reproducibility is relevant, and how Bazel tries to ensure correctness. But, we will also see where Bazel falls short in ensuring correctness and reproducibility. You will [also] learn about the purely functional package manager Nix and how it approaches correctness and build isolation. And we will see where Bazel has an advantage over Nix when it comes to providing fast feedback during development.
Andreas also shows how you can get the best of both worlds and combine Nix and Bazel, too. A video of the talk is available.
file(1)
version 5.45 [ ] and updated some documentation [ ]. In addition, Vagrant Cascadian extended support for GNU Guix [ ][ ] and updated the version in that distribution as well [ ].
BUILDSPEC.md
file. [ ] And Fay Stegerman fixed the builds failing because of a YAML syntax error.
.dsc
file modulo the GPG signature. This month, however, Russ Allbery closed the bug due to concerns about the viability of source reproducibility.
linuxsampler (benchmarking issue)
antlr3 (date)
rpm (embeds too many build details)
seamonkey (date)
conky (date and ordering-related issue)
lsp-plugins-shared (date/copyright year issue)
build-compare
helix (ASLR-related non-determinism)
intel-graphics-compiler (ASLR)
sphinxcontrib-mermaid
mkdocs-material
apophenia
lapackpp
blaspp
mysql-connector-java, java-21-openjdk, apache-ivy, maven-assembly-plugin, eclipse, antlr3, groovy18, hbci4java, ini4j, hppc, checkstyle, glassfish-jaxb, tycho, xmvn, mockito, languagetool, json-lib, jnr-unixsocket, jnr-ffi, jnr-enxio, jboss-jaxrs-2.0-api, istack-commons, rxtx-java, glassfish-jaxb, glassfish-hk2, findbugs, docker-client-java, maven, xmvn-connector-ivy, xmlstreambuffer, checkstyle, cglib, bean-validation-api, aws-sdk-java, javapackages-tools, ant, scala, osgi-service-log, jmdns, xml-security, super-csv, osgi-service-jdbc, msv, junit5, jsr-311, jersey, itextpdf, httpcomponents-asyncclient, ed25519-java, jnacl, javaparser, picocli, freemarker, extra166y, javaparser, xstream, woodstox-core, uom-lib, unit-api, uncommons-maths, tycho, treelayout, tiger-types, super-csv, stax-ex, stax2-api, sqlite-jdbc, reflectasm, prometheus-simpleclient-java, powermock, paranamer, opennlp, netty3, mybatis, morfologik-stemming, minlog, maven-archetype, mariadb-java-client, logback, kryo, jsonp, jopt-simple, jnr-posix, jnr-constants, jnr-a64asm, jfreechart, jffi, jetty-schemas, jetty-minimal, jeromq, jctools, jcsp, jboss-websocket-1.0-api, jboss-marshalling, jboss-logmanager, jboss-logging, javaewah, jatl, janino, jackson-modules-base, jackson-jaxrs-providers, jackson-datatypes-collections, jackson-dataformat-xml, jackson-dataformats-text, jackson-dataformats-binary, indriya, google-gson, glassfish-websocket-api, glassfish-transaction-api, glassfish-jsp, glassfish-jax-rs-api, glassfish-hk2, glassfish-fastinfoset, felix-scr, felix-gogo-shell, felix-gogo-command, disruptor, apache-commons-ognl, apache-commons-math, apache-commons-csv, antlr4, jettison, sisu, maven
armhf and i386 builds due to Debian bug #1052257. [ ][ ][ ][ ]
ionice priority. [ ]
dinstall again. [ ]
schroot running the tested suite. [ ][ ]
diffoscope --version (as suggested by Fay Stegerman on our mailing list) [ ], worked on an openQA credential issue [ ] and also made some changes to the machine-readable reproducible metadata, reproducible-tracker.json [ ]. Lastly, Roland Clobus added instructions for manual configuration of the openQA secrets [ ].
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
Series: | Divide #1 |
Publisher: | Tor |
Copyright: | 2021 |
ISBN: | 1-250-23634-7 |
Format: | Kindle |
Pages: | 476 |
Welcome to the August 2023 report from the Reproducible Builds project!
In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. If you are interested in contributing to the project, please visit our Contribute page on our website.
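In its simplest form, that consensus check is just comparing checksums of two independently-made builds, reaching for diffoscope (the project's comparison tool, which comes up below) when they differ; a sketch with made-up file names:
$ sha256sum first-build/foo.deb second-build/foo.deb   # identical hashes mean the build reproduced
$ diffoscope first-build/foo.deb second-build/foo.deb  # otherwise, show exactly where the two differ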
serde_derive macro as a precompiled binary. As Ax Sharma writes:
The move has generated a fair amount of push back among developers who worry about its future legal and technical implications, along with a potential for supply chain attacks, should the maintainer account publishing these binaries be compromised.
After intensive discussions, use of the precompiled binary was phased out.
[ ] an overview about reproducible builds, the past, the presence and the future. How it started with a small [meeting] at DebConf13 (and before), how it grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an executive order of the president of the United States. (HTML slides)
Holger repeated the talk later in the month at Chaos Communication Camp 2023 in Zehdenick, Germany. A video of the talk is available online, as are the HTML slides.
Vagrant walks us through his role in the project where the aim is to ensure identical results in software builds across various machines and times, enhancing software security and creating a seamless developer experience. Discover how this mission, supported by the Software Freedom Conservancy and a broad community, is changing the face of Linux distros, Arch Linux, openSUSE, and F-Droid. They also explore the challenges of managing random elements in software, and Vagrant's vision to make reproducible builds a standard best practice that will ideally become automatic for users. Vagrant shares his work in progress and their commitment to the last mile problem.
The episode is available to listen (or download) from the Sustain podcast website. As it happens, the episode was recorded at FOSSY 2023, and the video of Vagrant's talk from this conference (Breaking the Chains of Trusting Trust) is now available on Archive.org. It was also announced that Vagrant Cascadian will be presenting at the Open Source Firmware Conference in October on the topic of Reproducible Builds All The Way Down.
hello-traditional
package from Debian. The entire thread can be viewed from the archive page, as can Vagrant Cascadian's reply.
247, 248 and 249 were uploaded to Debian unstable by Chris Lamb, who also added documentation for the new specialize_as method and expanded the documentation of the existing specialize as well [ ]. In addition, Fay Stegerman added specialize_as and used it to optimise .smali comparisons when decompiling Android .apk files [ ], Felix Yan and Mattia Rizzolo corrected some typos in code comments [ , ], and Greg Chabala merged the RUN commands into a single layer in the package's Dockerfile [ ], thus greatly reducing the final image size. Lastly, Roland Clobus updated tool descriptions to mark that the xb-tool has moved to a different package within Debian [ ].
timestamp_in_documentation_using_sphinx_zzzeeksphinx_theme
toolchain issue.
arimo (modification time in build results)
apptainer (random Go build identifier)
arrow (fails to build on single-CPU machines)
camlp (parallelism-related issue)
developer (Go ordering-related issue)
elementary-xfce-icon-theme (font-related problem)
gegl (parallelism issue)
grommunio (filesystem ordering issue)
grpc (drop nondeterministic log)
guile-parted (parallelism-related issue)
icinga (hostname-based issue)
liquid-dsp (CPU-oriented problem)
memcached (package fails to build far in the future)
openmpi5/openpmix (date/copyright year issue)
openmpi5 (date/copyright year issue)
orthanc-ohif+orthanc-volview (ordering related issue plus timestamp in a Gzip)
perl-Net-DNS (package fails to build far in the future)
postgis (parallelism issue)
python-scipy (uses an arbitrary build path)
python-trustme (package fails to build far in the future)
qtbase/qmake/goldendict-ng (timestamp-related issue)
qtox (date-related issue)
ring (filesystem ordering related issue)
scipy (1 & 2) (drop arbitrary build path and filesystem-ordering issue)
snimpy (1 & 3) (fails to build on single-CPU machines as well as far in the future)
tango-icon-theme (font-related issue)
reproducible-tracker.json data file. [ ]
pbuilder.tgz for Debian unstable due to #1050784. [ ][ ]
usrmerge. [ ][ ]
armhf nodes (wbq0 and jtx1a) as down; investigation is needed. [ ]
buildd.debian.org. [ ][ ]
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org