STRICT_R_HEADERS was proposed years ago in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse dependencies (i.e. packages which use Rcpp), we identified ninety-four packages in total which needed a change. We provided either an emailed patch or a GitHub pull request to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded. Hence, we could make the case to CRAN (who were always CC'ed on the monthly nag emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13. So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else whom I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM). Beyond the aforementioned change, which will now automatically set
STRICT_R_HEADERS (unless one opts out, which one can), a number of nice pull requests by several contributors are included in this release:
Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).
Changes in Rcpp release version 1.0.8 (2022-01-11)
- Changes in Rcpp API:
- STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.
- A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).
- Rcpp:::CxxFlags() now quotes the include path generated (Kevin in #1189 closing #1188).
- New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and the default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).
- Changes in Rcpp Attributes:
- Changes in Rcpp Documentation:
- Changes in Rcpp Deployment:
Questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2822 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub.
Version 0.4.7.3-alpha of Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.
Fabian's original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.
After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
I recently noticed that the "memstick" images are unfortunately
still not 100% reproducible.
198 to Debian, as well as made the following changes:
.dsc field values. [ ]
/usr/lib/x86_64-linux-gnu to our local binary search path. [ ]
has_same_content_as logging calls. [ ]
token variable with an anonymously-named variable instead to remove extra lines. [ ]
.pyc files. This fixes test failures on big-endian machines. [ ]
binary-with-bad-dynamic-table. [ ]
debian/control. [ ]
GNU_BUILD_ID field has been modified [ ]. Thank you for your contributions!
Version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.
/boot partition size. [ ]
apt-daily-upgrade services [ ], failed user@ systemd units [ ][ ] as well as generic build failures [ ].
arm64 architecture nodes hosted at/by codethink.co.uk. [ ]
JH_2538_2592_ZNP_UART_20211222.hex) - while it's possible to do USB directly with the CC2538, my board doesn't have those bits, so going the external USB UART route is easier. The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_mode normal
buspirate_vreg 1
buspirate_pullup 0
transport select jtag
source [find target/cc2538.cfg]
$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> mww 0x400D300C 0x7F800
> mww 0x400D3008 0x0205
> shutdown
shutdown command invoked
Connection closed by foreign host.
$ git clone https://github.com/JelmerT/cc2538-bsl.git
$ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
Opening port /dev/ttyUSB1, baud 500000
Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
Firmware file: Intel Hex
Connecting to target...
CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
Primary IEEE Address: 00:12:4B:00:22:22:22:22
Performing mass erase
Erasing 524288 bytes starting at address 0x00200000
Erase done
Writing 524256 bytes starting at address 0x00200000
Write 232 bytes at 0x0027FEF88
Write done
Verifying by comparing CRC32 calculations.
Verified (match: 0x74f2b0a1)
python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
cc2538-network.json and modified the coordinator_ieee to be the new device's MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
"RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed." error. After that I updated my
udev rules to map the CC2538 to
/dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as
unk_manufacturer. Fixing that involved editing
/etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in
/etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up
sqlite to modify the Zigbee database:
$ sqlite3 /etc/homeassistant/zigbee.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> .quit
Depends fields, which are Debian's way of talking about runtime dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code)
Build-Depends on the libraries it needs, and those
Depends on their under-libraries, so that everything needed is installed.

What do dependencies mean and what are they for anyway?

In systems where packages declare dependencies on other packages, it generally becomes necessary to support versioned dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B. Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian's dependency system can represent that. So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different. These different package management systems have different practices and different meanings for dependencies. (Interestingly, the Python world also has debates about the meaning and proper use of dependency versions.)

The epistemological problem

Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory. I.e., whether a new B, with potentially-breaking changes, will actually break A. Sometimes tooling can be used which calculates this (eg, the Debian
shlibdeps system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations. Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are three cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work. And, I am not aware of any dependency system that has an explicit machine-readable representation for the unknown state, so that they can say something like "A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work". (Sometimes statements like that can be found in human-readable docs.)

That leaves two possibilities for the semantics of a dependency "A depends B, version(s) V..W":

Precise: A will definitely work if B matches V..W, and
Optimistic: We have no reason to think B breaks with any of V..W.

At first sight the latter does not seem useful, since how would the package manager find a working combination? Taking Debian as an example, which uses optimistic version dependencies, the answer is as follows: The primary information about what package versions to use is not only the dependencies, but mostly in which Debian release is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are "current".)
| |Precise|Optimistic|
|People involved in version management|Package developers|Package developers, distribution QA and release managers.|
|Package developers declare versions V and dependency ranges V..W so that|It definitely works.|A wide range of B can satisfy the declared requirement.|
|The principal version data used by the package manager|Only dependency versions.|Contextual, eg, Releases - set(s) of packages available.|
|Version dependencies are for|Selecting working combinations (out of all that ever existed).|Sequencing (ordering) of updates; QA.|
|Expected use pattern by a downstream|Downstream can combine any…|Use a particular release of the whole system. Mixing-and-matching requires additional QA and remedial work.|
|Downstreams are protected from breakage by|Pessimistically updating versions and dependencies whenever anything might go wrong.|Whole-release QA.|
|A substantial deployment will typically contain|Multiple versions of many packages.|A single version of each package, except where there are actual incompatibilities which are too hard to fix.|
|Package updates are driven by|Depending package updates the declared metadata.|Depended-on package is updated in the repository for the work-in-progress release.|
Cargo.toml where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.

Handling breakage

Rust packages in Debian should already be provided with autopkgtests so that ci.debian.net will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, Debian testing. To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate
Breaks, and either file an RC bug against the depending package, or update it. This can be done after the upload of the base package. Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking. My proposal prioritises the ability to make progress in the core, over stability and in particular over retaining leaf packages. This is not Debian's usual approach but given the Rust ecosystem's practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually really that hard, but it's still work and the question is what happens if the work doesn't get done. After all we always have a shortage of effort - and we probably still will, even if we get rid of the makework clerical work of patching dependency versions everywhere (so that usually no work is needed on depending packages).

Exceptions to the one-version rule

There will have to be some packages that we need to keep multiple versions of. We won't want to update every depending package manually when this happens. Instead, we'll probably want to set a version number split: rdepends which want version <X will get the old one.

Details - a sketch

I'm going to sketch out some of the details of a scheme I think would work. But I haven't thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we'll have to do some detailed review and consider a whole bunch of edge cases I'm glossing over. The dependency specification consists of two halves: the depending
Depends (or, for a leaf package,
Build-Depends) and the base
Provides. Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.

Dependency encoding proposal

Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one. So we retain roughly the existing semver translation for
Depends fields. But we drop all local patching of dependency versions. Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a
.deb, the binary package build adds
Provides for every previous version. The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke. We will have a choice, then:

Breakage handling - update broken depending packages individually

If there are only a few packages that are broken, for each broken dependency, we add an appropriate
Breaks to the base binary package. (The version field in the
Breaks should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.) We can then do one of the following:
Provides lines to be generated - withdrawing the
Provides lines for earlier APIs. Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which
Provides we still want to retain. This is like declaring
Breaks for all the rdepends. We should do it if many rdepends are affected. Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often be a mass bug filing campaign, or large update campaign.

Breakage handling - multiple versions

Sometimes there will be a big API rewrite in some package, and we can't easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial. When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name. This is analogous to how key C/C++ libraries are handled.

Downsides of this scheme

The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people building Rust code using Debian-supplied libraries. Another less obvious problem is that because there is no such thing as
Build-Breaks (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.

Semver violation

I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian's millions of users. This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons: The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely capable of insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely. The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile. Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas. We should look at RustSec and other upstream ecosystem tracking and authorship information.

Difficulties for another day

As I said, I see some other issues with Rust in Debian.
Hidden Valley Road (2020), Robert Kolker

A compelling and disturbing account of the Galvin family, six of whom were diagnosed with schizophrenia, which details a journey through the study and misunderstanding of the condition. The story of the Galvin family offers a parallel history of the science of schizophrenia itself, from the era of institutionalisation, lobotomies and the 'schizo mother', to the contemporary search for genetic markers for the disease... all amidst fundamental disagreements about the nature of schizophrenia and, indeed, of all illnesses of the mind. Samples of the Galvins' DNA informed decades of research which, curiously, continues to this day, potentially offering paths to treatment, prediction and even eradication of the disease, although on this last point I fancy that I detect a kind of neo-Victorian hubris that we alone will be the ones to find a cure. Either way, a gentle yet ultimately tragic view of a curiously 'American' family, where the inherent lack of narrative satisfaction brings a frustration and sadness of its own.
Islands of Abandonment: Life in the Post-Human Landscape (2021), Cal Flyn

In this disarmingly lyrical book, Cal Flyn addresses the twin questions of what happens after humans are gone and how far can our damage to nature be undone. From the forbidden areas of post-war France to the mining regions of Scotland, Islands of Abandonment explores the extraordinary places where humans no longer live in an attempt to give us a glimpse into what happens when mankind's impact on nature is, for one reason or another, forced to stop. Needless to say, if anxieties in this area are not curdling away in your subconscious mind, you are probably in some kind of denial. Through a journey into desolate, eerie and ravaged areas in the world, this artfully-written study offers profound insights into human nature, eschewing the usual dry sawdust of Wikipedia trivia. Indeed, I summed it up to a close friend remarking that, through some kind of hilarious administrative error, the book's publisher accidentally dispatched a poet instead of a scientist to write this book. With glimmers of hope within the (mostly) tragic travelogue, Islands of Abandonment is not only a compelling read, but also a fascinating insight into the relationship between Nature and Man.
The Anatomy of Fascism (2004), Robert O. Paxton

Everyone is absolutely sure they know what fascism is... or at least they feel confident choosing from a buffet of features to suit the political mood. To be sure, this is not a new phenomenon: even as 'early' as 1946, George Orwell complained in Politics and the English Language that "the word Fascism has now no meaning except in so far as it signifies something not desirable". Still, it has proved uncommonly hard to define the core nature of fascism and what differentiates it from related political movements. This is still of great significance in the twenty-first century, for the definition ultimately determines where the powerful label of 'fascist' can be applied today. Part of the enjoyment of reading this book was having my own cosy definition thoroughly dismantled and replaced with a robust system of abstractions and common themes. This is achieved through a study of the intellectual origins of fascism and how it played out in the streets of Berlin, Rome and Paris. Moreover, unlike Strongmen (see above), fascisms that failed to gain meaningful power are analysed too, including Oswald Mosley's British Union of Fascists. Curiously enough, Paxton's own definition of fascism is left to the final chapter, and by the time you reach it, you get an anti-climactic feeling of it being redundant. Indeed, whatever it actually is, fascism is really not quite like any other 'isms' at all, so to try and classify it like one might be a mistake. In his introduction, Paxton warns that many of those infamous images associated with fascism (e.g. Hitler in Triumph of the Will, Mussolini speaking from a balcony, etc.) have the ability to induce facile errors about the fascist leader and the apparent compliance of the crowd. (Contemporary accounts often record how sceptical the common man was of the leader's political message, even if they were transfixed by their oratorical bombast.) As it happens, I thus believe I had something of an advantage of reading this via an audiobook, and completely avoided re-absorbing these iconic images. To me, this was an implicit reminder that, however you choose to reduce it to a definition, fascism is undoubtedly the most visual of all political forms, presenting itself to us in vivid and iconic primary images: ranks of disciplined marching youths, coloured-shirted militants beating up members of demonised minorities; the post-war pictures from the concentration camps... Still, regardless of how you choose to read it, The Anatomy of Fascism is a powerful book that can teach a great deal about fascism in particular and history in general.
What Good are the Arts? (2005), John Carey

What Good are the Arts? takes a delightfully sceptical look at the nature of art, and cuts through the sanctimony and cant that inevitably surrounds them. It begins by revealing the flaws in lofty aesthetic theories and, along the way, debunks the claims that art makes us better people. They may certainly bring joy into your life, but by no means do the fine arts make you automatically virtuous. Carey also rejects the entire enterprise of separating things into things that are art and things that are not, making a thoroughly convincing case that there is no transcendental category containing so-called 'true' works of art. But what is perhaps equally important to what Carey is claiming is the way he does all this. As in, this is an extremely enjoyable book to read, with not only a fine sense of pace and language, but a devilish sense of humour as well. To be clear, What Good are the Arts? is no crotchety monograph: Leo Tolstoy's What Is Art? (1897) is hilarious to read in similar ways, but you can't avoid feeling its cantankerous tone holds Tolstoy's argument back. By contrast, Carey makes his argument in a playful sort of manner, in a way that made me slightly sad to read other polemics throughout the year. It's definitely not that modern genre of boomer jeremiad about the young, political correctness or, heaven forbid, 'cancel culture'... which, incidentally, made Carey's 2014 memoir, The Unexpected Professor, something of a disappointing follow-up. Just for fun, Carey later undermines his own argument by arguing at length for the value of one art in particular. Literature, Carey asserts, is the only art capable of reasoning and the only art with the ability to criticise. Perhaps so, and Carey spends a chapter or so contending that fiction has the exclusive power to inspire the mind and move the heart towards practical ends... or at least far better than any work of conceptual art. Whilst reading this book I found myself taking down innumerable quotations and laughing at the jokes far more than I disagreed. And the sustained and intellectual style of polemic makes this a pretty strong candidate for my favourite overall book of the year.
fsync() system call to ensure that writes are flushed to disk. Practically speaking, what this meant is that a page of disk-backed memory would be written to the disk as soon as an event occurred that triggered a log message. Because of potential performance implications,
fsync() was not typically enabled for most log files. However, due to the more sensitive nature of authentication logs, it was often enabled for
/var/log/auth.log. In the first decade of the 2000s, there was a fairly unsophisticated worm loose on the Internet that would probe sshd with some common username/password combinations. The worm would pause for a second or so between login attempts, most likely in an effort to avoid automated security responses. The effect was that a system being probed by this worm would generate a disk write every second, with a very distinct audible signature from the hard drive. I think this situation is a fun demonstration of a side-channel data leak. It's primitive and doesn't leak very much information, but it was certainly enough to make some inference about the state of the system in question. Of course, side-channel leakage issues have been a concern for ages, but I like this one for its simplicity. It was something that could be explained and demonstrated easily, even to somebody with relatively limited understanding of how computers work, unlike, for instance, measuring electromagnetic emanations from CPU power management units. For a different take on the sounds of a computing infrastructure, Peep (The Network Auralizer) won an award at a USENIX conference long, long ago. I'd love to see a modern deployment of such a system. I'm sure you could build something for your cloud deployment using something like AWS EventBridge or Amazon SQS fairly easily. For more on research into actual real-world side-channel attacks, you can read A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography or A Survey of Electromagnetic Side-Channel Attacks and Discussion on their Case-Progressing Potential for Digital Forensics.
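As a concrete illustration of the write-then-flush pattern described above, a minimal sketch might look like the following; this is illustrative only, in Go rather than the syslog daemon's actual code path, and the file name and message are made up:

package main

import (
	"log"
	"os"
)

func main() {
	// Mimic a logger that forces each entry out to disk, the way syslog
	// daemons often did for auth.log. Illustrative only.
	f, err := os.OpenFile("auth.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.WriteString("Failed password for root from 192.0.2.1\n"); err != nil {
		log.Fatal(err)
	}
	// Flush the entry immediately; on a spinning disk this per-event write
	// is the audible signature described above.
	if err := f.Sync(); err != nil {
		log.Fatal(err)
	}
}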
startx and a systemd service:
[Unit]
Description=X11 session for bernat
After=graphical.target systemd-user-sessions.service

[Service]
User=bernat
WorkingDirectory=~

PAMName=login
Environment=XDG_SESSION_TYPE=x11
TTYPath=/dev/tty8
StandardInput=tty
UnsetEnvironment=TERM

UtmpIdentifier=tty8
UtmpMode=user

StandardOutput=journal

ExecStartPre=/usr/bin/chvt 8
ExecStart=/usr/bin/startx -- vt8 -keeptty -verbose 3 -logfile /dev/null
Restart=no

[Install]
WantedBy=graphical.target
systemd-user-sessions.service, which enables user logins after boot by removing the /run/nologin file. With
User=bernat, the unit is started with the identity of the specified user. This implies that
Xorg does not run with elevated privileges.
With PAMName=login, the executed process is registered as a PAM session for the
login service, which includes pam_systemd. This module registers the session to the systemd login manager. To be effective, we also need to allocate a TTY with
TTYPath=/dev/tty8. When the TTY is active, the user is granted additional access to local devices: notably display, sound, keyboard, mouse. These additional rights are needed to get Xorg working rootless.1 The
TERM environment variable is unset because it would be set to
linux by systemd as a result of attaching the standard input to the TTY. Moreover, we inform
pam_systemd we want an X11 session with XDG_SESSION_TYPE=x11. Otherwise,
logind considers the session idle unless it receives input on the TTY. Software relying on the idle hint from
logind would be ineffective.2
The UtmpIdentifier=tty8 and UtmpMode=user directives are just a nice addition to register the session in utmp. To allow
Xorg to take control of the local devices,
chvt 8 switches to the allocated TTY.3
StandardOutput=journal, combined with the
-verbose 3 -logfile /dev/null flags for
Xorg, puts the logs from the X server into the journal instead of using a file. While equal to the default value, the
Restart=no directive highlights we do not want this unit to be restarted. This ensures the loginless session is only available on boot. By default, startx runs the user's
.xinitrc. If you want to run Kodi instead, start it from there. Put the unit file in
/etc/systemd/system/x11-autologin.service and enable it with
systemctl enable x11-autologin.service. Xorg is now running rootless and logging into the journal. After using it for a few months, I didn't notice any regression compared to LightDM with autologin.
For more details on how logind provides access to devices, see this blog post. The method names do not match the current implementation, but the concepts are still correct.
Xorg takes control of the session when the TTY is active.
xf86OpenConsole: VT_ACTIVATE failed: Operation not permitted.
/etc/ssl/certs/ca-certificates.crt for years so is not impacted by this change. That CA file is generated by update-ca-certificates and is part of the ca-certificates package. We have also had another go at tamping down the nagging WordPress does about updates, as you cannot use the automatic updates through WordPress but via the usual Debian system. I see we are not fully there as WordPress has a site health page that doesn't like things turned off. The two bugs fixed in 5.8.2 I've not personally hit, but they might help someone out there. In any case, an update is always good.

Next stop 5.9

The next planned release is in late January 2022. I'm sure there will be a new default theme, but they are planning on making big changes around the blocks and styles to make it easier to customise the look.
Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Changes in RProtoBuf version 0.4.18 (2021-12-15)
FindMethodByName() (Adam Cozzette in #72).
- CI use was updated first at Travis, later at GitHub and now uses r-ci (Dirk in #74 and (parts of) #76).
- The (to the best of our knowledge) unused minimal RPC mechanism has been removed, retiring one method and one class as well as the import of the RCurl package (Dirk in #76).
toJSON() method supports two (upstream) formatting toggles (Vitali Spinu in #79 with minor edit by Dirk).
- Windows UCRT builds are now supported (Jeroen in #81, Dirk and Tomas Kalibera in #82).
integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged
nanotime internals in S4 but also added new S4 types for periods, intervals and durations. The NEWS snippet adds more details.
Thanks to my CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Changes in version 0.3.5 (2021-12-14)
- Applied patch by Tomas Kalibera for Windows UCRT under the upcoming R 4.2.0 expected for April.
We also have a diff to the previous version thanks to my CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Changes in version 0.2.10 (2021-12-14)
- Switch CI use to r-ci
- Applied patch by Tomas Kalibera for Windows UCRT under the upcoming R 4.2.0 expected for April.
gcc in what is otherwise unchanged and years-old C code. In addition, two other warnings were fixed right after the previous release. Thanks to CRANberries, you can also look at the most recent diff. If you like this or other open-source work I do, you can now sponsor me at GitHub.
/usr/share/metainfo, which was very inefficient. To shorten a long story, the old caching code was rewritten with the new concepts of caches not necessarily being system-wide and caches existing for more fine-grained groups of files in mind. The new caching code uses Richard Hughes' excellent libxmlb internally for memory-mapped data storage. Unlike LMDB, libxmlb knows about the XML document model, so queries can be much more powerful and we do not need to build indices manually. The library is also already used by GNOME Software and fwupd for parsing of (refined) AppStream metadata, so it works quite well for that usecase. As a result, search queries via libappstream are now a bit slower (very much depends on the query, roughly 20% on average), but can be much more powerful. The caching code is a lot more robust, which should speed up startup time of applications. And in addition to all of that, the
AsPool class has gained a flag to allow it to monitor AppStream source data for changes and refresh the cache fully automatically and transparently in the background. All software written against the previous version of the libappstream library should continue to work with the new caching code, but to make use of some of the new features, software using it may need adjustments. A lot of methods have been deprecated too now.

2. Experimental compose support

Compiling MetaInfo and other metadata into AppStream collection metadata, extracting icons, language information, refining data and caching media is an involved process. The appstream-generator tool does this very well for data from Linux distribution sources, but the tool is also pretty heavyweight with lots of knobs to adjust, an underlying database and a complex algorithm for icon extraction. Embedding it into other tools via anything else but its command-line API is also not easy (due to D's GC initialization, and because it was never written with that feature in mind). Sometimes a simpler tool is all you need, so the libappstream-compose library as well as
appstreamcli compose are being developed at the moment. The library contains building blocks for developing a tool like appstream-generator while the cli tool allows you to simply extract metadata from any directory tree, which can be used by e.g. Flatpak. For this to work well, a lot of appstream-generator's D code is translated into plain C, so the implementation stays identical but the language changes. Ultimately, the generator tool will use libappstream-compose for any general data refinement, and only implement things necessary to extract data from the archive of distributions. New applications (e.g. for new bundling systems and other purposes) can then use the same building blocks to implement new data generators similar to appstream-generator with ease, sharing much of the code that would be identical between implementations anyway.

3. Supporting user input controls

Want to advertise that your application supports touch input? Keyboard input? Has support for graphics tablets? Gamepads? Sure, nothing is easier than that with the new
control relation item and
supports relation kind (since 0.12.11 / 0.15.0, details):
<supports>
  <control>pointing</control>
  <control>keyboard</control>
  <control>touch</control>
  <control>tablet</control>
</supports>
display_length relation item to require or recommend a minimum (or maximum) display size that the described GUI application can work with. For example:
<requires>
  <display_length compare="ge">360</display_length>
</requires>
display_length value will be checked against the longest edge of a display by default (by explicitly specifying the shorter edge, this can be changed). This feature is available since 0.13.0, details. See also Tobias Bernard's blog entry on this topic.

4. Tags

This is a feature that was originally requested for the LVFS/fwupd, but one of the great things about AppStream is that we can take very project-specific ideas and generalize them so something comes out of them that is useful for many. The new
tags tag allows people to tag components with an arbitrary namespaced string. This can be useful for project-internal organization of applications, as well as to convey certain additional properties to a software center, e.g. an application could mark itself as featured in a specific software center only. Metadata generators may also add their own tags to components to improve organization. AppStream gives no recommendations as to how these tags are to be interpreted except for them being a strictly optional feature. So any meaning is something clients and metadata authors need to negotiate. It therefore is a more specialized usecase of the already existing
custom tag, and I expect it to be primarily useful within larger organizations that produce a lot of software components that need sorting. For example:
<tags>
  <tag namespace="lvfs">vendor-2021q1</tag>
  <tag namespace="plasma">featured</tag>
</tags>
display_length tags, resolved a few minor issues and also added a button to instantly copy the generated output to clipboard so people can paste it into their project. If you want to create a new MetaInfo file, this tool is the best way to do it! The creator tool will also not transfer any data out of your webbrowser, it is strictly a client-side application. And that is about it for the most notable changes in AppStream land! Of course there is a lot more, additional tags for the LVFS and content rating have been added, lots of bugs have been squashed, the documentation has been refined a lot and the library has gained a lot of new API to make building software centers easier. Still, there is a lot to do and quite a few open feature requests too. Onwards to 1.0!
I'll explore how I go about identifying issues to work on, learn more about the specific issues, recreate the problem locally, isolate the potential causes, dissect the problem into identifiable parts, and adapt the packaging and/or source code to fix the issues. A video recording of the talk is available on archive.org.
In the role of a packager, updating packages is a recurring task. For some projects, a packager is involved in upstream maintenance, or well written release notes make it easy to figure out what changed between the releases. This isn't always the case, for instance with some small project maintained by one or two people somewhere on GitHub, and it can be useful to verify what exactly changed. Diffoscope can help determine the changes between package releases. [ ]
rebuilderd version 0.16.3 on our mailing list this month, adding support for builds to generate multiple artifacts at once.
file(1) claims if they are XML files or if they are named
test_android_manifest. [ ]
.changes files if they contain non-printable characters. [ ]
type. [ ]
%-style string interpolations into f-strings or
str.format. [ ]
itertools top-level module directly. [ ]
try.diffoscope.org to respond in tests. (#998360) In addition Brandon Maier corrected an issue where parts of large diffs were missing from the output [ ], Zbigniew Jędrzejewski-Szmek fixed some logic in the
assert_diff_startswith method [ ] and Mattia Rizzolo updated the packaging metadata to denote that we support both Python 3.9 and 3.10 [ ] as well as a number of warning-related changes [ ][ ]. Vagrant Cascadian also updated the diffoscope package in GNU Guix [ ][ ].
packages.yml YAML file and a sibling README.md file shows how to classify packages too. Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report for openSUSE and Vagrant Cascadian updated a link on our website to link to the GNU Guix reproducibility testing overview [ ].
opensbi, which later led to a better patch on their mailing list.
snapshot.reproducible-builds.org. [ ]
dstat package available on all Debian based systems. [ ]
virt64b-armhf as down. [ ]
unstable until an issue in the snapshot system is resolved. [ ]
EtherType field to indicate the Payload type. I also stole the idea of using a CRC at the end of the Frame to check for corruption, as well as the specific CRC method (
0xedb88320 as the polynomial). Lastly, I added a
callsign field to make life easier on ham radio frequencies if I was ever to seriously attempt to use a variant of this protocol over the air with multiple users. However, given this scheme is not a commonly used scheme, it's best practice to use a nearby radio to identify your transmissions on the same frequency while testing or use a Faraday box to test without transmitting over the airwaves. I added the callsign field in an effort to lean into the spirit of the Part 97 regulations, even if I relied on a phone emission to identify the Frames. As an aside, I asked the ARRL for input here, and their stance to me over email was I'd be OK according to the regs if I were to stick to UHF and put my callsign into the BPSK stream using a widely understood encoding (even with no knowledge of PACKRAT, the callsign is ASCII over BPSK and should be easily demodulatable for followup with me). Even with all this, I opted to use FM phone to transmit my callsign when I was active on the air (specifically, using an SDR and a small bash script to automate transmission while I watched for interference or other band users). Right, back to the Frame:
type FrameType [2]byte

type Frame struct {
	Destination net.HardwareAddr
	Source      net.HardwareAddr
	Callsign    []byte
	Type        FrameType
	Payload     []byte
	CRC         uint32
}
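Since Go's standard library already implements a CRC32 with the 0xedb88320 polynomial (crc32.IEEE), a frame checksum could be computed along the following lines. This is an illustrative sketch only: checksumFrame is a hypothetical helper, and the byte layout fed into it is an assumption, not the post's actual serialization.

package main

import (
	"fmt"
	"hash/crc32"
)

// checksumFrame CRCs the serialized header followed by the payload.
// crc32.IEEE in hash/crc32 is the reversed polynomial 0xedb88320
// mentioned above, so ChecksumIEEE and Update can be used directly.
func checksumFrame(header, payload []byte) uint32 {
	crc := crc32.ChecksumIEEE(header)
	return crc32.Update(crc, crc32.IEEETable, payload)
}

func main() {
	header := []byte("destination|source|callsign|type") // stand-in serialization
	payload := []byte("hello, packrat")
	fmt.Printf("frame CRC: 0x%08x\n", checksumFrame(header, payload))
}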
sync sequence, which the sender will transmit before the Frame, while the receiver listens for that sequence to know when it's in byte alignment with the symbol stream. My
sync phrase is []byte{'U', 'f', '~'}, which works out to be a very pleasant bit sequence of
01010101 01100110 01111110. It's important to have soothing preambles for your Frames. We need all the good energy we can get at this point.
var (
	FrameStart          = []byte{'U', 'f', '~'}
	FrameMaxPayloadSize = 1500
)
FrameType values for the
type field, which I can use to determine what is done with that data next, something Ethernet was originally missing, but has since grown to depend on (who needs Length anyway? Not me. See below!)
|Type|Meaning|Value|
|Raw|Bytes in the Payload field are opaque and not to be parsed.|[2]byte{0x00, 0x01}|
|IPv4|Bytes in the Payload field are an IPv4 packet.|[2]byte{0x00, 0x02}|
var (
	FrameTypeRaw  = FrameType{0, 1}
	FrameTypeIPv4 = FrameType{0, 2}
)
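To make the "what is done with that data next" part concrete, here is a rough sketch of a dispatcher keyed on the type field. This is my own illustration using the Frame and FrameType definitions above; the handlers are placeholders, not code from the post.

// dispatch routes a received Frame's Payload based on its Type field.
func dispatch(f Frame) error {
	switch f.Type {
	case FrameTypeRaw:
		// Opaque bytes: hand them straight to the application.
		return handleRaw(f.Payload)
	case FrameTypeIPv4:
		// An IPv4 packet: this is where it would be injected into the
		// network stack (for example via a tun device).
		return handleIPv4(f.Payload)
	default:
		// Unknown frame types are dropped.
		return nil
	}
}

// Placeholder handlers for the sketch above.
func handleRaw(payload []byte) error  { return nil }
func handleIPv4(payload []byte) error { return nil }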
sync bit pattern in the symbols we're receiving from our BPSK demodulator. There's some smart ways to do this, but given that I'm not much of a smart man, I again decided to go for simple instead. Given our incoming vector of symbols (which are still float values), prepend one at a time to a vector of floats that is the same length as the sync phrase, and compare against the sync phrase, to determine if we're in sync with the byte boundary within the symbol stream. The only trick here is that because we're using BPSK to modulate and demodulate the data, post phaselock we can be 180 degrees out of alignment (such that a +1 is demodulated as -1, or vice versa). To deal with that, I check against both the sync phrase as well as the inverse of the sync phrase (for example, [1, -1, 1] as well as [-1, 1, -1]), where if the inverse sync is matched, all symbols to follow will be inverted as well. This effectively turns our symbols back into bits, even if we're flipped out of phase. Other techniques like NRZI will represent a 0 or 1 by a change in phase state, which is great, but can often cascade into long runs of bit errors, and is generally more complex to implement. That representation isn't ambiguous, given you look for a phase change, not the absolute phase value, which is incredibly compelling. Here's a notional example of how I've been thinking about the phrase sliding window and how I've been thinking of the checks. Each row is a new symbol taken from the BPSK receiver, and pushed to the head of the sliding window, moving all symbols back in the vector by one.
var (
	sync            = []float64{ /* ... */ }
	buf             = make([]float64, len(sync))
	incomingSymbols = []float64{ /* ... */ }
)

for _, el := range incomingSymbols {
	copy(buf, buf[1:])
	buf[len(buf)-1] = el
	if compare(sync, buf) {
		// we're synced!
		break
	}
}
|buf (sliding window)|sync|inverted sync|
|[]float64{0, …, 0}|[]float64{-1, …, -1}|[]float64{1, …, 1}|
|[]float64{0, …, 1}|[]float64{-1, …, -1}|[]float64{1, …, 1}|
|[more bits in]|[]float64{-1, …, -1}|[]float64{1, …, 1}|
|[]float64{1, …, 1}|[]float64{-1, …, -1}|[]float64{1, …, 1}|
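The compare helper used in the loop above is not shown in this excerpt. One possible sketch, assuming a simple sign match plus the inverted check described earlier (my assumption, not the post's actual implementation), is:

// compare reports whether every symbol in the window has the same sign as
// the corresponding symbol of the sync phrase.
func compare(sync, buf []float64) bool {
	if len(sync) != len(buf) {
		return false
	}
	for i := range sync {
		if (sync[i] >= 0) != (buf[i] >= 0) {
			return false
		}
	}
	return true
}

// compareInverted covers the 180-degrees-out-of-phase case: if this matches,
// every symbol that follows needs to be flipped as well.
func compareInverted(sync, buf []float64) bool {
	inv := make([]float64, len(buf))
	for i, v := range buf {
		inv[i] = -v
	}
	return compare(sync, inv)
}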
ansible-galaxy) behave in regard to clashes in naming of the hosted content. "Ansible content hosting services"?! There are currently three main ways for users to obtain Ansible content:
server_list to contain multiple Galaxy-compatible servers, like Ansible Galaxy and Automation Hub, and then asks to install a collection, the Ansible Galaxy CLI will ask every server in the list, until one returns a successful result. The exact order seems to differ between versions, but this doesn't really matter for the issue at hand. Imagine someone wants to install the
redhat.satellite collection from Automation Hub (using
ansible-galaxy collection install redhat.satellite). Now if their configuration defines Galaxy as the first, and Automation Hub as the second server, Galaxy is always asked whether it has
redhat.satellite and only if the answer is negative, Automation Hub is asked. Today there is no
redhat namespace on Galaxy, but there is a
redhat user on GitHub, so… The canonical answer to this issue is to use a
requirements.yml file and setting the
source parameter. This parameter allows you to express "regardless which sources are configured, please fetch this collection from here". That's nice, but I think this not being the default syntax (contrary to what e.g. Bundler does) is a bad approach. Users might overlook the security implications, as the shorter syntax without the
sourcejust "magically" works. However, I think this is not even the main problem here. The documentation says: Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent collection. The install process will not search for a collection requirement in a different Galaxy instance. But as it turns out, the
source behavior was changed and now only applies to the exact collection it is set for, not for any dependencies this collection might have. For the sake of the example, imagine two collections:
test2 declares a dependency on test1 in its
galaxy.yml. Actually, no need to imagine, both collections are available in version 1.0.0 from galaxy.ansible.com, and
test1 version 2.0.0 is available from
galaxy-dev.ansible.com. Now, given our recent reading of the docs, we craft the following requirements.yml:
collections:
  - name: evgeni.test2
    version: '*'
    source: https://galaxy.ansible.com

In a perfect world, following the documentation, this would mean that both collections are fetched from
galaxy.ansible.com, right? However, this is not what
ansible-galaxy does. It will fetch
evgeni.test2 from the specified source, determine it has a dependency on
evgeni.test1 and fetch that from the "first" available source from the configuration. Take for example the following ansible.cfg:
[galaxy]
server_list = test_galaxy, release_galaxy, test_galaxy

[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/

[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/

And try to install collections, using the above requirements.yml:
% ansible-galaxy collection install -r requirements.yml -vvv
ansible-galaxy 2.9.27
config file = /home/evgeni/Devel/ansible-wtf/collections/ansible.cfg
configured module search path = ['/home/evgeni/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 3.10.0 (default, Oct 4 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]
Using /home/evgeni/Devel/ansible-wtf/collections/ansible.cfg as config file
Reading requirement file at '/home/evgeni/Devel/ansible-wtf/collections/requirements.yml'
Found installed collection theforeman.foreman:3.0.0 at '/home/evgeni/.ansible/collections/ansible_collections/theforeman/foreman'
Process install dependency map
Processing requirement collection 'evgeni.test2'
Collection 'evgeni.test2' obtained from server explicit_requirement_evgeni.test2 https://galaxy.ansible.com/api/
Opened /home/evgeni/.ansible/galaxy_token
Processing requirement collection 'evgeni.test1' - as dependency of evgeni.test2
Collection 'evgeni.test1' obtained from server test_galaxy https://galaxy-dev.ansible.com/api
Starting collection install process
Installing 'evgeni.test2:1.0.0' to '/home/evgeni/.ansible/collections/ansible_collections/evgeni/test2'
Downloading https://galaxy.ansible.com/download/evgeni-test2-1.0.0.tar.gz to /home/evgeni/.ansible/tmp/ansible-local-133/tmp9uqyjgki
Installing 'evgeni.test1:2.0.0' to '/home/evgeni/.ansible/collections/ansible_collections/evgeni/test1'
Downloading https://galaxy-dev.ansible.com/download/evgeni-test1-2.0.0.tar.gz to /home/evgeni/.ansible/tmp/ansible-local-133/tmp9uqyjgki

As you can see,
evgeni.test1 is fetched from
galaxy-dev.ansible.com, instead of
galaxy.ansible.com. Now, if those servers instead would be Galaxy and Automation Hub, and somebody managed to snag the
redhat namespace on Galaxy, I would now be getting the wrong stuff… Another problematic setup would be with Galaxy and on-prem Ansible Automation Platform, as you can have any namespace on the latter and these most certainly can clash with namespaces on public Galaxy. I have reported this behavior to Ansible Security on 2021-08-26, giving a 90 days disclosure deadline, which expired on 2021-11-24. So far, the response was that this is working as designed, to allow cross-source dependencies (e.g. a private collection referring to one on Galaxy) and there is an issue to update the docs to match the code. If users want to explicitly pin sources, they are supposed to name all dependencies and their sources in
requirements.yml. Alternatively they obviously can configure only one source in the configuration and always mirror all dependencies. I am not happy with this and I think this is terrible UX, explicitly inviting people to make mistakes.
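For illustration, explicitly pinning the source of both the collection and its dependency would mean a requirements.yml along these lines; this is a sketch using the example collections from above, and the exact entries depend on your setup:

collections:
  - name: evgeni.test2
    version: '*'
    source: https://galaxy.ansible.com
  - name: evgeni.test1
    version: '*'
    source: https://galaxy.ansible.com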